CN115601569A - Heterogeneous image optimization matching method and system based on improved PIIFD

Info

Publication number: CN115601569A
Application number: CN202211268522.0A
Authority: CN (China)
Prior art keywords: image, feature, point, matching, matched
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 王正兵, 冯旭刚, 章义忠, 吴玉秀
Original/Current Assignee: Anhui University of Technology AHUT
Application filed by: Anhui University of Technology AHUT

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/761 Proximity, similarity or dissimilarity measures
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Abstract

The invention belongs to the technical field of image matching and provides a heterogeneous image optimization matching method and system based on improved PIIFD. The method comprises the following steps: acquiring corner points on the image contours of the reference image and the image to be matched as feature points by using a curvature scale space algorithm; calculating the main direction of each feature point and the curvature radius of the corresponding contour, and selecting a rectangular area along the main direction centered on the feature point to construct a PIIFD feature descriptor; calculating the similarity of the PIIFD feature descriptors between corresponding feature points in the reference image and the image to be matched to obtain an initial matching result; and acquiring local transformation matrices, iteratively optimizing them, and eliminating abnormal matching points in the initial matching result based on the optimized local transformation matrices to obtain the final matching result. The method incurs low computation cost when handling scale changes and effectively improves the accuracy of heterogeneous image matching, particularly for non-rigid heterogeneous images.

Description

Heterogeneous image optimization matching method and system based on improved PIIFD
Technical Field
The invention relates to the technical field of image matching, in particular to a heterogeneous image optimization matching method and system based on improved PIIFD.
Background
A feature-based heterogeneous image matching method mainly comprises the following steps: first, features are extracted from the reference image and the image to be matched respectively; second, the extracted features are described and matched to obtain a number of matching point pairs; finally, the abnormal matching point pairs are filtered out to obtain the required matching result.
In the feature extraction stage, considering the large grayscale differences between corresponding regions of heterogeneous images and the scale changes between them, the SI-PIIFD algorithm is mostly adopted. It accounts for the grayscale differences between multimodal retinal images and overcomes them with partial intensity invariant feature descriptors (i.e., PIIFD descriptors). Meanwhile, pixel-level matching of heterogeneous images of different sizes is achieved by computing a PIIFD descriptor for each feature point at each of several given scales, which addresses the influence of scale change between heterogeneous images. However, computing a PIIFD descriptor for every feature point at multiple scales greatly increases the computation of the whole algorithm, so the matching process is time-consuming and inefficient.
In the abnormal-point filtering stage, a global transformation model is usually adopted to represent the transformation relationship between the image pair when filtering abnormal matching point pairs. However, for non-rigid heterogeneous images it is difficult to accurately describe the transformation between the images with a single global transformation model, so the matching result is not ideal.
Disclosure of Invention
The invention aims to provide a heterogeneous image optimization matching method and system based on improved PIIFD (partial intensity invariant feature descriptor), so as to solve the technical problems of existing heterogeneous image matching algorithms: high computation cost when handling the scale change between images in the early stage, and low accuracy when matching the images in the later stage.
In order to achieve the above purpose, the invention provides the following technical scheme:
a heterogeneous image optimization matching method based on improved PIIFD comprises the following steps:
respectively extracting the image contours of a reference image and an image to be matched, and acquiring the corner points on each image contour as feature points c_i by using a curvature scale space algorithm;
calculating the curvature radius r_i = 1/K_i of each feature point c_i on the image contour, respectively calculating the main direction of each feature point c_i in the reference image and the image to be matched by using the maximum moments within a local area, and then selecting a rectangular area of size 4F·r_i × 4F·r_i centered on the feature point c_i along its main direction to construct the PIIFD feature descriptor; wherein K_i is the contour curvature corresponding to the feature point c_i and F is the magnification factor;
calculating the similarity of the PIIFD feature descriptors between corresponding feature points in the reference image and the image to be matched to perform bidirectional feature matching and obtain an initial matching result;
equally dividing the reference image into H × H grids, setting circular areas S_i of equal radius with the center of each grid as the circle center, and obtaining each feature point p_j^i within the circular area S_i and the correspondingly matched feature point q_j^i in the image to be matched to form a matching point pair set {(p_j^i, q_j^i)}, j = 1, ..., N_i, and then calculating a local transformation matrix based on the matching point pair set; wherein N_i is the total number of feature points within the circular area S_i, and (p_j^i, q_j^i) is any matching point pair;
moving the circle center of each circular area S_i incrementally by the circle-center moving vector Δ_i = (1/Z_i) Σ_j e_j u_j and iteratively updating the local transformation matrices until the Euclidean distance between the circle centers of two adjacent updates is smaller than a preset distance or the number of iterations is larger than a preset number, so as to obtain H × H optimized local transformation matrices; wherein e_j is the matching error of the j-th matching point pair in the matching point pair set, u_j is the vector from the current circle center to the feature point p_j^i, and Z_i is a normalization factor;
and selecting a corresponding optimized local transformation matrix to screen out abnormal matching point pairs in the initial matching result based on a nearest neighbor algorithm so as to obtain a final matching result.
Further, respectively extracting the image contours of the reference image and the image to be matched and acquiring the corner points on each image contour as feature points c_i by using a curvature scale space algorithm comprises the following steps:
respectively convolving the reference image and the image to be matched with a Log-Gabor filter, and respectively calculating the phase consistency of each pixel point in the reference image and the image to be matched in each direction based on Kovesi's improved algorithm:
PC(x, y, θ_o) = Σ_n W_o(x, y) ⌊A_no(x, y) ΔΦ_no(x, y) - T⌋ / (Σ_n A_no(x, y) + ε)
wherein o and n are respectively the direction index and the scale index of the Log-Gabor filter, PC(x, y, θ_o) is the phase consistency at image coordinates (x, y) in direction o, θ_o is the angle corresponding to direction o, W_o is the weight coefficient of the frequency spread, A_no is the amplitude of the convolution of the Log-Gabor filter with the image at scale n, ΔΦ_no is the phase difference function, T is the noise threshold, ε is a small constant that prevents division by zero, and ⌊·⌋ returns its argument when positive and zero otherwise;
calculating the maximum moment of phase consistency in each direction at each pixel point in the reference image and the image to be matched, and forming the corresponding maximum moment maps;
after each maximum moment image is subjected to non-maximum value suppression, respectively acquiring the image contours in the reference image and the image to be matched through a contour tracking algorithm;
and extracting corner points on each image contour as feature points by adopting a curvature scale space algorithm.
Further, respectively calculating the main direction of each feature point c_i in the reference image and the image to be matched by using the maximum moments within a local area comprises:
selecting the feature points c_{i-1} and c_{i+1} adjacent to the feature point c_i; wherein the coordinates of the feature point c_i are denoted (x_i, y_i), the coordinates of the feature point c_{i-1} are denoted (x_{i-1}, y_{i-1}), and the coordinates of the feature point c_{i+1} are denoted (x_{i+1}, y_{i+1});
determining a local area centered on the feature point c_i with side length l, where l is computed from the coordinates of the adjacent feature points c_{i-1}, c_i and c_{i+1};
calculating the principal direction vector (u_i, v_i) of the feature point c_i by accumulating, over the M_l pixel points of the local area, the vectors v_t weighted by w_t; wherein u_i and v_i are the element values of the principal direction vector, M_l is the total number of pixel points in the local area, v_t is the vector from the feature point c_i to the t-th pixel point in the local area, and w_t is the weight corresponding to v_t, equal to the maximum moment at that pixel point;
confirming the main direction of the feature point c_i as θ_i = arctan(v_i / u_i).
Further, the calculating the similarity of the PIIFD feature descriptors between corresponding feature points in the reference image and the image to be matched for bidirectional feature matching includes:
calculating, based on the descriptors d_1 and d_2, the similarity of the PIIFD feature descriptors between each corresponding feature point pair;
wherein d_1 is the PIIFD feature descriptor of the feature point of the pair located in the reference image, and d_2 is the PIIFD feature descriptor of the feature point of the pair located in the image to be matched.
Further, equally dividing the reference image into H × H grids and setting circular areas S_i of equal radius with the center of each grid as the circle center comprises:
setting each circular area S_i with the grid length as the radius.
A heterogeneous image optimized matching system based on improved PIIFD, comprising:
a characteristic point extraction module for respectively extracting the image profiles of the reference image and the image to be matched and acquiring the corner points on each image profile as characteristic points c by using a curvature scale space algorithm i
A size invariance processing module for calculating each of the feature points c i Radius of curvature at the image contour
Figure BDA0003894106520000043
And respectively calculating each characteristic point c in the reference image and the image to be matched by utilizing the maximum moment in the local area i Main direction of (A), goWith the characteristic point c i Selecting a rectangular area 4Fr along its main direction for the center i ×4Fr i Constructing PIIFD feature descriptors; wherein, K i Is a characteristic point c i The corresponding contour curvature, F is the magnification factor;
the pre-matching module is used for calculating the similarity of PIIFD feature descriptors between corresponding feature points in the reference image and the image to be matched so as to carry out bidirectional feature matching and obtain an initial matching result;
an initialization module for dividing the reference image into H × H grids, and setting circular areas S with equal radius by taking the center of each grid as the center of a circle i And obtaining the circular region S i Each characteristic point of
Figure BDA0003894106520000044
And the corresponding matched characteristic points in the image to be matched
Figure BDA0003894106520000045
To form matching point pair sets
Figure BDA0003894106520000051
Then calculating to obtain a local transformation matrix based on the matching point pair set; wherein the content of the first and second substances,
Figure BDA0003894106520000052
is a circular region S i The total number of feature points within the feature points,
Figure BDA0003894106520000053
any matching point pair;
an iterative optimization module for moving the circle center of each circular area S_i incrementally by the circle-center moving vector Δ_i = (1/Z_i) Σ_j e_j u_j and iteratively updating the local transformation matrices until the Euclidean distance between the circle centers of two adjacent updates is smaller than a preset distance or the number of iterations is larger than a preset number, so as to obtain H × H optimized local transformation matrices; wherein e_j is the matching error of the j-th matching point pair in the matching point pair set, u_j is the vector from the current circle center to the feature point p_j^i, and Z_i is a normalization factor;
and the matching module is used for screening the abnormal matching point pairs in the initial matching result by selecting the corresponding optimized local transformation matrix based on a nearest neighbor algorithm to obtain a final matching result.
Further, the system comprises:
a phase consistency calculation module, configured to respectively convolve the reference image and the image to be matched with a Log-Gabor filter and calculate the phase consistency of each pixel point in the reference image and the image to be matched in each direction based on Kovesi's improved algorithm:
PC(x, y, θ_o) = Σ_n W_o(x, y) ⌊A_no(x, y) ΔΦ_no(x, y) - T⌋ / (Σ_n A_no(x, y) + ε)
wherein o and n are respectively the direction index and the scale index of the Log-Gabor filter, PC(x, y, θ_o) is the phase consistency at image coordinates (x, y) in direction o, θ_o is the angle corresponding to direction o, W_o is the weight coefficient of the frequency spread, A_no is the amplitude of the convolution of the Log-Gabor filter with the image at scale n, ΔΦ_no is the phase difference function, T is the noise threshold, ε is a small constant that prevents division by zero, and ⌊·⌋ returns its argument when positive and zero otherwise;
a maximum moment map obtaining module, configured to calculate the maximum moment of phase consistency in each direction at each pixel point in the reference image and the image to be matched and form the corresponding maximum moment maps;
the image contour extraction module is used for respectively obtaining the image contours of the reference image and the image to be matched through a contour tracking algorithm after applying non-maximum suppression to each maximum moment map;
and the corner point extraction module is used for extracting the corner points on the image contours as feature points by adopting a curvature scale space algorithm.
Further, the system comprises:
a neighboring point selection module for selecting the feature points c_{i-1} and c_{i+1} adjacent to the feature point c_i; wherein the coordinates of the feature point c_i are denoted (x_i, y_i), the coordinates of the feature point c_{i-1} are denoted (x_{i-1}, y_{i-1}), and the coordinates of the feature point c_{i+1} are denoted (x_{i+1}, y_{i+1});
a local region determination module for determining a local area centered on the feature point c_i with side length l, where l is computed from the coordinates of the adjacent feature points c_{i-1}, c_i and c_{i+1};
a principal direction vector calculation module for calculating the principal direction vector (u_i, v_i) of the feature point c_i by accumulating, over the M_l pixel points of the local area, the vectors v_t weighted by w_t; wherein u_i and v_i are the element values of the principal direction vector, M_l is the total number of pixel points in the local area, v_t is the vector from the feature point c_i to the t-th pixel point in the local area, and w_t is the weight corresponding to v_t, equal to the maximum moment at that pixel point;
a main direction calculation module for confirming the main direction of the feature point c_i as θ_i = arctan(v_i / u_i).
Further, the system comprises:
a similarity calculation module for calculating, based on the descriptors d_1 and d_2, the similarity of the PIIFD feature descriptors between each corresponding feature point pair; wherein d_1 is the PIIFD feature descriptor of the feature point of the pair located in the reference image, and d_2 is the PIIFD feature descriptor of the feature point of the pair located in the image to be matched.
Beneficial effects:
according to the technical scheme, the invention provides the heterogeneous image optimization matching method based on the improved PIIFD, so as to solve the defects in heterogeneous image matching in the prior art.
In the feature extraction stage, aiming at the defect of high calculation cost when the problem of scale change between images is solved in the existing heterogeneous image matching, the PIIFD feature descriptor is constructed by introducing the curvature radius. Firstly, image contours of a reference image and an image to be matched are respectively obtained, and the corner points in each image contour are used as feature points to improve the defect that the consistency of the feature points extracted in the existing SI-PIIFD algorithm is not high, so that the number of matched feature point pairs in the final matching result is small. Secondly, considering the corresponding relation between the curvature radius of the image contour corner point and the image scale, calculating each characteristic point c i Radius of curvature r on image contour i And its main direction; at this time, the feature point c will be used i A rectangular area 4Fr centered along its main direction i ×4Fr i Determining the description range of PIIFD feature descriptors; namely, the variation of the number of pixels in the PIIFD feature descriptor description range brought by the scale variation of the different-source image is introduced into a pair of PIIFD feature descriptors corresponding to a pair of feature points in the reference image and the image to be matched. At the moment, only one PIIFD feature descriptor is required to be constructed for one feature point, so that the scale problem in the matching of heterogeneous images can be solved. Compared with the prior art that a plurality of PIIFD feature descriptors in different description ranges need to be constructed for one feature point in a reference image or an image to be matched, and the PIIFD feature descriptors are traversed to find the PIIFD feature descriptors matched with the two images in the description ranges, the calculation cost is greatly reduced.Thereby improving the efficiency of image matching.
In the stage of filtering the abnormal matching point pairs, the distortion characteristics of the non-rigid heterogeneous image are considered, the reference image is divided into a plurality of grids, and a local transformation matrix is constructed on the basis of each grid so as to carry out consistency check on the initial matching result. Meanwhile, in order to improve matching accuracy, an initialized local transformation matrix is constructed by taking a circular area formed by taking the center of the grid as a circle center as a local area (corresponding to the local transformation matrix), and a circle center moving vector is constructed by considering matching errors among characteristic point pairs in the circular area so as to update the circular area, thereby realizing optimization of the local transformation matrix. At the moment, the matching accuracy of the characteristic point pairs in the corresponding circular area is improved through each optimized local transformation matrix, and further the matching accuracy of the characteristic point pairs in the whole heterogeneous image matching process is realized.
It should be understood that all combinations of the foregoing concepts and additional concepts described in greater detail below can be considered as part of the inventive subject matter of this disclosure unless such concepts are mutually inconsistent.
The foregoing and other aspects, embodiments and features of the present teachings will be more fully understood from the following description taken in conjunction with the accompanying drawings. Additional aspects of the present invention, such as features and/or advantages of exemplary embodiments, will be apparent from the description which follows, or may be learned by practice of the specific embodiments according to the teachings of the present invention.
Drawings
The drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures may be represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. Embodiments of various aspects of the present invention will now be described, by way of example, with reference to the accompanying drawings, in which:
FIG. 1 is a flowchart of the heterogeneous image optimization matching method based on improved PIIFD provided by this embodiment;
FIG. 2 is a flowchart of the feature point extraction in FIG. 1;
FIG. 3 is a flowchart of determining the main direction of a feature point in FIG. 1;
FIG. 4 is a flowchart of computing the similarity of the PIIFD feature descriptors between feature point pairs in FIG. 1.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the invention without inventive step, are within the scope of protection of the invention. Unless defined otherwise, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this invention belongs.
The use of "first," "second," and similar terms in the description and claims of the present application does not denote any order, quantity, or importance; such terms are only used to distinguish one element from another. Similarly, the singular forms "a," "an," or "the" do not denote a limitation of quantity but rather the presence of at least one, unless the context clearly dictates otherwise. The terms "comprises," "comprising," and the like mean that the element or item preceding the term encompasses the features, integers, steps, operations, elements and/or components listed after the term, without precluding the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. "Upper," "lower," "left," "right," and the like are used only to indicate relative positional relationships; when the absolute position of the described object changes, the relative positional relationships may change accordingly.
Although the existing SI-PIIFD algorithm for heterogeneous image matching can handle the matching anomalies caused by the scale change between the reference image and the image to be matched, it has to construct several PIIFD feature descriptors at different scales for each feature point, so the computation cost is high. Moreover, it represents the transformation relationship between the feature point pairs of the image pair with a single global transformation model when filtering abnormal matching point pairs, which leads to low accuracy when matching heterogeneous images, especially non-rigid heterogeneous images. Based on this, the present embodiment aims to provide a heterogeneous image optimization matching method and system based on improved PIIFD to overcome the above defects of existing heterogeneous image matching.
The following describes a method for matching an optimized heterogeneous image based on improved PIIFD in detail with reference to the accompanying drawings.
As shown in fig. 1, the method includes:
step S102, respectively extracting image contours of a reference image and an image to be matched, and acquiring corner points on each image contour as feature points c by using a curvature scale space algorithm i
The conventional SI-PIIFD algorithm uses the Harris algorithm to detect feature points when matching heterogeneous images, so the consistency of the detection results is not high and few feature point pairs remain in the final matching result. Although there are methods that improve the consistency of the detection results by using contour corner points as feature points, they detect the image contour directly with the Canny algorithm, so the drawback of grayscale difference is re-introduced even as the low-consistency problem is overcome. Therefore, as shown in FIG. 2, step S102 addresses both defects, the grayscale difference and the low consistency of the detection results, through the following process.
Step S102.2, respectively convolving the reference image and the image to be matched with a Log-Gabor filter, and respectively calculating the phase consistency of each pixel point in the reference image and the image to be matched in each direction based on Kovesi's improved algorithm:
PC(x, y, θ_o) = Σ_n W_o(x, y) ⌊A_no(x, y) ΔΦ_no(x, y) - T⌋ / (Σ_n A_no(x, y) + ε)
wherein o and n are respectively the direction index and the scale index of the Log-Gabor filter, PC(x, y, θ_o) is the phase consistency at image coordinates (x, y) in direction o, θ_o is the angle corresponding to direction o, W_o is the weight coefficient of the frequency spread, A_no is the amplitude of the convolution of the Log-Gabor filter with the image at scale n, ΔΦ_no is the phase difference function, T is the noise threshold, ε is a small constant that prevents division by zero, and ⌊·⌋ returns its argument when positive and zero otherwise.
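As a concrete illustration of the weighted sum above, the following minimal sketch evaluates PC(x, y, θ_o) for one direction from precomputed Log-Gabor responses; the array names, shapes and the default value of ε are assumptions made for illustration and are not taken from the patent.

```python
import numpy as np

def phase_congruency_direction(amplitudes, phase_dev, weight, noise_T, eps=1e-4):
    """Per-direction phase-consistency sum for one direction o.

    amplitudes: (n_scales, H, W) array, assumed A_no from the Log-Gabor filter bank.
    phase_dev:  (n_scales, H, W) array, assumed phase-difference term delta_Phi_no.
    weight:     (H, W) array, the frequency-spread weight W_o.
    noise_T:    scalar noise threshold T.
    eps:        small constant keeping the denominator non-zero.
    """
    # Floor-bracket term: keep only the energy that exceeds the noise threshold.
    energy = weight * np.maximum(amplitudes * phase_dev - noise_T, 0.0)
    numerator = energy.sum(axis=0)                 # sum over scales n
    denominator = amplitudes.sum(axis=0) + eps     # total amplitude plus epsilon
    return numerator / denominator
```

The clipping at zero plays the role of the ⌊·⌋ operation, discarding responses below the noise threshold T.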
Step S102.4, calculating the maximum moment of phase consistency in each direction at each pixel point in the reference image and the image to be matched, and forming the corresponding maximum moment maps.
And step S102.6, after carrying out non-maximum value suppression on each maximum moment diagram, respectively obtaining the image contours in the reference image and the image to be matched through a contour tracking algorithm.
And S102.8, extracting corner points on each image contour as feature points by adopting a curvature scale space algorithm.
As can be seen from steps S102.2 to S102.8, this embodiment computes a phase consistency model of the input image to obtain a phase-consistency maximum moment map that represents the positions of the contour points in the reference image and the image to be matched, and improves the accuracy of the extracted image contour by applying a non-maximum suppression algorithm before contour extraction. The method therefore yields highly consistent feature point detection results and effectively reduces the influence of the grayscale difference between heterogeneous images on the subsequent feature point matching.
Step S104, calculating the curvature radius r_i = 1/K_i of each feature point c_i on the image contour, respectively calculating the main direction of each feature point c_i in the reference image and the image to be matched by using the maximum moments within a local area, and then selecting a rectangular area of size 4F·r_i × 4F·r_i centered on the feature point c_i along its main direction to construct the PIIFD feature descriptor; wherein K_i is the contour curvature corresponding to the feature point c_i and F is the magnification factor.
In this step, the consistent relationship between the curvature radius and the image scale change is exploited, so when constructing the PIIFD feature descriptor its description range is determined as a rectangular area 4F·r_i × 4F·r_i centered on the feature point along its main direction, whose side length is positively correlated with the curvature radius. The pixel-level change caused by the image scale is thus built into the PIIFD feature descriptor, and only one PIIFD feature descriptor needs to be constructed for each of a pair of corresponding feature points in the reference image and the image to be matched, which greatly reduces the computation needed to handle the influence of image scale change.
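To make the scale argument explicit, the following short derivation assumes that, locally around the feature point, the image to be matched is a uniformly scaled copy of the reference image with scale factor s:

```latex
K_i' = \frac{K_i}{s}, \qquad
r_i' = \frac{1}{K_i'} = s\,r_i, \qquad
4F r_i' \times 4F r_i' = (s \cdot 4F r_i) \times (s \cdot 4F r_i)
```

The description window therefore covers the same physical neighborhood in both images, which is why a single PIIFD descriptor per feature point suffices.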
In this embodiment, the magnification factor F ranges from 1 to 4, and the best matching effect is obtained when F is 2.
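A minimal sketch of cutting out this description window is given below; it assumes the main direction angle, the contour curvature K_i and a grayscale image are already available, uses F = 2 as suggested above, and treats the resampling resolution out_size as an illustrative choice. The PIIFD histogramming inside the window is not shown.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_descriptor_patch(image, center, main_angle, curvature, F=2.0, out_size=16):
    """Sample the rotated square window of side 4*F*r_i around a feature point."""
    r_i = 1.0 / max(abs(curvature), 1e-6)          # curvature radius r_i = 1/K_i
    side = 4.0 * F * r_i                           # window side length 4*F*r_i
    # Regular grid over the window in its own frame.
    lin = np.linspace(-side / 2.0, side / 2.0, out_size)
    gx, gy = np.meshgrid(lin, lin)
    # Rotate the grid so its x-axis follows the feature point's main direction.
    c, s = np.cos(main_angle), np.sin(main_angle)
    xs = center[0] + c * gx - s * gy               # center = (x_i, y_i)
    ys = center[1] + s * gx + c * gy
    # Bilinear sampling of the image at the rotated grid positions.
    patch = map_coordinates(image.astype(np.float64), [ys, xs], order=1, mode='nearest')
    return patch
```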
Specifically, as shown in FIG. 3, the main direction of any feature point c_i is determined by the following steps:
step S104.2, selecting and connecting the characteristic point c i Adjacent feature points c i-1 、c i+1 (ii) a Wherein, the characteristic point c i Is noted as (x) i ,y i ) Characteristic point c i-1 Is noted as (x) i-1 ,y i-1 ) Characteristic point c i+1 Is noted as (x) i+1 ,y i+1 );
Step S104.4, by
Figure BDA0003894106520000111
Determining a feature point c i A local area with l as the side length as the center;
step S104.6, by
Figure BDA0003894106520000112
Calculating the feature point c i A principal direction vector of; wherein u is i And v i Is a vector v i Value of element(s), M l Is the total number of pixels in the local area, v t Is a characteristic point c i To what is shownThe vector, w, of the t-th pixel point in the local area t Is v is t A corresponding weight equal to the maximum moment at the pixel point;
step S104.8, by
Figure BDA0003894106520000113
Confirming the feature point c i The main direction of (a).
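The sketch below implements steps S104.4 to S104.8 under the assumption (the exact formulas are given only as images in the source) that the principal direction vector is the maximum-moment-weighted sum of the vectors from c_i to every pixel of the square local area, and that the main direction is the angle of that vector.

```python
import numpy as np

def main_direction(max_moment, center, side_l):
    """Assumed main-direction computation for one feature point c_i.

    max_moment: 2-D phase-consistency maximum moment map (supplies the weights w_t).
    center:     feature point c_i as (x_i, y_i).
    side_l:     side length l of the square local area around c_i.
    """
    h, w = max_moment.shape
    half = int(round(side_l / 2.0))
    x0, x1 = max(0, int(center[0]) - half), min(w, int(center[0]) + half + 1)
    y0, y1 = max(0, int(center[1]) - half), min(h, int(center[1]) + half + 1)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    wts = max_moment[y0:y1, x0:x1]              # w_t = maximum moment at each pixel
    # Accumulate the vectors v_t from c_i to each pixel, weighted by w_t.
    u_i = np.sum(wts * (xs - center[0]))
    v_i = np.sum(wts * (ys - center[1]))
    return np.arctan2(v_i, u_i)                 # main direction angle of c_i
```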
Step S106, calculating the similarity of the PIIFD feature descriptors between corresponding feature point pairs in the reference image and the image to be matched to perform bidirectional feature matching and obtain an initial matching result.
As shown in fig. 4, the similarity calculation of the PIIFD feature descriptors between feature point pairs is performed based on the following process:
step S106.2 based on
Figure BDA0003894106520000114
Calculating the similarity of PIIFD feature descriptors between the corresponding feature points;
wherein d is 1 PIIFD feature descriptors for feature points of said pair of feature points located in said reference image, d 2 And aligning PIIFD feature descriptors of the feature points in the image to be matched with the feature points.
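A sketch of the bidirectional (mutual nearest neighbor) matching of step S106 follows; the normalized inner product of d_1 and d_2 is assumed as the descriptor similarity, since the patent's exact similarity formula is given only as an image.

```python
import numpy as np

def bidirectional_match(desc_ref, desc_mov):
    """Mutual-nearest-neighbor matching of PIIFD descriptors.

    desc_ref, desc_mov: (N1, D) and (N2, D) descriptor arrays for the reference
    image and the image to be matched.
    """
    a = desc_ref / (np.linalg.norm(desc_ref, axis=1, keepdims=True) + 1e-12)
    b = desc_mov / (np.linalg.norm(desc_mov, axis=1, keepdims=True) + 1e-12)
    sim = a @ b.T                                # pairwise descriptor similarity
    fwd = sim.argmax(axis=1)                     # best match in the moving image
    bwd = sim.argmax(axis=0)                     # best match in the reference image
    # Keep only pairs that choose each other in both directions.
    return [(i, j) for i, j in enumerate(fwd) if bwd[j] == i]
```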
Step S108, equally dividing the reference image into H × H grids, setting circular areas S_i of equal radius with the center of each grid as the circle center, and obtaining each feature point p_j^i within the circular area S_i and the correspondingly matched feature point q_j^i in the image to be matched to form a matching point pair set {(p_j^i, q_j^i)}, j = 1, ..., N_i, and then calculating a local transformation matrix based on the matching point pair set; wherein N_i is the total number of feature points within the circular area S_i, and (p_j^i, q_j^i) is any matching point pair.
In this step, each circular area S_i is set with the grid length as the radius. The above operations sequentially initialize each circle center, the circular area corresponding to that circle center (i.e., the local area corresponding to a local transformation matrix), the matching point pair set corresponding to that circular area, and the local transformation matrix corresponding to that matching point pair set.
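The following sketch initializes the grid, the circular areas and one local transformation per area as described in step S108; the transformation model is assumed to be affine and fitted by least squares, since the patent does not spell out the model, and matches_ref / matches_mov are assumed to be N x 2 arrays of matched coordinates from the initial matching result.

```python
import numpy as np

def init_local_transforms(matches_ref, matches_mov, image_shape, H=4):
    """Initialize H*H circle centers and their local (assumed affine) transforms."""
    h, w = image_shape[:2]                       # reference image size
    cell_w, cell_h = w / H, h / H
    radius = max(cell_w, cell_h)                 # circle radius = grid length
    centers, transforms = [], []
    for gy in range(H):
        for gx in range(H):
            center = np.array([(gx + 0.5) * cell_w, (gy + 0.5) * cell_h])
            inside = np.linalg.norm(matches_ref - center, axis=1) <= radius
            src, dst = matches_ref[inside], matches_mov[inside]
            if len(src) < 3:                     # too few pairs for an affine fit
                T = None
            else:
                A = np.hstack([src, np.ones((len(src), 1))])   # [x y 1]
                T, *_ = np.linalg.lstsq(A, dst, rcond=None)    # 3x2 affine matrix
            centers.append(center)
            transforms.append(T)
    return np.array(centers), transforms
```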
Step S110, moving the circle center of each circular area S_i incrementally by the circle-center moving vector Δ_i = (1/Z_i) Σ_j e_j u_j and iteratively updating the local transformation matrices until the Euclidean distance between the circle centers of two adjacent updates is smaller than a preset distance or the number of iterations is larger than a preset number, so as to obtain H × H optimized local transformation matrices; wherein e_j is the matching error of the j-th matching point pair in the matching point pair set, u_j is the vector from the current circle center to the feature point p_j^i, and Z_i is a normalization factor.
To make the consistency check more accurate, this step introduces the matching error of each feature point pair to optimize each initialized result, finally obtaining H × H optimized local transformation matrices.
In a specific implementation, the local transformation matrices are iteratively updated until the Euclidean distance between the circle centers of two adjacent updates is less than 5 or the number of iterations exceeds 4, so as to obtain the H × H optimized local transformation matrices.
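A sketch of the iterative optimization of step S110 under the assumptions already stated (affine local model, error-weighted moving vector normalized by the sum of the errors) is given below; the thresholds of 5 pixels and 4 iterations follow the specific implementation mentioned above.

```python
import numpy as np

def refine_center(center, matches_ref, matches_mov, radius,
                  min_shift=5.0, max_iter=4):
    """Iteratively move one circle center and refit its local transform."""
    T = None
    for _ in range(max_iter):
        inside = np.linalg.norm(matches_ref - center, axis=1) <= radius
        src, dst = matches_ref[inside], matches_mov[inside]
        if len(src) < 3:
            break
        A = np.hstack([src, np.ones((len(src), 1))])
        T, *_ = np.linalg.lstsq(A, dst, rcond=None)     # refit the local transform
        err = np.linalg.norm(A @ T - dst, axis=1)       # matching errors e_j
        u = src - center                                # vectors u_j to the points
        shift = (err[:, None] * u).sum(axis=0) / (err.sum() + 1e-12)
        center = center + shift
        if np.linalg.norm(shift) < min_shift:           # Euclidean distance < 5
            break
    return center, T
```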
Step S112, selecting a corresponding optimized local transformation matrix to screen out the abnormal matching point pairs in the initial matching result based on a nearest neighbor algorithm to obtain the final matching result.
Specifically, based on the above steps, H × H optimized local transformation matrices and their corresponding circle centers are obtained; the distance from each matching point in the reference image to each circle center is calculated, the circle center corresponding to each matching point is found according to the nearest neighbor principle, and a consistency check is performed with the corresponding optimized local transformation matrix to remove the point pairs with large matching errors.
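The final filtering of step S112 can then be sketched as follows; err_thresh is an illustrative pixel tolerance, not a value taken from the patent, and centers / transforms are assumed to be the optimized circle centers and local transforms produced by the sketches above.

```python
import numpy as np

def filter_matches(matches_ref, matches_mov, centers, transforms, err_thresh=3.0):
    """Keep only matches consistent with the local transform of their nearest center."""
    kept = []
    for p, q in zip(matches_ref, matches_mov):
        idx = int(np.argmin(np.linalg.norm(centers - p, axis=1)))  # nearest center
        T = transforms[idx]
        if T is None:                             # no reliable transform in this area
            continue
        pred = np.append(p, 1.0) @ T              # map p into the image to be matched
        if np.linalg.norm(pred - q) <= err_thresh:  # consistency check
            kept.append((p, q))
    return kept
```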
As can be seen from the above process, the heterogeneous image matching method provided by this embodiment introduces a phase consistency model and a non-maximum suppression algorithm to extract the image contours, so the feature point detection results are highly consistent and the influence of the grayscale difference between heterogeneous images on subsequent feature point matching is effectively reduced. The contour curvature of each feature point is recorded during feature point extraction, and the corresponding curvature radius is then calculated to determine the description range of the PIIFD feature descriptor, so the influence of the image scale change between heterogeneous images can be overcome by computing only one PIIFD feature descriptor per feature point. Meanwhile, computing several local transformation models to represent the transformation relationship between the image pair makes the consistency check more accurate.
The programs described above may be run on a processor or may also be stored in memory (also referred to as computer-readable storage media), which includes permanent and non-permanent, removable and non-removable media that implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media such as modulated data signals and carrier waves.
These computer programs may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks, and corresponding steps may be implemented by different modules.
The embodiment also provides a heterogeneous image optimization matching system based on the improved PIIFD. The system comprises:
a feature point extraction module for respectively extracting the image contours of the reference image and the image to be matched and acquiring the corner points on each image contour as feature points c_i by using a curvature scale space algorithm;
a scale invariance processing module for calculating the curvature radius r_i = 1/K_i of each feature point c_i on the image contour, respectively calculating the main direction of each feature point c_i in the reference image and the image to be matched by using the maximum moments within a local area, and then selecting a rectangular area of size 4F·r_i × 4F·r_i centered on the feature point c_i along its main direction to construct the PIIFD feature descriptor; wherein K_i is the contour curvature corresponding to the feature point c_i and F is the magnification factor;
a pre-matching module for calculating the similarity of the PIIFD feature descriptors between corresponding feature points in the reference image and the image to be matched to carry out bidirectional feature matching and obtain an initial matching result;
an initialization module for equally dividing the reference image into H × H grids, setting circular areas S_i of equal radius with the center of each grid as the circle center, obtaining each feature point p_j^i within the circular area S_i and the correspondingly matched feature point q_j^i in the image to be matched to form a matching point pair set {(p_j^i, q_j^i)}, j = 1, ..., N_i, and then calculating a local transformation matrix based on the matching point pair set; wherein N_i is the total number of feature points within the circular area S_i, and (p_j^i, q_j^i) is any matching point pair;
an iterative optimization module for moving the circle center of each circular area S_i incrementally by the circle-center moving vector Δ_i = (1/Z_i) Σ_j e_j u_j and iteratively updating the local transformation matrices until the Euclidean distance between the circle centers of two adjacent updates is smaller than a preset distance or the number of iterations is larger than a preset number, so as to obtain H × H optimized local transformation matrices; wherein e_j is the matching error of the j-th matching point pair in the matching point pair set, u_j is the vector from the current circle center to the feature point p_j^i, and Z_i is a normalization factor;
and the matching module is used for screening the abnormal matching point pairs in the initial matching result by selecting the corresponding optimized local transformation matrix based on a nearest neighbor algorithm to obtain a final matching result.
The system is used for implementing the steps of the method, and therefore, the steps have already been described, and are not described herein again.
For example, the system further comprises:
a phase consistency calculation module, configured to respectively convolve the reference image and the image to be matched with a Log-Gabor filter and calculate the phase consistency of each pixel point in the reference image and the image to be matched in each direction based on Kovesi's improved algorithm:
PC(x, y, θ_o) = Σ_n W_o(x, y) ⌊A_no(x, y) ΔΦ_no(x, y) - T⌋ / (Σ_n A_no(x, y) + ε)
wherein o and n are respectively the direction index and the scale index of the Log-Gabor filter, PC(x, y, θ_o) is the phase consistency at image coordinates (x, y) in direction o, θ_o is the angle corresponding to direction o, W_o is the weight coefficient of the frequency spread, A_no is the amplitude of the convolution of the Log-Gabor filter with the image at scale n, ΔΦ_no is the phase difference function, T is the noise threshold, ε is a small constant that prevents division by zero, and ⌊·⌋ returns its argument when positive and zero otherwise;
a maximum moment map obtaining module, configured to calculate the maximum moment of phase consistency in each direction at each pixel point in the reference image and the image to be matched and form the corresponding maximum moment maps;
the image contour extraction module is used for respectively obtaining the image contours of the reference image and the image to be matched through a contour tracking algorithm after applying non-maximum suppression to each maximum moment map;
and the corner point extraction module is used for extracting the corner points on the image contours as feature points by adopting a curvature scale space algorithm.
For example, the system further comprises:
a neighboring point selection module for selecting the feature points c_{i-1} and c_{i+1} adjacent to the feature point c_i; wherein the coordinates of the feature point c_i are denoted (x_i, y_i), the coordinates of the feature point c_{i-1} are denoted (x_{i-1}, y_{i-1}), and the coordinates of the feature point c_{i+1} are denoted (x_{i+1}, y_{i+1});
a local region determination module for determining a local area centered on the feature point c_i with side length l, where l is computed from the coordinates of the adjacent feature points c_{i-1}, c_i and c_{i+1};
a principal direction vector calculation module for calculating the principal direction vector (u_i, v_i) of the feature point c_i by accumulating, over the M_l pixel points of the local area, the vectors v_t weighted by w_t; wherein u_i and v_i are the element values of the principal direction vector, M_l is the total number of pixel points in the local area, v_t is the vector from the feature point c_i to the t-th pixel point in the local area, and w_t is the weight corresponding to v_t, equal to the maximum moment at that pixel point;
a main direction calculation module for confirming the main direction of the feature point c_i as θ_i = arctan(v_i / u_i).
For example, the system further comprises:
a similarity calculation module for calculating, based on the descriptors d_1 and d_2, the similarity of the PIIFD feature descriptors between each corresponding feature point pair; wherein d_1 is the PIIFD feature descriptor of the feature point of the pair located in the reference image, and d_2 is the PIIFD feature descriptor of the feature point of the pair located in the image to be matched.
Because the system is built on the above method, in specific implementations it likewise incurs low computation cost when handling the image scale change problem and effectively improves the accuracy of matching heterogeneous images, particularly non-rigid heterogeneous images.
Although the present invention has been described with reference to the preferred embodiments, it is not intended to be limited thereto. Those skilled in the art can make various changes and modifications without departing from the spirit and scope of the invention. Therefore, the protection scope of the present invention should be determined by the appended claims.

Claims (9)

1. A heterogeneous image optimization matching method based on improved PIIFD is characterized by comprising the following steps:
respectively extracting the image contours of a reference image and an image to be matched, and acquiring the corner points on each image contour as feature points c_i by using a curvature scale space algorithm;
calculating the curvature radius r_i = 1/K_i of each feature point c_i on the image contour, respectively calculating the main direction of each feature point c_i in the reference image and the image to be matched by using the maximum moments within a local area, and then selecting a rectangular area of size 4F·r_i × 4F·r_i centered on the feature point c_i along its main direction to construct the PIIFD feature descriptor; wherein K_i is the contour curvature corresponding to the feature point c_i and F is the magnification factor;
calculating the similarity of the PIIFD feature descriptors between corresponding feature points in the reference image and the image to be matched to perform bidirectional feature matching and obtain an initial matching result;
equally dividing the reference image into H × H grids, setting circular areas S_i of equal radius with the center of each grid as the circle center, and obtaining each feature point p_j^i within the circular area S_i and the correspondingly matched feature point q_j^i in the image to be matched to form a matching point pair set {(p_j^i, q_j^i)}, j = 1, ..., N_i, and then calculating a local transformation matrix based on the matching point pair set; wherein N_i is the total number of feature points within the circular area S_i, and (p_j^i, q_j^i) is any matching point pair;
moving the circle center of each circular area S_i incrementally by the circle-center moving vector Δ_i = (1/Z_i) Σ_j e_j u_j and iteratively updating the local transformation matrices until the Euclidean distance between the circle centers of two adjacent updates is smaller than a preset distance or the number of iterations is larger than a preset number, so as to obtain H × H optimized local transformation matrices; wherein e_j is the matching error of the j-th matching point pair in the matching point pair set, u_j is the vector from the current circle center to the feature point p_j^i, and Z_i is a normalization factor;
and selecting the corresponding optimized local transformation matrix to screen out abnormal matching point pairs in the initial matching result based on a nearest neighbor algorithm so as to obtain a final matching result.
2. The improved PIIFD-based heterogeneous image optimization matching method according to claim 1, wherein respectively extracting the image contours of the reference image and the image to be matched and acquiring the corner points on each image contour as feature points c_i by using a curvature scale space algorithm comprises the following steps:
respectively convolving the reference image and the image to be matched with a Log-Gabor filter, and respectively calculating the phase consistency of each pixel point in the reference image and the image to be matched in each direction based on Kovesi's improved algorithm:
PC(x, y, θ_o) = Σ_n W_o(x, y) ⌊A_no(x, y) ΔΦ_no(x, y) - T⌋ / (Σ_n A_no(x, y) + ε)
wherein o and n are respectively the direction index and the scale index of the Log-Gabor filter, PC(x, y, θ_o) is the phase consistency at image coordinates (x, y) in direction o, θ_o is the angle corresponding to direction o, W_o is the weight coefficient of the frequency spread, A_no is the amplitude of the convolution of the Log-Gabor filter with the image at scale n, ΔΦ_no is the phase difference function, T is the noise threshold, ε is a small constant that prevents division by zero, and ⌊·⌋ returns its argument when positive and zero otherwise;
calculating the maximum moment of phase consistency in each direction at each pixel point in the reference image and the image to be matched, and forming the corresponding maximum moment maps;
after each maximum moment image is subjected to non-maximum value suppression, respectively acquiring the image contours in the reference image and the image to be matched through a contour tracking algorithm;
and extracting corner points on each image contour as feature points by adopting a curvature scale space algorithm.
3. The improved PIIFD-based heterogeneous image optimization matching method according to claim 1, wherein respectively calculating the main direction of each feature point c_i in the reference image and the image to be matched by using the maximum moments within a local area comprises:
selecting the feature points c_{i-1} and c_{i+1} adjacent to the feature point c_i; wherein the coordinates of the feature point c_i are denoted (x_i, y_i), the coordinates of the feature point c_{i-1} are denoted (x_{i-1}, y_{i-1}), and the coordinates of the feature point c_{i+1} are denoted (x_{i+1}, y_{i+1});
determining a local area centered on the feature point c_i with side length l, where l is computed from the coordinates of the adjacent feature points c_{i-1}, c_i and c_{i+1};
calculating the principal direction vector (u_i, v_i) of the feature point c_i by accumulating, over the M_l pixel points of the local area, the vectors v_t weighted by w_t; wherein u_i and v_i are the element values of the principal direction vector, M_l is the total number of pixel points in the local area, v_t is the vector from the feature point c_i to the t-th pixel point in the local area, and w_t is the weight corresponding to v_t, equal to the maximum moment at that pixel point;
confirming the main direction of the feature point c_i as θ_i = arctan(v_i / u_i).
4. The method according to claim 1, wherein the calculating the similarity of PIIFD feature descriptors between corresponding feature points in the reference image and the image to be matched for bidirectional feature matching comprises:
calculating, based on the descriptors d_1 and d_2, the similarity of the PIIFD feature descriptors between each corresponding feature point pair;
wherein d_1 is the PIIFD feature descriptor of the feature point of the pair located in the reference image, and d_2 is the PIIFD feature descriptor of the feature point of the pair located in the image to be matched.
5. The method of claim 1, wherein equally dividing the reference image into H × H grids and setting circular areas S_i of equal radius with the center of each grid as the circle center comprises:
setting each circular area S_i with the grid length as the radius.
6. A heterogeneous image optimization matching system based on improved PIIFD, comprising:
a characteristic point extraction module for respectively extracting the image profiles of the reference image and the image to be matched and acquiring the corner points on each image profile as characteristic points c by using a curvature scale space algorithm i
A size invariance processing module for calculating each of the feature points c i Radius of curvature at the image contour
Figure FDA0003894106510000033
And respectively calculating each characteristic point c in the reference image and the image to be matched by utilizing the maximum moment in the local area i In the main direction of the feature point c, and further in the feature point c i Selecting a rectangular area 4Fr along its main direction for the center i ×4Fr i Constructing PIIFD feature descriptors; wherein, K i Is a characteristic point c i Corresponding contour curvature, F is the magnification factor;
the pre-matching module is used for calculating the similarity of PIIFD feature descriptors between corresponding feature points in the reference image and the image to be matched so as to carry out bidirectional feature matching and obtain an initial matching result;
an initialization module for equally dividing the reference image into H × H grids, setting circular areas S_i of equal radius with the center of each grid as the circle center, obtaining each feature point p_j^i within the circular area S_i and the correspondingly matched feature point q_j^i in the image to be matched to form a matching point pair set {(p_j^i, q_j^i)}, j = 1, ..., N_i, and then calculating a local transformation matrix based on the matching point pair set; wherein N_i is the total number of feature points within the circular area S_i, and (p_j^i, q_j^i) is any matching point pair;
an iterative optimization module for iteratively updating the circle center of each circular region S_i by moving it in increments of the vector [formula image FDA0003894106510000046], and updating the corresponding local transformation matrix, until the Euclidean distance between the circle centers of two adjacent updates is smaller than a preset distance or the number of iterations is larger than a preset number, so as to obtain H × H optimized local transformation matrices; wherein e_j is the matching error of the j-th matching point pair in the matching point pair set, u_j is the vector from the current circle center to the feature point [formula image FDA0003894106510000047], and [formula image FDA0003894106510000048] is a normalization factor;
and a matching module for selecting, based on a nearest neighbor algorithm, the corresponding optimized local transformation matrix to screen out abnormal matching point pairs in the initial matching result, so as to obtain a final matching result.
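A minimal sketch of the initialization, iterative optimization and matching modules of claim 6, under assumptions the text above does not fix: the local transformation is taken to be a 2 × 3 affine matrix fitted by least squares, the circle radius is derived from the grid size, and the centre increment is modelled as the matching-error-weighted mean of the vectors from the current centre to the region's reference points (the exact increment and normalization factor are shown only as image placeholders). Function and parameter names are hypothetical.

```python
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares 2x3 affine transform mapping src points to dst points."""
    a = np.hstack([src, np.ones((len(src), 1))])          # (K, 3) design matrix
    t, *_ = np.linalg.lstsq(a, dst, rcond=None)           # (3, 2) solution
    return t.T                                            # (2, 3) affine matrix

def optimize_local_transforms(ref_pts, mov_pts, img_shape, grid=4, max_iter=20, tol=1.0):
    """One local affine per grid cell, with an iteratively moved circle centre."""
    h, w = img_shape
    radius = max(h, w) / grid                              # assumed: radius taken from the grid size
    centres, transforms = [], []
    for gy in range(grid):
        for gx in range(grid):
            c = np.array([w * (2 * gx + 1) / (2 * grid),   # initial centre = grid-cell centre
                          h * (2 * gy + 1) / (2 * grid)])
            t = np.eye(2, 3)                               # identity affine as a fallback
            for _ in range(max_iter):
                inside = np.linalg.norm(ref_pts - c, axis=1) <= radius
                if inside.sum() < 3:                       # need at least 3 pairs for an affine
                    break
                src, dst = ref_pts[inside], mov_pts[inside]
                t = fit_affine(src, dst)
                e = np.linalg.norm(src @ t[:, :2].T + t[:, 2] - dst, axis=1)  # errors e_j
                u = src - c                                # vectors u_j from centre to points
                step = (e[:, None] * u).sum(axis=0) / (e.sum() + 1e-12)       # assumed increment
                c = c + step
                if np.linalg.norm(step) < tol:             # centres converged
                    break
            centres.append(c)
            transforms.append(t)
    return np.array(centres), transforms

def screen_matches(ref_pts, mov_pts, centres, transforms, max_err=5.0):
    """Keep a match only if its nearest local transform reproduces it within max_err pixels."""
    keep = []
    for p, q in zip(ref_pts, mov_pts):
        k = int(np.argmin(np.linalg.norm(centres - p, axis=1)))   # nearest circle centre
        t = transforms[k]
        keep.append(np.linalg.norm(t[:, :2] @ p + t[:, 2] - q) <= max_err)
    return np.array(keep)
```

The screening step mirrors the matching module: each initial match is checked only against the optimized local transform whose centre is nearest to its reference-image point, so globally inconsistent pairs are discarded without requiring a single global transformation.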
7. The system of claim 6, wherein the system comprises:
a phase consistency calculation module, configured to respectively convolve the reference image and the image to be matched with a Log-Gabor filter, and to calculate the phase consistency of each pixel point in the reference image and the image to be matched in each direction based on Kovesi's improved algorithm:
[formula image FDA0003894106510000049]
wherein o and n are respectively the direction index and the scale index of the Log-Gabor filter, PC(x, y, θ_o) is the phase consistency at the image coordinates (x, y) in direction o, θ_o is the angle corresponding to direction o, W_o is the weighting coefficient for frequency spread, A_no is the amplitude of the convolution of the Log-Gabor filter with the image at scale n, ΔΦ_no is the phase deviation function, T is the noise threshold, and ε is a very small constant;
a maximum moment map obtaining module, configured to calculate, at each pixel point in the reference image and the image to be matched, the maximum moment of the phase consistency over all directions, so as to form the corresponding maximum moment maps;
an image contour extraction module, configured to perform non-maximum suppression on each maximum moment map and then respectively obtain the image contours of the reference image and the image to be matched through a contour tracing algorithm;
and a corner extraction module, configured to extract the corner points on each image contour as feature points by using a curvature scale space algorithm.
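The phase consistency formula of claim 7 is shown only as an image placeholder; it corresponds to Kovesi's per-orientation phase congruency measure. Given per-orientation phase consistency maps, the maximum moment map used by the contour and corner modules can be obtained with Kovesi's moment analysis, as sketched below; the Log-Gabor filtering itself is omitted, and `maximum_moment` with its arguments is illustrative only.

```python
import numpy as np

def maximum_moment(pc_maps: np.ndarray, thetas: np.ndarray) -> np.ndarray:
    """Maximum moment of phase consistency over orientations (Kovesi-style moment analysis).

    pc_maps: (O, H, W) phase consistency maps, one per orientation angle in thetas (O,).
    Returns an (H, W) maximum moment map; large values mark edge-like structure.
    """
    cos_t = np.cos(thetas)[:, None, None]
    sin_t = np.sin(thetas)[:, None, None]
    a = np.sum((pc_maps * cos_t) ** 2, axis=0)
    b = 2.0 * np.sum((pc_maps * cos_t) * (pc_maps * sin_t), axis=0)
    c = np.sum((pc_maps * sin_t) ** 2, axis=0)
    return 0.5 * (c + a + np.sqrt(b ** 2 + (a - c) ** 2))

# Illustrative call with six orientations:
thetas = np.deg2rad(np.arange(0, 180, 30))
m = maximum_moment(np.random.rand(6, 64, 64), thetas)
```

Non-maximum suppression and contour tracing on this map (for example, thresholding followed by OpenCV's findContours) then yield the contours on which the curvature scale space corner detector runs.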
8. The system of claim 6, wherein the system comprises:
a neighboring point selection module for selecting the feature points c_{i-1} and c_{i+1} adjacent to the feature point c_i; wherein the coordinates of the feature point c_i are denoted as (x_i, y_i), the coordinates of the feature point c_{i-1} are denoted as (x_{i-1}, y_{i-1}), and the coordinates of the feature point c_{i+1} are denoted as (x_{i+1}, y_{i+1});
a local region determination module for determining, through the formula [formula image FDA0003894106510000051], a local area centered on the feature point c_i with l as the side length;
a principal direction vector calculation module for calculating the principal direction vector of the feature point c_i through the formula [formula image FDA0003894106510000052]; wherein u_i and v_i are the element values of the principal direction vector, M_l is the total number of pixel points in the local region, v_t is the vector from the feature point c_i to the t-th pixel point in the local region, and w_t is the weight corresponding to v_t, which is equal to the maximum moment at that pixel point;
a main direction calculation module for determining the main direction of the feature point c_i through the formula [formula image FDA0003894106510000053].
9. The system of claim 6, wherein the system comprises:
a similarity calculation module for calculating the similarity of the PIIFD feature descriptors between the corresponding feature point pairs based on the formula [formula image FDA0003894106510000054];
wherein d_1 is the PIIFD feature descriptor of the feature point of the feature point pair located in the reference image, and d_2 is the PIIFD feature descriptor of the feature point of the feature point pair located in the image to be matched.
CN202211268522.0A 2022-10-17 2022-10-17 Different-source image optimization matching method and system based on improved PIIFD Pending CN115601569A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211268522.0A CN115601569A (en) 2022-10-17 2022-10-17 Different-source image optimization matching method and system based on improved PIIFD

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211268522.0A CN115601569A (en) 2022-10-17 2022-10-17 Different-source image optimization matching method and system based on improved PIIFD

Publications (1)

Publication Number Publication Date
CN115601569A true CN115601569A (en) 2023-01-13

Family

ID=84847041

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211268522.0A Pending CN115601569A (en) 2022-10-17 2022-10-17 Different-source image optimization matching method and system based on improved PIIFD

Country Status (1)

Country Link
CN (1) CN115601569A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116129146A (en) * 2023-03-29 2023-05-16 中国工程物理研究院计算机应用研究所 Heterogeneous image matching method and system based on local feature consistency
CN116129146B (en) * 2023-03-29 2023-09-01 中国工程物理研究院计算机应用研究所 Heterogeneous image matching method and system based on local feature consistency

Similar Documents

Publication Publication Date Title
CN108427924B (en) Text regression detection method based on rotation sensitive characteristics
CN109784250B (en) Positioning method and device of automatic guide trolley
CN111369605B (en) Infrared and visible light image registration method and system based on edge features
CN111797744B (en) Multimode remote sensing image matching method based on co-occurrence filtering algorithm
Hong et al. A robust technique for precise registration of radar and optical satellite images
CN102169581A (en) Feature vector-based fast and high-precision robustness matching method
CN110969669B (en) Visible light and infrared camera combined calibration method based on mutual information registration
CN107240130B (en) Remote sensing image registration method, device and system
CN111091101B (en) High-precision pedestrian detection method, system and device based on one-step method
CN105427333A (en) Real-time registration method of video sequence image, system and shooting terminal
CN115471682A (en) Image matching method based on SIFT fusion ResNet50
CN115601569A (en) Different-source image optimization matching method and system based on improved PIIFD
CN114897705A (en) Unmanned aerial vehicle remote sensing image splicing method based on feature optimization
CN111950370A (en) Dynamic environment offline visual milemeter expansion method
CN113705564B (en) Pointer type instrument identification reading method
CN113673515A (en) Computer vision target detection algorithm
CN115880683B (en) Urban waterlogging ponding intelligent water level detection method based on deep learning
CN116206139A (en) Unmanned aerial vehicle image upscaling matching method based on local self-convolution
CN114998630B (en) Ground-to-air image registration method from coarse to fine
CN114004770B (en) Method and device for accurately correcting satellite space-time diagram and storage medium
CN109886988A (en) A kind of measure, system, device and the medium of Microwave Imager position error
CN110222688B (en) Instrument positioning method based on multi-level correlation filtering
Xia et al. A coarse-to-fine ghost removal scheme for HDR imaging
CN114897990A (en) Camera distortion calibration method and system based on neural network and storage medium
CN116416289B (en) Multimode image registration method, system and medium based on depth curve learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination