CN112633304B - Robust fuzzy image matching method


Info

Publication number
CN112633304B
CN112633304B (application CN201910898199.7A)
Authority
CN
China
Prior art keywords
descriptor
matching
nearest neighbor
tpd
point
Prior art date
Legal status
Active
Application number
CN201910898199.7A
Other languages
Chinese (zh)
Other versions
CN112633304A (en)
Inventor
陈月玲
夏仁波
赵吉宾
刘明洋
于彦凤
赵亮
付生鹏
Current Assignee
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN201910898199.7A priority Critical patent/CN112633304B/en
Publication of CN112633304A publication Critical patent/CN112633304A/en
Application granted granted Critical
Publication of CN112633304B publication Critical patent/CN112633304B/en


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757 Matching configurations of points or features
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Optimization (AREA)
  • Software Systems (AREA)
  • Pure & Applied Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computational Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Algebra (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a robust fuzzy image matching method. The method comprises the following steps: first, two images with different degrees of blur are input. Second, a group of scale-invariant feature transform (SIFT) points is extracted, and three scale-invariant concentric circular regions are used to generate descriptors, further improving the distinctiveness of the SIFT descriptors. Third, to reduce the high-dimensional complexity of the SIFT descriptors, the locality preserving projection (LPP) technique is employed to reduce their dimensionality. Finally, matching feature points are obtained using a Euclidean-distance similarity measure. The method reduces the data volume while improving both matching speed and matching accuracy, and is applicable to other image matching methods.

Description

Robust fuzzy image matching method
Technical Field
The invention relates to the technical field of computer vision, in particular to a robust fuzzy image matching method.
Background
Image matching is a specialized field of image processing. By extracting consistent feature points among different images of the same scene, image matching determines the geometric relations among the images and produces a matched image that describes the scene more accurately than any single image can. In general, image matching can be performed with methods based on local feature extraction and matching. These methods mainly consider the scale and rotation invariance of the input images, but not the large amount of computation involved or real-time requirements, and for blurred scenes they cannot obtain corresponding matching point pairs effectively and accurately.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a robust fuzzy image matching method that uses three scale-invariant concentric circular regions together with the LPP technique to reduce descriptor dimensionality, which enhances the distinctiveness of the feature points while greatly improving computational efficiency, and greatly improves the correct matching rate and robustness.
The technical scheme adopted by the invention to achieve this purpose is as follows: a robust blurred image matching method comprising the steps of:
s1: inputting two original images with different blurring degrees;
s2: extracting feature points on two original images by using a Scale Invariant Feature Transform (SIFT) algorithm;
s3: establishing three scale-invariant concentric circular regions around the feature points of each of the two original images, and describing the feature points to form the respective feature point descriptors of the two images;
s4: applying the locality preserving projection (LPP) method to reduce the dimensionality of the feature point descriptors, improving computational efficiency;
s5: matching the dimension-reduced feature point descriptors of the two original images, and selecting accurate matching point pairs between the two images.
Describing the feature points in step S3 means specifying the orientation information of the descriptors.
Specifying the orientation information of the descriptors includes:
describing each feature point by 16 seed points arranged in a 4×4 grid, dividing the gradient histogram of each seed point's region into 8 orientation bins covering 0° to 360°, and weighting the gradient histogram with a Gaussian window to generate a 128-dimensional feature vector;
the feature point descriptor LSIFT described by the three scale-invariant concentric regions is defined as:
PD = \alpha_1 L_1 + \alpha_2 L_2 + \alpha_3 L_3
where L_i (i = 1, 2, 3) are 128-dimensional SIFT descriptors, PD is the weighted 128-dimensional descriptor, and \alpha_1, \alpha_2, \alpha_3 are preset weighting coefficients.
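For illustration, a minimal Python sketch of this weighted combination, approximating the three concentric regions by computing OpenCV SIFT descriptors at three scaled support sizes; the scale multipliers and the weights \alpha_1, \alpha_2, \alpha_3 below are illustrative assumptions, since the patent treats them as preset parameters:

```python
import cv2
import numpy as np

def three_region_descriptor(gray, kp, weights=(0.5, 0.3, 0.2), scales=(1.0, 1.5, 2.0)):
    """PD = a1*L1 + a2*L2 + a3*L3 over three concentric, scale-proportional regions.

    weights and scales are illustrative values, not fixed by the patent.
    """
    sift = cv2.SIFT_create()
    pd = np.zeros(128, dtype=np.float32)
    for a, s in zip(weights, scales):
        # Same centre and orientation, support region scaled by s (concentric circle).
        kp_s = cv2.KeyPoint(kp.pt[0], kp.pt[1], kp.size * s, kp.angle)
        _, desc = sift.compute(gray, [kp_s])
        pd += a * desc[0]  # L_i: the 128-D SIFT descriptor of region i
    return pd
```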
Applying the locality preserving projection (LPP) method in step S4 to reduce the feature point descriptor dimensionality includes:
a. denoting the feature point descriptors LSIFT described by the three scale-invariant concentric regions as X = (x_1, x_2, …, x_m), where x_i is the LSIFT descriptor of one feature point of the image; y_i = w^T x_i represents the one-dimensional description under the transformation vector w; a symmetric similarity matrix S (S_{ij} = S_{ji}) is defined;
b. selecting an appropriate projection amounts to minimizing the objective function f:
f = \sum_{ij} (y_i - y_j)^2 S_{ij}
where D is a diagonal matrix with D_{ii} = \sum_j S_{ij}, and L = D - S is the Laplacian matrix. The minimization is subject to the constraint:
Y^T D Y = w^T X D X^T w = 1
c. the minimization of the objective function f thus reduces to:
\min_w \; w^T X L X^T w \quad \text{subject to} \quad w^T X D X^T w = 1
d. this is converted into the generalized eigenvalue problem:
X L X^T w = \lambda X D X^T w
where X L X^T and X D X^T are both symmetric, positive semi-definite matrices;
e. let w be a generalized eigenvector of eigenvalue \lambda, and let the eigenvectors of the l smallest eigenvalues form the projection matrix W_{LPP} = (w_0, w_1, …, w_{l-1}). Each vector w_i (i = 0, 1, …, l-1) is 128-dimensional, matching the dimensionality of PD, so the projection matrix reduces the 128-dimensional descriptor vector to l dimensions; the 128-dimensional descriptor is therefore transformed into:
TPD = PD \cdot W_{LPP}
where TPD is an l-dimensional local descriptor with l < 128.
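A minimal sketch of this LPP training step, following the standard formulation; the heat-kernel similarity S_{ij} = exp(-||x_i - x_j||^2 / t) and the bandwidth t are assumptions, since the patent does not specify how the similarity matrix is built:

```python
import numpy as np
from scipy.linalg import eigh

def train_lpp(X, l, t=1.0):
    """Learn W_LPP from n x 128 descriptors X (rows are samples), reducing to l dims.

    Solves X L X^T w = lambda X D X^T w and keeps the eigenvectors of the
    l smallest eigenvalues; with row samples, X.T @ L @ X plays the role
    of X L X^T in the column-sample notation above.
    """
    sq = (X ** 2).sum(axis=1)
    dist2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    S = np.exp(-dist2 / t)                            # assumed heat-kernel similarity
    D = np.diag(S.sum(axis=1))                        # D_ii = sum_j S_ij
    L = D - S                                         # graph Laplacian
    A = X.T @ L @ X                                   # 128 x 128
    B = X.T @ D @ X                                   # 128 x 128
    # Symmetric generalized eigenproblem; eigh returns ascending eigenvalues.
    _, vecs = eigh(A, B + 1e-8 * np.eye(B.shape[0]))  # small ridge for stability
    return vecs[:, :l]                                # W_LPP: 128 x l

# Dimension reduction, TPD = PD . W_LPP:  tpd = pd @ train_lpp(all_pd, l=32)
```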
The descriptor matching in step S5 includes:
computing the Euclidean distance between two descriptors TPD_i and TPD_j, and obtaining accurate matching point pairs with the nearest neighbor / second nearest neighbor criterion:
D_{nn} / D_{2nn} < T
where D_{nn} and D_{2nn} denote the nearest-neighbor and second-nearest-neighbor distances, respectively, taking the current feature point as the origin, and T is the matching threshold; a pair satisfying the inequality is taken as a matching point pair.
D_{nn} and D_{2nn} in step S5 are calculated according to the following formula:
D(TPD_i, TPD_j) = \sqrt{\sum_{m=1}^{l} (TPD_{i,m} - TPD_{j,m})^2}
where TPD_i is the reduced descriptor of an arbitrary feature point i, TPD_j is the reduced descriptor of feature point j, TPD_{i,m} and TPD_{j,m} are the m-th components of the two descriptors, and l is the dimensionality after reduction.
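A short sketch of this nearest neighbor / second nearest neighbor test over the reduced descriptors; the threshold T = 0.8 is an assumption (Lowe's common choice), as the patent does not fix T:

```python
import numpy as np

def match_descriptors(tpd_a, tpd_b, T=0.8):
    """Return (i, j) index pairs whose ratio D_nn / D_2nn is below T."""
    matches = []
    for i, d in enumerate(tpd_a):
        dist = np.linalg.norm(tpd_b - d, axis=1)  # Euclidean distances to all of B
        nn, snn = np.argsort(dist)[:2]            # nearest and second-nearest indices
        if dist[nn] / dist[snn] < T:              # ratio test
            matches.append((i, nn))
    return matches
```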
The invention has the following beneficial effects and advantages:
1. the robust fuzzy image matching method describes feature points by means of three scale-invariant concentric circular regions, enhancing the distinctiveness of the feature descriptors and improving the correct matching rate.
2. The robust fuzzy image matching method reduces feature point descriptor dimensionality with the locality preserving projection technique, improving matching efficiency while maintaining the correct matching rate, with stronger real-time performance.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a blurred image of a structural scene of the method of the present invention;
FIG. 3 is a graph showing the matching performance of the method of the present invention on the structural-scene blurred images under different degrees of blur;
FIG. 4 is a blurred image of a texture scene of the method of the present invention;
FIG. 5 is a graph showing the matching performance of the method of the present invention on the texture-scene blurred images under different degrees of blur.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, the specific steps of a robust blurred image matching method of the present invention are as follows:
step 1: inputting two original images with different blurring degrees;
step 2: extracting feature points on two original images by using a Scale Invariant Feature Transform (SIFT) algorithm;
step 3: establishing three scale-invariant concentric circular regions around the feature points of each of the two original images, and describing the feature points to form the respective feature point descriptors of the two images;
describing each feature point by 16 seed points arranged in a 4×4 grid, dividing the gradient histogram of each seed point's region into 8 orientation bins covering 0° to 360°, and weighting the gradient histogram with a Gaussian window to generate a 128-dimensional feature vector;
the feature point descriptor LSIFT described by the three scale-invariant concentric regions is defined as:
PD = \alpha_1 L_1 + \alpha_2 L_2 + \alpha_3 L_3
where L_i (i = 1, 2, 3) are 128-dimensional SIFT descriptors, PD is the weighted 128-dimensional descriptor, and \alpha_1, \alpha_2, \alpha_3 are preset weighting coefficients.
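As a minimal sketch of the seed-point histogram construction described above, the following assumes an illustrative patch extraction and Gaussian sigma, neither of which is fixed by the patent:

```python
import numpy as np

def seed_histogram(patch, sigma=2.0):
    """8-bin gradient-orientation histogram of one seed-point region,
    weighted by a Gaussian window (patch size and sigma are illustrative)."""
    gy, gx = np.gradient(patch.astype(np.float64))
    mag = np.hypot(gx, gy)                        # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0  # orientation in [0, 360)
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * sigma ** 2))
    hist = np.zeros(8)
    bins = (ang // 45).astype(int) % 8            # eight 45-degree bins
    np.add.at(hist, bins, mag * g)                # Gaussian-weighted voting
    return hist

# 4 x 4 grid of seed points x 8 bins each = 128-dimensional feature vector
```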
Step 4: in order to improve the algorithm operation efficiency, the dimension of the feature point descriptor is reduced by applying a local projection mapping (LPP) method;
a. denoting the feature point descriptors LSIFT described by the three scale-invariant concentric regions as X = (x_1, x_2, …, x_m), where x_i is the LSIFT descriptor of one feature point of the image; y_i = w^T x_i represents the one-dimensional description under the transformation vector w; a symmetric similarity matrix S (S_{ij} = S_{ji}) is defined;
b. selecting an appropriate projection amounts to minimizing the objective function f:
f = \sum_{ij} (y_i - y_j)^2 S_{ij}
where D is a diagonal matrix with D_{ii} = \sum_j S_{ij}, and L = D - S is the Laplacian matrix. The minimization is subject to the constraint:
Y^T D Y = w^T X D X^T w = 1
c. the minimization of the objective function f thus reduces to:
\min_w \; w^T X L X^T w \quad \text{subject to} \quad w^T X D X^T w = 1
d. this is converted into the generalized eigenvalue problem:
X L X^T w = \lambda X D X^T w
where X L X^T and X D X^T are both symmetric, positive semi-definite matrices;
e. let w be a generalized eigenvector of eigenvalue \lambda, and let the eigenvectors of the l smallest eigenvalues form the projection matrix W_{LPP} = (w_0, w_1, …, w_{l-1}). Each vector w_i (i = 0, 1, …, l-1) is 128-dimensional, matching the dimensionality of PD, so the projection matrix reduces the 128-dimensional descriptor vector to l dimensions; the 128-dimensional descriptor is therefore transformed into:
TPD = PD \cdot W_{LPP}
where TPD is an l-dimensional local descriptor with l < 128.
Step 5: matching the dimension-reduced feature point descriptors of the two original images, and selecting accurate matching point pairs between the two images.
The Euclidean distance between two descriptors TPD_i and TPD_j is computed, and accurate matching point pairs are obtained with the nearest neighbor / second nearest neighbor criterion:
D_{nn} / D_{2nn} < T
where D_{nn} and D_{2nn} denote the nearest-neighbor and second-nearest-neighbor distances, respectively, taking the current feature point as the origin, and T is the matching threshold, with:
D(TPD_i, TPD_j) = \sqrt{\sum_{m=1}^{l} (TPD_{i,m} - TPD_{j,m})^2}
where TPD_i is the reduced descriptor of an arbitrary feature point i, TPD_j (j ≠ i) is the reduced descriptor of feature point j, TPD_{i,m} and TPD_{j,m} are their m-th components, and l is the dimensionality after reduction.
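Putting steps 1-5 together, a hedged end-to-end sketch reusing the helper functions from the earlier sketches (three_region_descriptor, train_lpp, match_descriptors); the reduced dimension l = 32 is an illustrative assumption:

```python
import cv2
import numpy as np

def match_blurred_pair(path_a, path_b, l=32):
    """SIFT points -> three-region PD -> LPP reduction -> ratio-test matching."""
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kps_a = sift.detect(img_a, None)              # step 2: feature points
    kps_b = sift.detect(img_b, None)
    pd_a = np.array([three_region_descriptor(img_a, k) for k in kps_a])  # step 3
    pd_b = np.array([three_region_descriptor(img_b, k) for k in kps_b])
    W = train_lpp(np.vstack([pd_a, pd_b]), l)     # step 4: learn W_LPP
    tpd_a, tpd_b = pd_a @ W, pd_b @ W             # TPD = PD . W_LPP
    return match_descriptors(tpd_a, tpd_b)        # step 5: ratio test
```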
The effects of the present invention are further described below with reference to the accompanying drawings.
In order to verify the validity and correctness of the invention, matching simulation experiments were carried out on two groups of blurred images, one of a structural scene and one of a texture scene. All simulation experiments were implemented under the Windows XP operating system using Visual Studio 2010.
Simulation example 1:
fig. 2 shows six blurred images of a structural scene obtained under different degrees of blur; the image size is 800×600, image (a) is the reference image, and images (b)-(f) are the images to be matched. Fig. 3(a) shows the number of correct matches for the structural images, with the degree of blur on the horizontal axis and the number of correct matching points on the vertical axis; fig. 3(b) shows the correct matching rate, with the degree of blur on the horizontal axis and the correct matching rate on the vertical axis. As can be seen from fig. 3(a) and fig. 3(b), the number of correct matching points obtained by the method of the present invention under all blur conditions is significantly higher than that of the SIFT method.
Simulation example 2:
fig. 4 shows six blurred images of a texture scene obtained under different degrees of blur; the image size is 800×600, image (a) is the reference image, and images (b)-(f) are the images to be matched. Fig. 5(a) shows the number of correct matches for the texture images, with the degree of blur on the horizontal axis and the number of correct matching points on the vertical axis; fig. 5(b) shows the correct matching rate, with the degree of blur on the horizontal axis and the correct matching rate on the vertical axis. As can be seen from fig. 5(a) and fig. 5(b), the number of correct matching points obtained by the method of the present invention under all blur conditions is significantly higher than that of the SIFT method.
The invention can accurately match images under blur variation, obtaining both more matching point pairs and a higher correct matching rate.

Claims (5)

1. A robust blurred image matching method, comprising the steps of:
s1: inputting two original images with different blurring degrees;
s2: extracting feature points on two original images by using a Scale Invariant Feature Transform (SIFT) algorithm;
s3: establishing three scale-invariant concentric circular regions around the feature points of each of the two original images, and describing the feature points to form the respective feature point descriptors of the two images;
s4: applying the locality preserving projection (LPP) method to reduce the dimensionality of the feature point descriptors, improving computational efficiency; comprising the following steps:
a. denoting the feature point descriptors LSIFT described by the three scale-invariant concentric regions as X = (x_1, x_2, …, x_m), where x_i is the LSIFT descriptor of one feature point of the image; y_i = w^T x_i represents the one-dimensional description under the transformation vector w; a symmetric similarity matrix S (S_{ij} = S_{ji}) is defined;
b. selecting an appropriate projection amounts to minimizing the objective function f:
f = \sum_{ij} (y_i - y_j)^2 S_{ij}
where D is a diagonal matrix with D_{ii} = \sum_j S_{ij}, and L = D - S is the Laplacian matrix; the minimization is subject to the constraint:
Y^T D Y = w^T X D X^T w = 1
c. the minimization of the objective function f is simplified as:
\min_w \; w^T X L X^T w \quad \text{subject to} \quad w^T X D X^T w = 1
d. conversion to the generalized eigenvalue problem:
X L X^T w = \lambda X D X^T w
where X L X^T and X D X^T are both symmetric, positive semi-definite matrices;
e. let w be a generalized eigenvector of eigenvalue \lambda, and let the eigenvectors of the l smallest eigenvalues form the projection matrix W_{LPP} = (w_0, w_1, …, w_{l-1}). Each vector w_i (i = 0, 1, …, l-1) is 128-dimensional, matching the dimensionality of PD, so the projection matrix reduces the 128-dimensional descriptor vector to l dimensions; the 128-dimensional descriptor is therefore transformed into:
TPD = PD \cdot W_{LPP}
where TPD is an l-dimensional local descriptor with l < 128;
s5: matching the dimension-reduced feature point descriptors of the two original images, and selecting accurate matching point pairs between the two images.
2. The robust blurred image matching method according to claim 1, wherein describing the feature points in step S3 comprises specifying the orientation information of the descriptors.
3. The robust blurred image matching method according to claim 2, wherein said specifying the orientation information of the descriptors includes:
describing each feature point by 16 seed points arranged in a 4×4 grid, dividing the gradient histogram of each seed point's region into 8 orientation bins covering 0° to 360°, and weighting the gradient histogram with a Gaussian window to generate a 128-dimensional feature vector;
the feature point descriptor LSIFT described by the three scale-invariant concentric regions is defined as:
PD = \alpha_1 L_1 + \alpha_2 L_2 + \alpha_3 L_3
where L_i (i = 1, 2, 3) are 128-dimensional SIFT descriptors, PD is the weighted 128-dimensional descriptor, and \alpha_1, \alpha_2, \alpha_3 are preset weighting coefficients.
4. The robust blurred image matching method according to claim 1, wherein said descriptor matching in step S5 comprises:
computing the Euclidean distance between two descriptors TPD_i and TPD_j, and obtaining accurate matching point pairs with the nearest neighbor / second nearest neighbor criterion:
D_{nn} / D_{2nn} < T
where D_{nn} and D_{2nn} denote the nearest-neighbor and second-nearest-neighbor distances, respectively, taking the current feature point as the origin, and T is the matching threshold; a pair satisfying the inequality is taken as a matching point pair.
5. The method according to claim 4, wherein D_{nn} and D_{2nn} in step S5 are calculated according to the following formula:
D(TPD_i, TPD_j) = \sqrt{\sum_{m=1}^{l} (TPD_{i,m} - TPD_{j,m})^2}
where TPD_i is the reduced descriptor of an arbitrary feature point i, TPD_j is the reduced descriptor of feature point j, TPD_{i,m} and TPD_{j,m} are the m-th components of the two descriptors, and l is the dimensionality after reduction.
CN201910898199.7A 2019-09-23 2019-09-23 Robust fuzzy image matching method Active CN112633304B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910898199.7A CN112633304B (en) 2019-09-23 2019-09-23 Robust fuzzy image matching method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910898199.7A CN112633304B (en) 2019-09-23 2019-09-23 Robust fuzzy image matching method

Publications (2)

Publication Number Publication Date
CN112633304A CN112633304A (en) 2021-04-09
CN112633304B 2023-07-25

Family

ID=75282554

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910898199.7A Active CN112633304B (en) 2019-09-23 2019-09-23 Robust fuzzy image matching method

Country Status (1)

Country Link
CN (1) CN112633304B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103400384A (en) * 2013-07-22 2013-11-20 西安电子科技大学 Large viewing angle image matching method capable of combining region matching and point matching
WO2015035462A1 (en) * 2013-09-12 2015-03-19 Reservoir Rock Technologies Pvt Ltd Point feature based 2d-3d registration
CN105654421A (en) * 2015-12-21 2016-06-08 西安电子科技大学 Projection transform image matching method based on transform invariant low-rank texture
WO2019042232A1 (en) * 2017-08-31 2019-03-07 西南交通大学 Fast and robust multimodal remote sensing image matching method and system
CN110097093A (en) * 2019-04-15 2019-08-06 河海大学 A kind of heterologous accurate matching of image method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
An image matching algorithm combining SIFT and corresponding-scale LTP composite features; Chen Lifang, Liu Yiming, Liu Yuan; Computer Engineering and Science (No. 03); full text *
A fast matching algorithm based on local binary patterns and graph transformation; Zhao Xiaoqiang, Yue Zongda; Acta Electronica Sinica (No. 09); full text *

Also Published As

Publication number Publication date
CN112633304A (en) 2021-04-09

Similar Documents

Publication Publication Date Title
US8798377B2 (en) Efficient scale-space extraction and description of interest points
CN108197644A (en) A kind of image-recognizing method and device
CN107832704B (en) Fingerprint identification method using non-rigid registration based on image field
CN113160291B (en) Change detection method based on image registration
CN104537381B (en) A kind of fuzzy image recognition method based on fuzzy invariant features
CN108830283B (en) Image feature point matching method
CN110852327A (en) Image processing method, image processing device, electronic equipment and storage medium
CN110942473A (en) Moving target tracking detection method based on characteristic point gridding matching
Lee et al. Learning rotation-equivariant features for visual correspondence
CN113313002A (en) Multi-mode remote sensing image feature extraction method based on neural network
CN114724133B (en) Text detection and model training method, device, equipment and storage medium
CN113554036A (en) Characteristic point extraction and matching method for improving ORB algorithm
CN113592030B (en) Image retrieval method and system based on complex value singular spectrum analysis
CN114358166A (en) Multi-target positioning method based on self-adaptive k-means clustering
CN111582142B (en) Image matching method and device
CN111612063A (en) Image matching method, device and equipment and computer readable storage medium
CN112633304B (en) Robust fuzzy image matching method
CN116681740A (en) Image registration method based on multi-scale Harris corner detection
CN111401485A (en) Practical texture classification method
CN110969128A (en) Method for detecting infrared ship under sea surface background based on multi-feature fusion
US11989927B2 (en) Apparatus and method for detecting keypoint based on deep learning using information change across receptive fields
CN114004770B (en) Method and device for accurately correcting satellite space-time diagram and storage medium
CN113095185B (en) Facial expression recognition method, device, equipment and storage medium
CN110197184A (en) A kind of rapid image SIFT extracting method based on Fourier transformation
Wu et al. An accurate feature point matching algorithm for automatic remote sensing image registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant