CN112837353A - Heterogeneous image matching method based on multi-order characteristic point-line matching - Google Patents

Heterogeneous image matching method based on multi-order characteristic point-line matching

Info

Publication number
CN112837353A
CN112837353A
Authority
CN
China
Prior art keywords
matching
point
feature
virtual line
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011591353.5A
Other languages
Chinese (zh)
Inventor
赵薇薇
丁一帆
王艳
周颖
陈雪华
王红钢
孙赜
薛晓伟
杨宇科
董文军
刘心洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Remote Sensing Information
Original Assignee
Beijing Institute of Remote Sensing Information
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Remote Sensing Information
Priority to CN202011591353.5A
Publication of CN112837353A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tesselation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/35 Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/50 Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a heterogeneous image matching method based on multi-order feature point-line matching, comprising the following specific steps: preliminary feature screening and matching, which addresses differences in image resolution, rotation and deformation caused by the imaging sensor and imaging view angle, and comprises multi-scale feature detection and fast feature matching; feature screening optimization, comprising VLD feature computation and KVLD feature matching and screening; fine feature screening, comprising RANSAC-based fine screening; and uniform feature distribution and densification, comprising Delaunay-triangulation-based uniform feature distribution and Grid-based feature densification. The method uses the SIFT feature detection algorithm with multi-scale analysis capability to detect image feature points at different scales, addresses the resolution, rotation and deformation differences caused by the imaging sensor and imaging view angle, offers high precision, strong robustness and high efficiency, and improves feature matching precision and quality through the KVLD technique.

Description

Heterogeneous image matching method based on multi-order characteristic point-line matching
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a heterogeneous image matching method based on multi-order characteristic point-line matching.
Background
The large-area digital orthoimage intuitively and vividly reflects terrain and landform conditions and plays an increasingly important role in the national economy and national defense construction. Against the background of the digital Earth era, the resolution of remote sensing data keeps rising, acquisition modes diversify, and acquisition cycles shorten; the explosive growth of massive remote sensing data poses a huge challenge to traditional digital orthoimage production, and the automatic generation and rapid updating of large-area high-resolution orthoimages have become hot topics in photogrammetry and remote sensing.
Measurement and interpretation are important application directions of remote sensing data, and image matching is a precondition and an important part of image understanding. A single remote sensing image cannot provide complete information about a survey area; multiple images must be stitched to obtain the overall result of the area so that it can be interpreted and identified from a global perspective. Meanwhile, association matching of heterogeneous multi-resolution multi-view images improves the utilization of historical imagery. However, the image association matching process suffers, to a greater or lesser extent, from mismatching problems caused by heterogeneous sources, differing ground objects, differing resolutions, differing view angles and the like, which complicate subsequent image correction and processing.
Common association matching of remote sensing image data usually adopts one of two methods: texture analysis or feature matching. Texture analysis constructs digital features describing texture and uses methods such as the autocorrelation function, the gray-level co-occurrence matrix, gray-level run length and gray-level distribution to compute statistics such as energy, entropy, inverse difference moment and correlation, from which the association matching relationship is found. However, texture features are usually expensive to compute, and the approach lacks multi-scale and localization analysis capability and has a simple processing flow, so it is mostly used for association matching analysis of single-texture images. Feature matching arose with the development of image processing and machine vision technology and is currently the registration method with the widest application field and the highest usage frequency. It analyzes an image through selected features, greatly improving computational efficiency compared with region-based methods, while remaining robust to illumination change, scale change, rotation and even shooting view-angle change, and adapting well to different scenes. The method generally comprises three steps: feature extraction, description and matching. Common features include point features, line features, surface features and virtual features. The point features of an image are points with large gradient change in every direction, such as corner points, inflection points and intersections. The line features mainly include the contour lines of roads and buildings, the edge lines of rivers, and the like; line features reflect the texture information of images and are better suited to registration between distorted images. The surface features of an image are the texture features of continuous large areas of the same image, such as grassland, forest, lakes and buildings; they are mostly used in multispectral image registration, where such areas are easily distinguished and identified by their different spectral components. The virtual features of an image are new structural features, such as triangles and circles, formed by extending the basic features (point features and line features).
When the image data association matching method faces scenes in which heterogeneous multi-resolution multi-view image data are associated, common feature matching algorithms can hardly meet the matching requirement under large side-sway angles (greater than 45°).
Disclosure of Invention
Aiming at the problem that, in the association application scene of heterogeneous multi-resolution multi-view image data, image data association matching methods can hardly complete matching under large side-sway angles (greater than 45°), the invention discloses a heterogeneous image matching method based on multi-order feature point-line matching.
The invention discloses a heterogeneous image matching method based on multi-order characteristic point-line matching, which comprises the following specific steps:
S1, preliminary feature screening and matching of the heterogeneous images, specifically comprising the following steps:
S11, channel check. Judge whether the two heterogeneous images to be matched are single-channel images; if so, go to step S12; otherwise, weight each non-single-channel image into a single-channel image;
S12, resolution check. Judge whether the resolutions of the two heterogeneous images are the same; if so, go to step S13; otherwise, down-sample the higher-resolution image so that its resolution equals that of the lower-resolution image;
S13, multi-scale feature detection. Perform preliminary feature screening on the two heterogeneous images with the Scale Invariant Feature Transform (SIFT) feature detection algorithm, which has multi-scale analysis capability, to obtain the initial feature points of each image and their feature description vectors, completing the preliminary feature screening of the heterogeneous images;
S14, fast feature matching. Match the feature description vectors of the two heterogeneous images quickly with the K-nearest-neighbor or best-bin-first (BBF) algorithm, and obtain the matched feature points between the two heterogeneous images from the feature vector matching result, realizing fast feature matching of the heterogeneous images.
S2, feature screening optimization, which specifically comprises the following steps:
S21, establishing a virtual line segment and performing threshold screening according to the geometric consistency measure.
Select one of the two heterogeneous images processed in step S1 as the master image and the other as the slave image. Select two feature points P_i and P_j from the master image and connect them to form a line segment l(P_i, P_j), which is denoted as a virtual line segment.
Find the feature points P'_{i'} and P'_{j'} in the slave image that match P_i and P_j, forming the matching point pairs m_{i,i'} = (P_i, P'_{i'}) and m_{j,j'} = (P_j, P'_{j'}) between the two heterogeneous images. According to epipolar geometry, the theoretical position Q'_j of P'_{j'} on the slave image is calculated from P_i, P_j and P'_{i'} as:
Q'_j = P'_{i'} + (s(P'_{i'}) / s(P_i)) · R(α(P'_{i'}) − α(P_i)) · vec(P_i P_j),
where s(P) denotes the scale of the feature point P, α(P) the main direction of the feature point P, R(α) a clockwise rotation by the angle α, and vec(P_i P_j) the vector from P_i to P_j; s(P), α(P) and R(α) are computed with the scale, main-direction and rotation-angle formulas of the SIFT feature detection algorithm. The theoretical position Q_j on the master image is calculated symmetrically as:
Q_j = P_i + (s(P_i) / s(P'_{i'})) · R(α(P_i) − α(P'_{i'})) · vec(P'_{i'} P'_{j'}).
Then, using the Euclidean distance or the Manhattan distance as the criterion, calculate the 3 errors among P_i, P_j and Q_j in the master image:
d_{i,j} = dist(P_i, P_j),
t_{i,j} = dist(P_i, Q_j),
e_{i,j} = dist(P_j, Q_j),
where the function dist(x1, x2) denotes the Euclidean or Manhattan distance between two points x1 and x2. Calculate the corresponding 3 errors among P'_{i'}, P'_{j'} and Q'_j in the slave image:
d'_{i',j'} = dist(P'_{i'}, P'_{j'}),
t'_{i',j'} = dist(P'_{i'}, Q'_j),
e'_{i',j'} = dist(P'_{j'}, Q'_j).
Then calculate the geometric consistency measure χ(m_{i,i'}, m_{j,j'}) between the matching point pairs m_{i,i'} = (P_i, P'_{i'}) and m_{j,j'} = (P_j, P'_{j'}):
χ(m_{i,i'}, m_{j,j'}) = min(η_{i,i',j,j'}, η_{j,j',i,i'}),
where η_{i,i',j,j'} is computed from the distances d_{i,j}, t_{i,j}, e_{i,j}, d'_{i',j'}, t'_{i',j'} and e'_{i',j'} (the formula is given as an image in the original document). When χ(m_{i,i'}, m_{j,j'}) < χ_max, the matching point pairs m_{i,i'} and m_{j,j'} are considered geometrically consistent, where χ_max is the geometric consistency measure threshold.
In the master image, a virtual line segment is constructed for every two feature points, giving (N−1)×(N−1) virtual line segments in total, where N is the number of feature points in the master image. The constructed virtual line segments are then threshold-screened with the geometric consistency measure, keeping only the virtual line segments that satisfy it; the corresponding feature points in the slave image are found from the feature points of the retained virtual line segments, and the virtual line segments of the slave image are constructed from those feature points.
S22, an inner point circle covering the virtual line segment is constructed.
Through the calculation of step S21, the virtual line segments in the master and slave images that satisfy the geometric consistency measure are obtained; inner point circles covering each virtual line segment are then constructed. For a virtual line segment l(P_i, P_j) in the master image, let d be the length of l(P_i, P_j). Arrange U inner point circles uniformly along l(P_i, P_j), each denoted D_u, where u is the index of the inner point circle and ranges from 1 to U. The center of each inner point circle lies on the segment formed by the feature points P_i and P_j; the radius of the inner point circles and the center coordinate of D_u are given by formulas shown as images in the original document.
In this way, the inner point circles covering all qualifying virtual line segments in the master and slave images are constructed.
S23, calculating the gradient histogram statistics of each inner point circle in the virtual line segment.
Calculate the gradient of each pixel in an inner point circle with the pixel-gradient formula of the SIFT feature detection algorithm, then accumulate the gradient values of all points in the circle to obtain the gradient histogram of the inner point circle. Specifically, the histogram uses V statistical bins, and the statistic of the γ-th bin of the u-th inner point circle is denoted (h_{u,γ}), γ ∈ {1, ..., V}, where u indexes the inner point circle and γ the bin. Following the same steps, histogram statistics of the gradient values inside the inner point circles of the slave image give the corresponding statistics (h'_{u,γ}), γ ∈ {1, ..., V}.
Finally, the gradient histograms of all inner point circles in the master and slave images are each normalized to satisfy the normalization constraint (given as an image in the original document).
S24, performing histogram statistics on the directions of all pixels in the inner point circle to obtain the main direction of the inner point circle.
For a single inner point circle D_u, histogram statistics are performed on the directions of all pixels inside it. The pixel direction is computed with the corresponding formula of the SIFT feature detection algorithm. The direction histogram uses W statistical bins, and the w-th bin of the direction histogram of the u-th inner point circle is denoted (O_{u,w}), w ∈ {0, ..., W−1}.
For the direction histogram of the inner point circle D_u, the direction with the largest statistic is
w_u = argmax_{w ∈ {0,...,W−1}} O_{u,w},
and w_u is taken as the main direction of the inner point circle. Following the same steps, the main direction w'_u of each inner point circle of the slave image is calculated.
S25, VLD feature description of the virtual line segment. Through the calculations of steps S23 and S24, the inner-point-circle gradient histogram statistics and main directions of a virtual line segment are obtained and used as its Virtual Line Descriptor (VLD).
S26, customizing the virtual line segment contrast and screening the virtual line segments.
After the VLD feature description of each virtual line segment, the virtual line segments are screened once by a contrast factor. The contrast factor k of a virtual line segment is computed by a formula given as an image in the original document. When the image pixel values lie in the interval [0, 255], the contrast factor threshold is set to 30; when k exceeds 30, the corresponding virtual line segment is discarded. The virtual line segments of the master and slave images are screened in this way.
S27, calculating and screening the VLD feature distances between virtual line segments.
Calculate the VLD feature distance τ(l, l') between corresponding virtual line segments l and l' of the master and slave images; τ(l, l') is computed from the two VLD descriptors with an adjustable weight β ∈ [0, 1] set by the user (the formula is given as an image in the original document). When the feature distance τ(l, l') is smaller than a set threshold, the pair is considered a satisfactory virtual line segment match.
After screening, the qualifying virtual line segments l(P_{i1}, P_{j1}) and l'(P'_{i1'}, P'_{j1'}) are obtained, and thereby the master image feature points P_{i1} and P_{j1} and the corresponding slave image feature points P'_{i1'} and P'_{j1'}.
S28, feature matching and screening with the K-connected Virtual Line Descriptor (KVLD): the feature points forming the virtual line segments are screened using KVLD.
Suppose the feature points P_{i1} and P'_{i1'} obtained in S27 each have K other matched feature points in their respective neighborhoods; then P_{i1} and P'_{i1'} are considered a pair of reliable matching points, otherwise they are screened out.
The neighborhood size of the feature points P_{i1} and P'_{i1'} is denoted B, and its value changes dynamically with the feature point density ρ. Assuming that M potential matches exist between the master and slave images, the most appropriate search radius B_K is computed from the minimum feature point density ρ_min, the value K, the image area area(I) and the number of potential matches M (the formula is given as an image in the original document).
Given the search radius B_K, if K or more matched feature points lie within the search radius of a feature point, then P_{i1} and P'_{i1'} are considered reliable matching feature points satisfying the KVLD match.
S3, fine feature screening. All feature points retained by the KVLD screening are further screened with the random sample consensus (RANSAC) fine screening algorithm.
S4, uniform feature distribution and densification, comprising uniform feature distribution over a Delaunay triangulation and Grid-based feature densification.
S41, uniform distribution of Delaunay triangulation features;
using the reliable matching feature points obtained after the screening of step S3 as control points, construct the Delaunay triangulation. The control point area is divided into several arc bands by a series of arcs; during network construction, the third point of a triangle may only be searched for within the current arc band, and after a successful search the construction proceeds to the next arc band, building the network cyclically.
When the triangulation is constructed, the Delaunay triangulation is iteratively optimized by combining dual constraints of triangular unit area and angle, and the method specifically comprises the following steps:
when the area of a triangular unit is larger than the area threshold T_max, the centroid of the triangle is interpolated inside the unit, the interpolated point is taken as the basic matching unit, and the interpolated point is matched by least squares. In the least-squares matching step, let p(x, y) and q(x, y) denote the gray levels of the matching window and of the search window centered on the match at pixel position (x, y); compute the mutual information between the matching window and the search window. If the mutual information is smaller than the mutual information value after the previous iteration, the iteration ends and the interpolated-point matching is complete; otherwise, the iteration continues and the mutual information of the next matching window and search window is computed;
when the area of a triangular unit is smaller than the area threshold T_min, the control points forming the unit are removed. When the smallest interior angle of a triangular unit is below the angle threshold T_θ, the long, narrow unit is optimized by the point-and-line method: the shortest edge is merged into a single point, the control points forming the shortest edge are moved simultaneously to its midpoint, and the midpoint of the shortest edge is selected as a new control point and treated as a new basic matching unit, which is further refined by least squares matching. The triangulation is optimized by iterating these steps several times.
S42, encrypting based on the characteristics of Grid mesh;
to obtain high-quality image correction and mosaic capability, the feature points are locally densified according to a Grid mesh and uniformly distributed with an adaptive non-maximum suppression method. Specifically: when generating the mesh, increase the number of grid nodes on the boundary of the target area, i.e., assign smaller node spacing to points on the boundary according to the comparison between the actual engineering drawing and the observation area, so that the generated mesh is denser at the boundary.
The densified feature points are solved through S41 and S42 so that the positioning accuracy of the densified feature points is better than 0.3 pixel.
The invention has the beneficial effects that:
(1) The method uses the SIFT feature detection algorithm with multi-scale analysis capability, detects image feature points at different scales, addresses the differences in image resolution, rotation and deformation caused by the imaging sensor and imaging view angle, and offers high precision, strong robustness, adaptability to different scales and rotation angles, and high algorithmic efficiency.
(2) The method uses the KVLD technique, which addresses the inability of the SIFT feature detection algorithm to further detect feature points when repetitive textures such as water and desert are present in the image; KVLD strengthens the feature description of images with repetitive texture, large tilt angles, large translations and large deformations, and improves feature matching precision and quality.
(3) Through the Delaunay-triangulation uniform distribution and Grid densification techniques, the feature points are solved and densified so that the feature matching accuracy reaches sub-pixel level with a matching error below 0.3 pixel, overcoming the difficulty that common feature matching algorithms have in completing matching at large side-sway angles (greater than 45°).
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the invention, and those skilled in the art can obtain other drawings from them.
FIG. 1 is a general flow chart of the present invention;
FIG. 2 is a flowchart of the Delaunay triangulation network construction;
fig. 3 is a schematic diagram of a dual-constraint triangulation network optimization.
Detailed Description
The following describes the multi-order feature point-line matching-based heterogeneous image matching method in detail with reference to the accompanying drawings and embodiments.
Remote sensing image data can be association-matched with two technical methods: texture analysis and feature matching. Texture analysis constructs digital features describing texture and uses methods such as the autocorrelation function, the gray-level co-occurrence matrix, gray-level run length and gray-level distribution to compute statistics such as energy, entropy, inverse difference moment and correlation, from which the association matching relationship is found. Texture features are usually expensive to compute, lack multi-scale and localization analysis capability, and have a simple processing flow, so the approach suits association matching analysis of single-texture images. Feature matching arose with the development of image processing and machine vision technology; it has important application value in remote sensing image registration, high-precision correction, and simultaneous localization and mapping (SLAM), has developed rapidly, has a relatively complete pipeline, and offers a variety of techniques for the specific problems in feature detection, precise matching and quality optimization.
Association matching of heterogeneous multi-resolution multi-view image data involves differences in imaging sensor, image resolution, imaging view angle, acquisition time, illumination, occlusion and the like, and is very difficult to analyze and process; a feature matching approach with a controllable pipeline and extensible functions is adopted to deal with such data. Each matching stage is analyzed specifically and the propagated error is effectively controlled, so that overall robustness and usability are achieved, providing a reliable guarantee for applications such as image mosaicking.
The invention discloses a heterogeneous image matching method based on multi-order characteristic point-line matching, and fig. 1 is a general flow chart of the technical scheme of the invention, which comprises the following specific steps:
S1, preliminary feature screening and matching of the heterogeneous images, used to handle the differences in image resolution, rotation and deformation caused by the imaging sensor and imaging view angle, specifically comprising the following steps:
S11, channel check. Judge whether the two heterogeneous images to be matched are single-channel images; if so, go to step S12; otherwise, weight each non-single-channel image into a single-channel image;
S12, resolution check. Judge whether the resolutions of the two heterogeneous images are the same; if so, go to step S13; otherwise, down-sample the higher-resolution image so that its resolution equals that of the lower-resolution image;
S13, multi-scale feature detection. Perform preliminary feature screening on the two heterogeneous images with the Scale Invariant Feature Transform (SIFT) feature detection algorithm, which has multi-scale analysis capability, to obtain the initial feature points of each image and their feature description vectors, completing the preliminary feature screening of the heterogeneous images;
S14, fast feature matching. Match the feature description vectors of the two heterogeneous images quickly with the K-nearest-neighbor or BBF algorithm, and obtain the matched feature points between the two heterogeneous images from the feature vector matching result, realizing fast feature matching of the heterogeneous images.
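The following sketch illustrates steps S13 and S14, assuming OpenCV's SIFT implementation and a brute-force K-nearest-neighbour matcher in place of the fast matcher (BBF is an approximate alternative); the ratio-test value 0.75 is an assumed parameter, not taken from the patent.

```python
import cv2

def coarse_match(gray_a, gray_b, ratio=0.75):
    sift = cv2.SIFT_create()                     # S13: multi-scale SIFT detection
    kp_a, des_a = sift.detectAndCompute(gray_a, None)
    kp_b, des_b = sift.detectAndCompute(gray_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_a, des_b, k=2)  # S14: K-nearest-neighbour match
    good = []
    for pair in pairs:
        # Lowe-style ratio test keeps only distinctive matches.
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp_a, kp_b, good
```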
S2, feature screening optimization. Step S1 completes the preliminary feature screening and matching of the two heterogeneous images with the SIFT feature detection algorithm and obtains the initially matched feature points, which handles factors such as image resolution, rotational deformation and illumination; however, when repetitive textures such as water and desert are present in the image, mismatches remain and further screening is needed. VLD (Virtual Line Descriptor)-based feature description (KVLD) strengthens the description of images with repetitive texture, large tilt angles, large translations and large deformations. Here K refers to the number of VLD feature points matched within a fixed radius around a feature point, hence the name KVLD technique; its advantage is a large improvement in feature matching accuracy, further screening the initially matched feature points and improving precision and quality. The step specifically comprises:
S21, establishing a virtual line segment and performing threshold screening according to the geometric consistency measure.
Select one of the two heterogeneous images processed in step S1 as the master image and the other as the slave image. Select two feature points P_i and P_j from the master image and connect them to form a line segment l(P_i, P_j), which is denoted as a virtual line segment.
Find the feature points P'_{i'} and P'_{j'} in the slave image that match P_i and P_j, forming the matching point pairs m_{i,i'} = (P_i, P'_{i'}) and m_{j,j'} = (P_j, P'_{j'}) between the two heterogeneous images. According to epipolar geometry, the theoretical position Q'_j of P'_{j'} on the slave image is calculated from P_i, P_j and P'_{i'} as:
Q'_j = P'_{i'} + (s(P'_{i'}) / s(P_i)) · R(α(P'_{i'}) − α(P_i)) · vec(P_i P_j),
where s(P) denotes the scale of the feature point P, α(P) the main direction of the feature point P, R(α) a clockwise rotation by the angle α, and vec(P_i P_j) the vector from P_i to P_j; s(P), α(P) and R(α) are computed with the scale, main-direction and rotation-angle formulas of the SIFT feature detection algorithm. The theoretical position Q_j on the master image is calculated symmetrically as:
Q_j = P_i + (s(P_i) / s(P'_{i'})) · R(α(P_i) − α(P'_{i'})) · vec(P'_{i'} P'_{j'}).
Then, using the Euclidean distance or the Manhattan distance as the criterion, calculate the 3 errors among P_i, P_j and Q_j in the master image:
d_{i,j} = dist(P_i, P_j),
t_{i,j} = dist(P_i, Q_j),
e_{i,j} = dist(P_j, Q_j),
where the function dist(x, y) denotes the Euclidean or Manhattan distance between two points x and y. Calculate the corresponding 3 errors among P'_{i'}, P'_{j'} and Q'_j in the slave image:
d'_{i',j'} = dist(P'_{i'}, P'_{j'}),
t'_{i',j'} = dist(P'_{i'}, Q'_j),
e'_{i',j'} = dist(P'_{j'}, Q'_j).
Then calculate the geometric consistency measure χ(m_{i,i'}, m_{j,j'}) between the matching point pairs m_{i,i'} = (P_i, P'_{i'}) and m_{j,j'} = (P_j, P'_{j'}):
χ(m_{i,i'}, m_{j,j'}) = min(η_{i,i',j,j'}, η_{j,j',i,i'}),
where η_{i,i',j,j'} is computed from the distances d_{i,j}, t_{i,j}, e_{i,j}, d'_{i',j'}, t'_{i',j'} and e'_{i',j'} (the formula is given as an image in the original document). When χ(m_{i,i'}, m_{j,j'}) < χ_max, the matching point pairs m_{i,i'} and m_{j,j'} are considered geometrically consistent, where χ_max is the threshold. In practice, χ_max = 0.5 is a good threshold. In this way, unsatisfactory virtual line segments l(P_i, P_j) can be screened out.
Based on this, in the master image a virtual line segment is constructed for every two feature points, giving (N−1)×(N−1) virtual line segments in total, where N is the number of feature points in the master image. The constructed virtual line segments are then threshold-screened with the geometric consistency measure, keeping only the virtual line segments that satisfy it; the corresponding feature points in the slave image are found from the feature points of the retained virtual line segments, and the virtual line segments of the slave image are constructed from those feature points.
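A minimal sketch of the geometric consistency screening of step S21, assuming NumPy. The theoretical positions follow the similarity-transform construction described above; since the patent's η formula appears only as an image, the normalized prediction error used below is an assumed surrogate, combined with the stated χ_max = 0.5 threshold.

```python
import numpy as np

def rot_cw(alpha: float) -> np.ndarray:
    # Clockwise rotation matrix R(alpha) in image coordinates.
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, s], [-s, c]])

def predict(p_a, p_b, anchor, s_a, s_anchor, alpha_a, alpha_anchor):
    # Q = anchor + (s(anchor)/s(p_a)) * R(alpha(anchor)-alpha(p_a)) @ (p_b - p_a)
    return anchor + (s_anchor / s_a) * rot_cw(alpha_anchor - alpha_a) @ (p_b - p_a)

def geometrically_consistent(p_i, p_j, q_i, q_j,
                             s_pi, s_qi, alpha_pi, alpha_qi, chi_max=0.5):
    """chi(m_ii', m_jj') = min(eta, eta'); eta is approximated here by the
    prediction error normalised by the segment length (an assumption, since
    the patent's eta formula is given only as an image)."""
    p_i, p_j, q_i, q_j = (np.asarray(v, float) for v in (p_i, p_j, q_i, q_j))
    qj_pred = predict(p_i, p_j, q_i, s_pi, s_qi, alpha_pi, alpha_qi)  # Q'_j
    pj_pred = predict(q_i, q_j, p_i, s_qi, s_pi, alpha_qi, alpha_pi)  # Q_j
    eta_s = np.linalg.norm(qj_pred - q_j) / max(np.linalg.norm(q_j - q_i), 1e-9)
    eta_m = np.linalg.norm(pj_pred - p_j) / max(np.linalg.norm(p_j - p_i), 1e-9)
    return min(eta_s, eta_m) < chi_max
```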
S22, an inner point circle covering the virtual line segment is constructed.
Through the calculation of step S21, the virtual line segments in the master and slave images that satisfy the geometric consistency measure are obtained; inner point circles covering each virtual line segment are then constructed. For a virtual line segment l(P_i, P_j) in the master image, let d be the length of l(P_i, P_j). Arrange U inner point circles uniformly along l(P_i, P_j), each denoted D_u, where u is the index of the inner point circle and ranges from 1 to U. The center of each inner point circle lies on the segment formed by the feature points P_i and P_j; the radius of the inner point circles and the center coordinate of D_u are given by formulas shown as images in the original document.
In actual testing, the optimal value of U is 10.
In this way, the inner point circles covering all qualifying virtual line segments in the master and slave images are constructed.
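A sketch of the inner point circle layout of step S22, assuming the U = 10 circles sit at the midpoints of U equal sub-segments of l(P_i, P_j) with radius d/U; the patent's radius and centre formulas appear only as images, so this spacing is an assumed reading.

```python
import numpy as np

def inner_point_circles(p_i, p_j, U=10):
    """Return U circle centres uniformly spaced on segment P_iP_j and one
    common radius (assumed d/U, giving overlapping coverage)."""
    p_i, p_j = np.asarray(p_i, float), np.asarray(p_j, float)
    d = np.linalg.norm(p_j - p_i)
    centres = [p_i + (2 * u - 1) / (2 * U) * (p_j - p_i) for u in range(1, U + 1)]
    return centres, d / U
```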
S23, calculating the gradient histogram statistics of each inner point circle in the virtual line segment.
Calculate the gradient of each pixel in an inner point circle with the pixel-gradient formula of the SIFT feature detection algorithm, then accumulate the gradient values of all points in the circle to obtain the gradient histogram of the inner point circle. Specifically, the histogram uses V statistical bins, and the statistic of the γ-th bin of the u-th inner point circle is denoted (h_{u,γ}), γ ∈ {1, ..., V}, where u indexes the inner point circle and γ the bin. Following the same steps, histogram statistics of the gradient values inside the inner point circles of the slave image give the corresponding statistics (h'_{u,γ}), γ ∈ {1, ..., V}.
Finally, the gradient histograms of all inner point circles in the master and slave images are each normalized to satisfy the normalization constraint (given as an image in the original document).
S24, performing histogram statistics on the directions of all pixels in the inner point circle to obtain the main direction of the inner point circle.
For a single inner point circle D_u, histogram statistics are performed on the directions of all pixels inside it; the pixel direction is computed with the corresponding formula of the SIFT feature detection algorithm. The direction histogram uses W statistical bins, and the w-th bin of the direction histogram of the u-th inner point circle is denoted (O_{u,w}), w ∈ {0, ..., W−1}. Tests show that better results are obtained when W > V; the recommended value is W = 24.
For the direction histogram of the inner point circle D_u, the direction with the largest statistic is w_u = argmax_{w ∈ {0,...,W−1}} O_{u,w}, and w_u is taken as the main direction of the inner point circle. Following the same steps, the main direction w'_u of each inner point circle of the slave image is calculated.
S25, VLD feature description of the virtual line segment. Through the calculations of steps S23 and S24, the inner-point-circle gradient histogram statistics and main directions of a virtual line segment are obtained and used as its Virtual Line Descriptor (VLD).
S26, customizing the virtual line segment contrast and screening the virtual line segments.
After the VLD feature description of each virtual line segment, the virtual line segments are screened once by a contrast factor: because some virtual line segments fall exactly on the edge of an object in the image, a few bins of the gradient histograms in their VLD description become very large and would distort the subsequent VLD feature distance computation, so such segments must be screened out. The contrast factor k of a virtual line segment is computed by a formula given as an image in the original document. When the image pixel values lie in the interval [0, 255], the contrast factor threshold is set to 30; when k exceeds 30, the corresponding virtual line segment is discarded. The virtual line segments of the master and slave images are screened in this way.
S27, calculating and screening the VLD feature distances between virtual line segments.
Through the calculation and screening of S23, S24 and S26, the qualifying virtual line segments l(P_i, P_j) in the master image and their VLD feature descriptions are obtained; likewise, the qualifying virtual line segments l'(P'_{i'}, P'_{j'}) in the slave image and their VLD feature descriptions. Calculate the VLD feature distance τ(l, l') between corresponding virtual line segments of the master and slave images; τ(l, l') is computed from the two VLD descriptors with an adjustable weight β ∈ [0, 1] set by the user (the formula is given as an image in the original document). When the feature distance τ(l, l') is smaller than a set threshold, the pair is considered a satisfactory VLD match; statistics of historical test results show that when τ(l, l') ≤ 0.35 the pair can be regarded as a satisfactory virtual line segment match.
After screening, the qualifying virtual line segments l(P_{i1}, P_{j1}) and l'(P'_{i1'}, P'_{j1'}) are obtained, and thereby the master image feature points P_{i1} and P_{j1} and the corresponding slave image feature points P'_{i1'} and P'_{j1'}.
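A sketch of the VLD distance screening of step S27. Since the τ(l, l') formula appears only as an image, the β-weighted combination of per-circle histogram distance and main-direction difference below is an assumed form; only β ∈ [0, 1] and the acceptance threshold τ(l, l') ≤ 0.35 come from the text.

```python
import numpy as np

def vld_distance(hists_a, dirs_a, hists_b, dirs_b, beta=0.5, W=24):
    """Assumed form of tau(l, l') between the VLD descriptors of two
    corresponding virtual line segments (lists of per-circle histograms
    and main-direction bin indices)."""
    hist_term = np.mean([0.5 * np.abs(np.asarray(ha) - np.asarray(hb)).sum()
                         for ha, hb in zip(hists_a, hists_b)])
    # Circular difference of direction bins, normalised to [0, 1].
    dir_term = np.mean([min(abs(da - db), W - abs(da - db)) / (W / 2)
                        for da, db in zip(dirs_a, dirs_b)])
    return beta * hist_term + (1 - beta) * dir_term

# A pair is kept when vld_distance(...) <= 0.35.
```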
S28, feature matching and screening with the K-connected Virtual Line Descriptor (KVLD): the feature points forming the virtual line segments are screened using KVLD.
Suppose the feature points P_{i1} and P'_{i1'} obtained in S27 each have K other matched feature points in their respective neighborhoods; then P_{i1} and P'_{i1'} are considered a pair of reliable matching points, otherwise they are screened out.
The neighborhood size of the feature points P_{i1} and P'_{i1'} is denoted B, and its value changes dynamically with the feature point density ρ. Assuming that M potential matches exist between the master and slave images, the most appropriate search radius B_K is computed from the minimum feature point density ρ_min, the value K, the image area area(I) and the number of potential matches M (the formula is given as an image in the original document).
Given the search radius B_K, if K or more matched feature points lie within the search radius of a feature point, then P_{i1} and P'_{i1'} are considered reliable matching feature points satisfying the KVLD match. Here, the tested optimal value of ρ_min is 30%, the optimal value of K is 3, and M can be set by the user according to the actual situation.
S3, fine feature screening. All feature points retained by the KVLD screening are further screened with the RANSAC fine screening algorithm.
Owing to the imaging environment and equipment complexity, images suffer from noise and deformation, and many errors remain in the fast matching result obtained from vector distances alone. The random sample consensus (RANSAC) algorithm is an effective method for removing the influence of noise and estimating a model: RANSAC estimates the model parameters with as few points as possible and then expands the influence range of the obtained parameters as far as possible, which further eliminates mismatched points.
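A sketch of the RANSAC fine screening of step S3, assuming OpenCV. The patent does not name the geometric model being estimated; a homography with a 3-pixel reprojection threshold is used here as one common choice.

```python
import cv2
import numpy as np

def ransac_screen(kp_a, kp_b, matches, thresh_px=3.0):
    if len(matches) < 4:                      # findHomography needs >= 4 pairs
        return []
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, thresh_px)
    if H is None:
        return []
    return [m for m, keep in zip(matches, mask.ravel()) if keep]
```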
S4, uniform feature distribution and densification, comprising uniform feature distribution over a Delaunay triangulation and Grid-based feature densification.
S41, uniform distribution of Delaunay triangulation features;
combining the Delaunay triangulation with the thin-plate spline (TPS) function yields a high-precision correction that can be used for local processing such as road misalignment; the specific process is shown in fig. 2.
Using the reliable matching feature points obtained after the screening of step S3 as control points, construct the Delaunay triangulation. The control point area is divided into several arc bands by a series of arcs; during network construction, the third point of a triangle may only be searched for within the current arc band, and after a successful search the construction proceeds to the next arc band, building the network cyclically.
Because a remote sensing image observes a wide area of the ground, the matching results tend to be relatively dense in regions with large terrain relief or abrupt building elevations and relatively sparse in plain regions, so the sizes of the triangular units in the constructed triangulation are unevenly distributed and cannot meet subsequent requirements such as registration accuracy. Therefore, when the triangulation is constructed, the Delaunay triangulation is iteratively optimized under the dual constraints of triangular unit area and angle; the process is shown in fig. 3.
The area constraint for the triangle unit specifically includes:
when the area of a triangular unit is larger than the area threshold T_max, the centroid of the triangle is interpolated inside the unit, the interpolated point is taken as the basic matching unit, and the interpolated point is matched by least squares. In the least-squares matching step, let p(x, y) and q(x, y) denote the gray levels of the matching window and of the search window centered on the match at pixel position (x, y); compute the mutual information between the matching window and the search window. If the mutual information is smaller than the mutual information value after the previous iteration, the iteration ends and the interpolated-point matching is complete; otherwise, the iteration continues and the mutual information of the next matching window and search window is computed;
when the area of a triangular unit is smaller than the area threshold T_min, the control points forming the unit are removed, because registering very small units individually makes the algorithm time-consuming. The angle constraint addresses triangular units whose smallest angle is too small and which therefore form long, narrow triangles: when the smallest interior angle of a unit is below the angle threshold T_θ, the long, narrow unit is optimized by the point-and-line method, i.e., the shortest edge is merged into a single point, the control points forming the shortest edge are moved simultaneously to its midpoint, and the midpoint is selected as a new control point and treated as a new basic matching unit, which is further refined by least squares matching. The triangulation is optimized by iterating these steps several times.
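The sketch below applies one pass of the dual area/angle constraints of step S41 to a Delaunay triangulation, assuming SciPy; the iterative least-squares and mutual-information refinement described above is left out.

```python
import numpy as np
from scipy.spatial import Delaunay

def classify_triangles(points, t_max, t_min, t_theta):
    """Return triangles to densify (area > T_max), to thin (area < T_min),
    and to reshape (smallest angle < T_theta)."""
    tri = Delaunay(np.asarray(points, float))
    densify, thin, narrow = [], [], []
    for simplex in tri.simplices:
        a, b, c = tri.points[simplex]
        u, v = b - a, c - a
        area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
        # Smallest interior angle (opposite the shortest side), law of cosines.
        s1, s2, s3 = sorted([np.linalg.norm(b - c), np.linalg.norm(a - c),
                             np.linalg.norm(a - b)])
        min_angle = np.arccos(np.clip((s2**2 + s3**2 - s1**2) / (2 * s2 * s3), -1, 1))
        if area > t_max:
            densify.append(simplex)    # interpolate the centroid and re-match
        elif area < t_min:
            thin.append(simplex)       # remove its control points
        elif min_angle < t_theta:
            narrow.append(simplex)     # merge the shortest edge to its midpoint
    return densify, thin, narrow
```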
S42, encrypting based on the characteristics of Grid mesh;
to obtain high-quality image correction and mosaic capability, the feature points are locally densified according to a Grid mesh and uniformly distributed with an adaptive non-maximum suppression method, improving the spatial distribution of the feature points across the image and thereby avoiding the geometric processing problems (such as in image correction) caused by overly sparse or overly dense points.
Specifically, this is done by densifying the mesh nodes on the boundary. Because the mesh density inside the computation domain is controlled and propagated from the boundary, the mesh of a given computation area is densified by densifying the boundary around the area to be densified, as follows:
when generating the mesh, increase the number of grid nodes on the boundary of the target area, i.e., assign smaller node spacing to points on the boundary according to the comparison between the actual engineering drawing and the observation area, so that the generated mesh is denser at the boundary.
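A sketch of the Grid-based densification idea of step S42: per-cell non-maximum suppression with smaller cells near the area boundary, so that the retained points are denser there. The cell sizes and the boundary margin are assumed parameters.

```python
def grid_select(points, scores, width, height,
                cell=64, boundary_cell=32, margin=128):
    """Keep the strongest point per grid cell, using finer cells near the
    image boundary (assumed stand-in for adaptive non-maximum suppression)."""
    best = {}
    for (x, y), s in zip(points, scores):
        near_edge = min(x, y, width - x, height - y) < margin
        step = boundary_cell if near_edge else cell
        key = (step, int(x // step), int(y // step))
        if key not in best or s > best[key][1]:
            best[key] = ((x, y), s)
    return [p for p, _ in best.values()]
```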
The densified feature points are solved through S41 and S42 so that the positioning accuracy of the densified feature points is better than 0.3 pixel.
While certain exemplary embodiments of the present invention have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that the described embodiments may be modified in various different ways without departing from the spirit and scope of the invention. Accordingly, the drawings and description are illustrative in nature and should not be construed as limiting the scope of the invention.

Claims (4)

1. A heterogeneous image matching method based on multi-order characteristic point-line matching is characterized by comprising the following specific steps:
S1, preliminary screening and matching of the features of the heterogeneous images;
S2, feature screening optimization;
S3, fine feature screening, namely further screening all feature points retained by the KVLD screening with the random sample consensus fine screening algorithm;
and S4, uniform feature distribution and densification, comprising uniform feature distribution over the Delaunay triangulation and Grid-based feature densification.
2. The method according to claim 1, wherein the step S1 comprises:
S11, channel check: judging whether the two heterogeneous images to be matched are single-channel images; if so, going to step S12, and otherwise weighting each non-single-channel image into a single-channel image;
S12, resolution check: judging whether the resolutions of the two heterogeneous images are the same; if so, going to step S13, and otherwise down-sampling the higher-resolution image so that its resolution equals that of the lower-resolution image;
S13, multi-scale feature detection: performing preliminary feature screening on the two heterogeneous images with the scale invariant feature transform feature detection algorithm, which has multi-scale analysis capability, to obtain the initial feature points of each image and their feature description vectors, completing the preliminary feature screening of the heterogeneous images;
S14, fast feature matching: quickly matching the feature description vectors of the two heterogeneous images with the K-nearest-neighbor or BBF algorithm, and obtaining the matched feature points between the two heterogeneous images from the feature vector matching result, realizing fast feature matching of the heterogeneous images.
3. The method according to claim 1, wherein the step S2 comprises:
S21, establishing virtual line segments and performing threshold screening according to the geometric consistency measure;
selecting one of the two heterogeneous images processed in step S1 as the master image and the other as the slave image; selecting two feature points P_i and P_j from the master image and connecting them to form a line segment l(P_i, P_j), denoted as a virtual line segment;
finding the feature points P'_{i'} and P'_{j'} in the slave image that match P_i and P_j, forming the matching point pairs m_{i,i'} = (P_i, P'_{i'}) and m_{j,j'} = (P_j, P'_{j'}) between the two heterogeneous images; according to epipolar geometry, calculating from P_i, P_j and P'_{i'} the theoretical position Q'_j of P'_{j'} on the slave image:
Q'_j = P'_{i'} + (s(P'_{i'}) / s(P_i)) · R(α(P'_{i'}) − α(P_i)) · vec(P_i P_j),
wherein s(P) denotes the scale of the feature point P, α(P) the main direction of the feature point P, and R(α) a clockwise rotation by the angle α; s(P), α(P) and R(α) are computed with the scale, main-direction and rotation-angle formulas of the feature points in the SIFT feature detection algorithm, and vec(P_i P_j) denotes the vector from P_i to P_j; calculating the theoretical position Q_j on the master image:
Q_j = P_i + (s(P_i) / s(P'_{i'})) · R(α(P_i) − α(P'_{i'})) · vec(P'_{i'} P'_{j'});
then, using the Euclidean distance or the Manhattan distance as the criterion, calculating the 3 errors among P_i, P_j and Q_j in the master image:
d_{i,j} = dist(P_i, P_j),
t_{i,j} = dist(P_i, Q_j),
e_{i,j} = dist(P_j, Q_j),
wherein the function dist(x1, x2) denotes the Euclidean or Manhattan distance between two points x1 and x2; calculating the 3 errors among P'_{i'}, P'_{j'} and Q'_j in the slave image:
d'_{i',j'} = dist(P'_{i'}, P'_{j'}),
t'_{i',j'} = dist(P'_{i'}, Q'_j),
e'_{i',j'} = dist(P'_{j'}, Q'_j);
then calculating the geometric consistency measure χ(m_{i,i'}, m_{j,j'}) between the matching point pairs m_{i,i'} = (P_i, P'_{i'}) and m_{j,j'} = (P_j, P'_{j'}):
χ(m_{i,i'}, m_{j,j'}) = min(η_{i,i',j,j'}, η_{j,j',i,i'}),
wherein η_{i,i',j,j'} is computed from the distances d_{i,j}, t_{i,j}, e_{i,j}, d'_{i',j'}, t'_{i',j'} and e'_{i',j'} (formula given as an image in the original document); when χ(m_{i,i'}, m_{j,j'}) < χ_max, the matching point pairs m_{i,i'} and m_{j,j'} are considered geometrically consistent, χ_max being the geometric consistency measure threshold;
in the master image, constructing a virtual line segment for every two feature points, giving (N−1)×(N−1) virtual line segments in total, N being the number of feature points in the master image; then threshold-screening the constructed virtual line segments with the geometric consistency measure, keeping only the virtual line segments that satisfy it; then finding the corresponding feature points in the slave image from the feature points of the retained virtual line segments, and constructing the virtual line segments of the slave image from those feature points;
S22, constructing inner point circles covering the virtual line segments;
obtaining, through the calculation of step S21, the virtual line segments in the master and slave images that satisfy the geometric consistency measure, and then constructing inner point circles covering each virtual line segment; for a virtual line segment l(P_i, P_j) in the master image, letting d be the length of l(P_i, P_j) and arranging U inner point circles uniformly along l(P_i, P_j), each denoted D_u, u being the index of the inner point circle and ranging from 1 to U; the center of each inner point circle lies on the segment formed by the feature points P_i and P_j, and the radius of the inner point circles and the center coordinate of D_u are given by formulas shown as images in the original document;
constructing, in this way, the inner point circles covering all qualifying virtual line segments in the master and slave images;
S23, calculating the gradient histogram statistics of each inner point circle in the virtual line segment;
calculating the gradient of each pixel point in the inner point circle by using a calculation formula of the gradient of the pixel points in the SIFT feature detection algorithm, and then counting the gradient value of each point in the inner point circle to obtain a gradient histogram of the inner point circle; using V statistical intervals when histogram statistics is performed on the gradient value of each point in the inner point circleAnd the statistical result of the gamma statistical interval in the histogram of the u-th inner point circle is recorded as (h)u,γ)γ∈{1,...,V}U represents the u-th inner point circle, and gamma represents the gamma-th statistical interval; according to the steps, histogram statistics is carried out on the gradient value of each point in the point circle of the slave image, and the corresponding statistical result (h ') is obtained through calculation'u,γ)γ∈{1,...,V}
finally, the gradient histogram statistics of all inner point circles in the master image and the slave image are respectively normalized so that

$\sum_{\gamma=1}^{V} h_{u,\gamma} = 1$;
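The per-circle statistics can be sketched as follows; the finite-difference gradient stands in for the SIFT pixel-gradient formula, and the unit-sum normalization matches the reconstruction above (the claimed constraint is given only as an equation image):

```python
import numpy as np

def circle_gradient_histogram(img, center, radius, V=8):
    """Gradient histogram of one inner point circle (sketch).

    Gradients follow the SIFT finite-difference form
    m(x,y) = sqrt((I(x+1,y)-I(x-1,y))^2 + (I(x,y+1)-I(x,y-1))^2);
    the V-bin histogram is normalized to unit sum (assumed reading of the
    normalization constraint)."""
    img = np.asarray(img, float)
    cy, cx = center
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mask = (ys - cy) ** 2 + (xs - cx) ** 2 <= radius ** 2
    # clip the 1-pixel border so the central differences stay in bounds
    mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[1:-1, 1:-1] = img[1:-1, 2:] - img[1:-1, :-2]
    gy[1:-1, 1:-1] = img[2:, 1:-1] - img[:-2, 1:-1]
    mag = np.sqrt(gx ** 2 + gy ** 2)[mask]
    hist, _ = np.histogram(mag, bins=V)
    return hist / max(hist.sum(), 1)   # (h_{u,1}, ..., h_{u,V})
```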
S24, performing histogram statistics on the directions of all pixels in the inner point circle to obtain the main direction of the inner point circle;

for a single inner point circle D_u, histogram statistics are performed on the directions of all pixels inside it; the pixel direction is computed with the corresponding formula of the SIFT feature detection algorithm; the pixel directions are quantized into W statistical intervals, and the w-th statistical interval of the direction histogram of the u-th inner point circle is recorded as O_{u,w}, w ∈ {0, ..., W−1};

for the direction histogram of inner point circle D_u, the direction with the largest statistical value is

$w_u = \underset{w \in \{0,\dots,W-1\}}{\arg\max}\, O_{u,w}$,

and w_u is taken as the main direction of the inner point circle; following the same steps, the main directions w′_u of the inner point circles of the slave image are calculated;
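A corresponding sketch of the main-direction computation (gx, gy and the circle mask are as in the previous sketch):

```python
import numpy as np

def circle_main_direction(gx, gy, mask, W=36):
    """Main direction of one inner point circle (sketch).

    gx, gy : per-pixel gradient components; the SIFT orientation formula is
             theta(x, y) = atan2(gy, gx).  mask selects the circle's pixels.
    The claim takes the bin with the largest count: w_u = argmax_w O_{u,w}."""
    theta = np.arctan2(gy[mask], gx[mask]) % (2 * np.pi)
    O_u, _ = np.histogram(theta, bins=W, range=(0.0, 2 * np.pi))
    return int(np.argmax(O_u))   # main direction index in {0, ..., W-1}
```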
S25, performing the VLD feature description of the virtual line segment; the inner point circle gradient histogram statistics and the inner point circle main directions obtained in steps S23 and S24 together serve as the virtual line segment descriptor (VLD) of the virtual line segment;
S26, defining the virtual line segment contrast and screening the virtual line segments;

after the VLD feature description of each virtual line segment is completed, the virtual line segments are screened for the first time according to a contrast factor k, calculated as [equation image FDA0002868721530000043 in the original];

when the image pixel values lie in the interval [0, 255], the contrast factor threshold is set to 30, and a virtual line segment whose k exceeds 30 is discarded; the virtual line segments of the master image and the slave image are screened in this way;
S27, calculating the VLD feature distances between virtual line segments and screening them;

the VLD feature distance τ(l, l′) between corresponding virtual line segments l of the master image and l′ of the slave image is calculated as [equation image FDA0002868721530000051 in the original], in which β ∈ [0, 1] is an adjustable weight set by the user; when the feature distance τ(l, l′) is smaller than a given threshold, the pair is considered a qualifying virtual line segment match;

after screening, the qualifying virtual line segments l(P_{i1}, P_{j1}) and l′(P′_{i1′}, P′_{j1′}) are obtained, and from them the main image feature points P_{i1} and P_{j1} and the corresponding slave image feature points P′_{i1′} and P′_{j1′};
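Since the closed form of τ is available only as an equation image, the following sketch uses one plausible β-weighted combination of the per-circle histogram distances and main-direction differences; the exact form, the bin distance, and the threshold name tau_max are assumptions, not the claimed formula:

```python
import numpy as np

def vld_distance(hists, dirs, hists_p, dirs_p, beta=0.5, W=36):
    """Plausible VLD feature distance tau(l, l') between two virtual line segments.

    hists, hists_p : (U, V) normalized gradient histograms of the inner point circles
    dirs, dirs_p   : length-U main-direction bin indices in {0, ..., W-1}
    beta in [0, 1] weights the histogram term against the direction term."""
    hists, hists_p = np.asarray(hists, float), np.asarray(hists_p, float)
    U = hists.shape[0]
    hist_term = np.abs(hists - hists_p).sum() / U          # mean L1 histogram distance
    dd = np.abs(np.asarray(dirs) - np.asarray(dirs_p))
    dir_term = np.minimum(dd, W - dd).mean() / (W / 2)     # circular bin distance in [0, 1]
    return beta * hist_term + (1 - beta) * dir_term

# a pair (l, l') is kept when vld_distance(...) < tau_max, a user-chosen threshold
```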
S28, matching and screening features with the K-connected virtual line segment descriptors, i.e., screening the feature points that form the virtual line segments with KVLD;

suppose the feature points P_{i1} and P′_{i1′} are obtained in S27; if K other feature points exist in their respective neighborhoods, P_{i1} and P′_{i1′} are considered a pair of reliable matching points; otherwise the pair is screened out;

the neighborhood size of the feature points P_{i1} and P′_{i1′} is recorded as B, and its value changes dynamically with the feature point density ρ; assuming that M potential matches exist between the main image and the slave image, the most appropriate search radius B_K is calculated from the minimum feature point density ρ_min, the value of K, the image size area(I), and the number of potential matches M [equation image FDA0002868721530000052 in the original];

given the search radius B_K, if K or more feature points lie within the search radius of the feature point, P_{i1} and P′_{i1′} are considered reliable matching feature points that satisfy the KVLD matching.
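The screening can be sketched as follows; B_K is taken as a given parameter (its closed form is only an equation image), and counting matched neighbors in both images is an assumed reading of "K other feature points in the respective neighborhoods":

```python
import numpy as np

def kvld_keep(i, matches, pts, pts_p, B_K, K=3):
    """Keep match i if at least K other matches fall inside the search
    radius B_K around both of its endpoints (master and slave image).

    matches      : list of (idx_master, idx_slave) index pairs
    pts, pts_p   : (N, 2) arrays of feature coordinates in the two images"""
    mi, mi_p = matches[i]
    support = 0
    for j, (mj, mj_p) in enumerate(matches):
        if j == i:
            continue
        near = np.linalg.norm(pts[mj] - pts[mi]) <= B_K
        near_p = np.linalg.norm(pts_p[mj_p] - pts_p[mi_p]) <= B_K
        if near and near_p:
            support += 1
    return support >= K
```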
4. The method according to claim 1, wherein step S4 comprises:
S41, uniform distribution of Delaunay triangulation features;

the reliable matching feature points obtained after the screening of step S3 are used as control points to construct a Delaunay triangulation; the control point area is divided into several arc bands by a set of arcs, and during network construction the third point of a triangle may be searched for only within the current arc band; after a successful search the next arc band is entered, and the network construction is repeated in this way;
when the triangulation is constructed, the Delaunay triangulation is iteratively optimized under the dual constraints of triangle area and angle, specifically as follows (see the sketch after this list):

when the area of a triangle exceeds the area threshold T_max, the centroid of the triangle is interpolated and the interpolated point is taken as a new basic matching unit, which is then matched by least squares; in the least squares matching step, let p(x, y) and q(x, y) denote the gray levels of the matching window and the search window, centered on the match, at pixel position (x, y); the mutual information of the matching window and the search window is calculated, and if it is smaller than the mutual information value of the previous iteration, the iteration ends and the interpolated point matching is complete; otherwise the iteration continues and the mutual information of the next matching window and search window is calculated;

when the area of a triangle is below the area threshold T_min, the control points forming that triangle are removed; when the smallest angle of a triangle is below the angle threshold T_theta, the long, narrow triangle is optimized by merging its shortest edge into a single point: the control points forming the shortest edge are moved to its midpoint, the midpoint is selected as a new control point and regarded as a new basic matching unit, and this control point is further refined by least squares matching; the triangulation is iteratively optimized in this manner over multiple passes;
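A sketch of one optimization pass under the dual area/angle constraints, using scipy's Delaunay; the least-squares / mutual-information refinement of the new points is omitted, and the thresholds are free parameters:

```python
import numpy as np
from scipy.spatial import Delaunay

def refine_triangulation(points, T_max, T_min, T_theta):
    """One pass of the dual area/angle constraint optimization (sketch).

    points : (N, 2) array of control points.  Triangles with area > T_max
    get their centroid interpolated as a new matching unit; control points
    of triangles with area < T_min are removed; slivers with minimum angle
    < T_theta (radians) have their shortest edge collapsed to its midpoint."""
    tri = Delaunay(points)
    new_pts, drop = [], set()
    for simplex in tri.simplices:
        a, b, c = points[simplex]
        area = 0.5 * abs((b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0]))
        if area > T_max:
            new_pts.append((a + b + c) / 3.0)          # interpolate the centroid
        elif area < T_min:
            drop.update(int(v) for v in simplex)       # remove its control points
        else:
            e = np.array([np.linalg.norm(b-c), np.linalg.norm(a-c), np.linalg.norm(a-b)])
            e = np.maximum(e, 1e-12)
            cosines = [(e[(k+1)%3]**2 + e[(k+2)%3]**2 - e[k]**2)
                       / (2 * e[(k+1)%3] * e[(k+2)%3]) for k in range(3)]
            if min(np.arccos(np.clip(cosines, -1.0, 1.0))) < T_theta:
                k = int(np.argmin(e))                  # smallest angle faces shortest edge
                p, q = int(simplex[(k+1)%3]), int(simplex[(k+2)%3])
                new_pts.append((points[p] + points[q]) / 2.0)  # midpoint replaces the edge
                drop.update((p, q))
    kept = [pt for m, pt in enumerate(points) if m not in drop]
    return np.vstack(kept + new_pts) if (kept or new_pts) else points
```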
S42, Grid-based feature densification;

in order to obtain high-quality image rectification and mosaic capability, the feature points are locally densified on a Grid mesh and evenly distributed by an adaptive non-maximum suppression method; specifically, during grid generation the number of grid points on the boundary of a given region is increased, i.e., the points on the boundary are assigned a smaller node spacing according to the size relation between the actual engineering drawing and the observation area.
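A minimal sketch of the per-cell non-maximum suppression that evens out the distribution; the cell size and points-per-cell are free parameters, and densifying a region boundary amounts to using a smaller cell there:

```python
import numpy as np

def grid_densify(pts, scores, cell=64, per_cell=1):
    """Grid-based feature selection by per-cell non-maximum suppression (sketch).

    pts    : (N, 2) feature coordinates (x, y)
    scores : (N,) detector response strengths
    Keeps the 'per_cell' strongest points in every grid cell and returns the
    sorted indices of the kept points."""
    cells = {}
    for idx, (x, y) in enumerate(pts):
        key = (int(y // cell), int(x // cell))
        cells.setdefault(key, []).append(idx)
    kept = []
    for idxs in cells.values():
        idxs.sort(key=lambda i: scores[i], reverse=True)
        kept.extend(idxs[:per_cell])
    return np.array(sorted(kept))
```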
CN202011591353.5A 2020-12-29 2020-12-29 Heterogeneous image matching method based on multi-order characteristic point-line matching Pending CN112837353A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011591353.5A CN112837353A (en) 2020-12-29 2020-12-29 Heterogeneous image matching method based on multi-order characteristic point-line matching

Publications (1)

Publication Number Publication Date
CN112837353A true CN112837353A (en) 2021-05-25

Family

ID=75925134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011591353.5A Pending CN112837353A (en) 2020-12-29 2020-12-29 Heterogeneous image matching method based on multi-order characteristic point-line matching

Country Status (1)

Country Link
CN (1) CN112837353A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107067370A * 2017-04-12 2017-08-18 Changsha Quandu Image Technology Co., Ltd. An image stitching method based on mesh distortion
CN110009688A * 2019-03-19 2019-07-12 Beijing Institute of Remote Sensing Information An infrared remote sensing image relative radiometric calibration method, system and remote sensing platform

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ZHE LIU et al.: "Virtual Line Descriptor and Semi-Local Matching Method for Reliable Feature Correspondence", British Machine Vision Conference, page 2 *
JIANG San et al.: "Robust image matching method under the constraint of a Delaunay triangulation", Acta Geodaetica et Cartographica Sinica, page 1 *
ZHU Hong et al.: "Small-facet remote sensing image registration algorithm under Delaunay triangulation optimization", Journal of Signal Processing, page 2 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113624231A (en) * 2021-07-12 2021-11-09 北京自动化控制设备研究所 Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
CN113624231B (en) * 2021-07-12 2023-09-12 北京自动化控制设备研究所 Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
CN116740374A (en) * 2022-10-31 2023-09-12 荣耀终端有限公司 Repeated texture recognition method and device

Similar Documents

Publication Publication Date Title
CN104574347B Satellite in-orbit image geometric positioning accuracy evaluation method based on multi-source remote sensing data
CN110660023B (en) Video stitching method based on image semantic segmentation
CN112446327B (en) Remote sensing image target detection method based on non-anchor frame
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN104599258B An image stitching method based on an anisotropic feature descriptor
CN109816706B (en) Smooth constraint and triangulation equal-proportion image pair dense matching method
CN103729654A Image matching retrieval system based on an improved Scale Invariant Feature Transform (SIFT) algorithm
CN111553939B (en) Image registration algorithm of multi-view camera
CN105261014A (en) Multi-sensor remote sensing image matching method
CN103034982A (en) Image super-resolution rebuilding method based on variable focal length video sequence
CN112837353A (en) Heterogeneous image matching method based on multi-order characteristic point-line matching
Hu et al. Efficient and automatic plane detection approach for 3-D rock mass point clouds
CN112652020B (en) Visual SLAM method based on AdaLAM algorithm
JP2012103758A (en) Local feature amount calculation device and method therefor, and corresponding point search device and method therefor
CN107240130A (en) Remote Sensing Image Matching method, apparatus and system
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
Zhang et al. GPU-accelerated large-size VHR images registration via coarse-to-fine matching
CN112614167A (en) Rock slice image alignment method combining single-polarization and orthogonal-polarization images
Zhu et al. Local readjustment for high-resolution 3d reconstruction
CN110390338B (en) SAR high-precision matching method based on nonlinear guided filtering and ratio gradient
Chen et al. Hierarchical line segment matching for wide-baseline images via exploiting viewpoint robust local structure and geometric constraints
CN111127353A (en) High-dynamic image ghost removing method based on block registration and matching
CN114399547B (en) Monocular SLAM robust initialization method based on multiframe
CN113066015B (en) Multi-mode remote sensing image rotation difference correction method based on neural network
CN102609928B (en) Visual variance positioning based image mosaic method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination