CN110135455B - Image matching method, device and computer readable storage medium - Google Patents

Image matching method, device and computer readable storage medium

Info

Publication number: CN110135455B (grant); earlier publication: CN110135455A
Application number: CN201910274078.5A
Authority: CN (China)
Prior art keywords: image, matching, epipolar, point, image matching
Legal status: Active (granted)
Inventors: 王义文 (Wang Yiwen), 王健宗 (Wang Jianzong)
Assignee (original and current): Ping An Technology Shenzhen Co Ltd
Priority: CN201910274078.5A; PCT/CN2019/102187 (published as WO2020206903A1)
Other languages: Chinese (zh)
Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to the technical field of artificial intelligence and discloses an image matching method comprising the following steps: generating imaging maps from the scene images shot by an aerial camera, and performing primary image matching on the imaging maps using a scale-invariant feature transform method to generate a primary image matching set; generating epipolar images based on the primary image matching set, calculating the degree of overlap between the epipolar images, completing secondary image matching, and generating a secondary image matching set; and, based on the secondary image matching set, establishing dense matching of all pixel points between images, generating a tertiary image matching set, and performing three-dimensional reconstruction to obtain a reconstructed scene image. The invention also provides an image matching device and a computer-readable storage medium. The invention provides a novel image matching scheme for dense aerial stereoscopic scenes and improves image matching efficiency.

Description

Image matching method, device and computer readable storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to an image matching method, an image matching device, and a computer readable storage medium.
Background
Image matching refers to the process of identifying homonymous (corresponding) points between two or more images by means of a matching algorithm. It is an important early step in image fusion, target identification, target change detection, computer vision, and similar problems, and is widely applied in remote sensing, digital photogrammetry, computer vision, graphics, military applications, and many other fields. The most common image matching methods at present are: judging the differing points of the images by visual inspection; comparing the gray values of all pixels in the target area, following the principle that an image is in essence composed of pixels; or, based on the principle of template matching, finding the identical or most similar position of a sub-image in the target image and the search image.
These methods are practical in certain fields, but they strain when applied to matching dense stereoscopic scenes under aerial photography. Because aerial images are enormous in number and the objects in each picture are dense, visual inspection by human eyes is overwhelming and impractical. When pixel matching computes difference values over many image pixels, it is affected by noise, quantization error, small illumination changes, small translations, and the like, which produce large pixel differences and degrade the matching effect. The template matching method, applied to dense images, must generate a large number of matching templates to search within the images, so its timeliness is mediocre, while image noise makes the probability of mismatching very high. In summary, the dense scenes under aerial photography are complex and changeable; three-dimensional reconstruction of such scenes would more effectively assist users in analysis and research, but the above methods lack this capability.
Disclosure of Invention
The invention provides an image matching method, an image matching device and a computer readable storage medium, and mainly aims to provide a novel image matching method applied to aerial dense stereoscopic scenes, so that image matching efficiency is improved.
In order to achieve the above object, the present invention provides an image matching method, including:
generating imaging maps from the scene images shot by the aerial camera, and performing primary image matching on the imaging maps using a scale-invariant feature transform method to generate a primary image matching set;
generating epipolar images based on the primary image matching set, calculating the degree of overlap between the epipolar images, completing secondary image matching, and generating a secondary image matching set;
and, based on the secondary image matching set, establishing dense matching of all pixel points between the images, generating a tertiary image matching set, and performing three-dimensional reconstruction to obtain a reconstructed scene image.
Optionally, generating imaging maps from the scene images shot by the aerial camera includes:
according to the parameters recorded when the aerial camera shoots the scene images, including the low-precision position, the attitude information, and the approximate elevation of the survey area, restoring the overlapping scene images shot by the aerial camera to their respective positions using the model formula of object imaging under aerial photography, and generating n imaging maps, where the model formula of object imaging is:
sm = KR[I | -C]M,
where s is a scale factor, m is the image-point coordinate, M is the object-point coordinate, K is the intrinsic parameter matrix of the aerial camera, R is a rotation matrix, C is the projection-center position vector, I is the third-order (3×3) identity matrix, and [I | -C] is the 3×4 matrix formed by concatenating I with -C.
Optionally, performing primary image matching on the imaging maps using the scale-invariant feature transform method to generate a primary image matching set includes:
converting the image set formed by the n imaging maps into a corresponding undirected-graph edge set E;
in the undirected-graph edge set E, performing image matching with the scale-invariant feature transform algorithm; during the matching, for an image pair (I_i, I_j), if the number of matching points between the two images I_i and I_j is smaller than a threshold N_1, removing (I_i, I_j) from the set E, and if the number of matching points between I_i and I_j is greater than the threshold N_1, retaining the imaging-map pair; the n_1E retained image pairs constitute the primary image matching set.
Optionally, generating epipolar images based on the primary image matching set, calculating the degree of overlap between the epipolar images, completing the secondary image matching, and generating a secondary image matching set includes:
(a) extracting point features from the n_1E image pairs with the scale-invariant feature transform algorithm to obtain uniformly distributed, high-precision homonymous points, and then estimating the fundamental matrix with a random sample consensus (RANSAC) strategy to obtain the fundamental matrix of each image pair;
(b) determining the homonymous epipolar lines corresponding to each group of homonymous points using the fundamental matrix;
(c) according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates of each image pair by the least squares method, generating a fast epipolar-line mapping between the images from the epipole coordinates, resampling along the epipolar-line direction with bilinear interpolation to complete epipolar-image production and matching, and generating n_2E image pairs, which constitute the secondary image matching set.
Optionally, step (c) includes:
based on the fundamental matrix, constructing the rotation matrix and the projection matrices, the rotation matrix being decomposed along the x, y, and z axes into r_x, r_y, and r_z, and obtaining the projection matrices of the left and right cameras,
where K_left and K_right are the camera parameter matrices of the left and right cameras, and t_left and t_right are the components of the left and right camera parameters along the x, y, and z axes;
according to the principle that the epipolar lines intersect at the epipole, determining the epipole coordinates (x_p, y_p) from the computed projection matrices;
deriving the central projection according to the collinearity condition equation, and calculating the angle between two adjacent epipolar lines when sampling the epipolar lines, so as to determine each epipolar line on the central projection image; using the generated fundamental matrix, the epipolar lines corresponding to each image pair can be determined, and the epipolar line of each point is determined from the epipole coordinates of each image, completing the epipolar-line correspondence within the same image pair; the epipolar line is the line passing through the computed epipole (x_p, y_p) and the reference point (x_base, y_base) of the central projection image, and the epipolar equation of the other image of the pair is obtained in the same way;
based on the epipolar equations, after the epipolar-line mapping is established for each image pair, generating the epipolar images according to the resampling rule of the bilinear interpolation method and calculating their degree of overlap; epipolar-image pairs whose overlap is smaller than a threshold N_2 are discarded, and the remaining n_2E image pairs form the secondary image matching set.
Optionally, establishing dense matching of all pixels between images based on the secondary image matching set includes:
on the basis of the secondary image matching set E_2, extracting the corner points of each image pair with the corner detection algorithm of the smallest uni-value segment assimilating nucleus to form a matching point set of corner points, and establishing dense matching of all pixels between the images in combination with the epipolar geometric constraint and a dynamic programming algorithm.
Optionally, establishing dense matching of all pixels between the images based on the secondary image matching set and generating a tertiary image matching set includes:
performing Gaussian smoothing on an input image, traversing each pixel point in the image, and judging with the Sobel operator whether the pixel is an edge point; if it is an edge point, judging whether the path loss L_r(p, d) is minimized according to the principle of minimizing the loss function of the global energy equation, where C(p, d) is the local loss of the path, L_r(p-r, ·) is the loss accumulated along the path so far, and min_k L_r(p-r, k) is the minimum one-step loss on the path; from this it can be judged whether the point is a corner point, and if two detected corner points are adjacent, the one with the larger L_r(p, d) is removed, thereby detecting the corner points of the image pairs in the secondary image matching set;
for each point in the corner-point set of one image of a pair, searching the corresponding search area of the other image for matching corner points; the intersection of the two matched point sets is called the initial matching point set K_1; within the initial matching point set K_1, searching for matching points in the corresponding search areas, calculating the similarity between each point and every candidate matching point in its search area, and selecting the candidate with the highest similarity as the matching point, obtaining the corner matching point set K_2;
using the corner matching point set K_2, obtaining the epipolar-line correspondence of the images according to the epipolar geometric constraint and generating the epipolar correspondence set K_3; segmenting the epipolar lines in K_3 by gray level, dividing each epipolar line into several gray segments, the generated gray-segment set being K_4; establishing the correspondence between gray segments with a dynamic programming algorithm, and establishing the correspondence between the pixel points of corresponding gray segments with a linear interpolation method, thereby realizing dense matching of all pixel points between the images and obtaining the tertiary image matching set.
Optionally, performing the three-dimensional reconstruction to obtain the reconstructed scene image includes:
after dense matching, calculating the depth of field of the scene using the correspondence existing among all pixel points in the tertiary image matching set, reconstructing the scene with preset 3Dmax software, and restoring the three-dimensional geometric information of the scene space to obtain the reconstructed scene image.
In addition, in order to achieve the above object, the present invention also provides an image matching device, which includes a memory and a processor, wherein the memory stores an image matching program that can be executed on the processor, and the image matching program implements the steps of the image matching method when executed by the processor.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium having stored thereon an image matching program executable by one or more processors to implement the steps of the image matching method as described above.
The image matching method, device, and computer-readable storage medium provided by the invention generate imaging maps from the scene images shot by an aerial camera; perform primary image matching on the imaging maps with the scale-invariant feature transform method to generate a primary image matching set; generate epipolar images based on the primary image matching set, calculate the degree of overlap between the epipolar images, complete secondary image matching, and generate a secondary image matching set; and, based on the secondary image matching set, establish dense matching of all pixels between images, generate a tertiary image matching set, and perform three-dimensional reconstruction to obtain a reconstructed scene image. The invention improves image matching efficiency and enables three-dimensional reconstruction of images of dense scenes under aerial photography, thereby helping users carry out analysis and research more effectively.
Drawings
FIG. 1 is a flowchart of an image matching method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating an internal structure of an image matching apparatus according to an embodiment of the present invention;
fig. 3 is a schematic block diagram of the program modules of the image matching program in an image matching apparatus according to an embodiment of the invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims of this application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments described herein may be implemented in other sequences than those illustrated or otherwise described herein. Furthermore, the descriptions of "first," "second," etc. are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature.
Further, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
In addition, the technical solutions of the embodiments may be combined with each other, provided that the combination can be realized by those skilled in the art; when a combination of technical solutions is contradictory or cannot be realized, the combination should be considered absent and outside the scope of protection claimed in the present invention.
The invention provides an image matching method. Referring to fig. 1, a flowchart of an image matching method according to an embodiment of the invention is shown. The method may be performed by an apparatus, which may be implemented in software and/or hardware.
In this embodiment, the image matching method includes:
s10, generating an image imaging image according to a scene image shot by the aerial photography instrument, and performing primary image matching on the image imaging image by using a scale-invariant feature transformation method to generate a primary image matching set.
The scene images shot by aerial cameras such as unmanned aerial vehicles, helicopters, and other flight-control systems are large in number and wide in viewing angle, and the buildings they capture are especially numerous and dense. Therefore, the invention first restores the overlapping image sets to their positions to reconstruct the imaging maps of the objects.
According to the preferred embodiment of the invention, according to the parameters recorded when the aerial camera shoots the required scene images, including the low-precision position and attitude information and the approximate elevation of the survey area, the overlapping scene images are restored to their respective positions using the model formula of object imaging under aerial photography, generating the imaging maps.
In a preferred embodiment of the present invention, the model formula of the object imaging is as follows:
sm = KR[I | -C]M,
where s is a scale factor; m is the image-point coordinate and M the object-point coordinate (the object point and the image point are respectively the object-space position and the image-space position in optical imaging); K is the intrinsic parameter matrix of the aerial camera, composed of the focal length and the principal-point coordinates; R is the rotation matrix, which can be approximated from the yaw, pitch, and roll recorded by the aerial camera's system; C is the projection-center position vector, which can be approximated directly from the longitude, latitude, and altitude recorded by the aerial camera's GPS; I is the third-order (3×3) identity matrix, and [I | -C] is the 3×4 matrix formed by concatenating I with -C.
Using this model formula of object imaging, the imaging maps of the n images are obtained.
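For illustration only (this sketch is not part of the patent text), the imaging model can be evaluated as follows; the Euler-angle composition used to build R and all function and variable names are assumptions:

```python
import numpy as np

def rotation_from_ypr(yaw, pitch, roll):
    """Approximate rotation matrix R from the recorded yaw/pitch/roll (radians).

    The Z-Y-X composition order is an assumption; the actual convention
    depends on the aerial camera's attitude system."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

def project(K, R, C, M):
    """sm = K R [I | -C] M: project object point M (3,) to image coordinates."""
    M_h = np.append(M, 1.0)                               # homogeneous object point
    P = K @ R @ np.hstack([np.eye(3), -C.reshape(3, 1)])  # 3x4 projection matrix
    sm = P @ M_h
    return sm[:2] / sm[2]                                 # divide out the scale factor s
```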
The n generated imaging maps are collectively called the image set, which is converted into a corresponding undirected graph G_n = (V_n, E_n), where V_n is called the vertex set and E_n the edge set (a graph is a widely used data structure in which nodes are called vertices, and the relationship between two vertices can be represented by an unordered pair called an edge). In the undirected-graph edge set E_n, the number of edges is denoted nE; each edge represents an image pair, so E_n represents nE image pairs, and the subsequent image matching process need only be performed between these nE image pairs. If the relationships among the images were ignored and an exhaustive traversal strategy were adopted for image matching, the total number of matches would be n(n-1)/2, which is typically much greater than nE. Constructing the image-relationship undirected graph therefore limits the range of image matching and avoids blind matching, reducing the total computational complexity of image matching from O(n^2) to O(n) and improving matching efficiency; at the same time, it effectively eliminates the interference of unassociated image pairs and fundamentally avoids the mismatches produced by non-overlapping images, improving matching accuracy and reconstruction robustness.
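Purely as an illustrative sketch (not the patented procedure itself), the edge set could be built as below; the may_overlap predicate, which would typically be derived from the recorded camera positions, is an assumption:

```python
import itertools

def build_edge_set(n_images, may_overlap):
    """Build the undirected-graph edge set E_n: one edge per candidate image pair.

    may_overlap(i, j) is an assumed predicate (e.g. based on recorded GPS
    positions) that rules out pairs with no common coverage, keeping the
    number of edges nE far below the exhaustive n(n-1)/2."""
    return [(i, j) for i, j in itertools.combinations(range(n_images), 2)
            if may_overlap(i, j)]
```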
In the undirected-graph edge set E, the Scale-Invariant Feature Transform (SIFT) algorithm is adopted for image matching. During the matching, if the matching points between two images I_i and I_j are few, numbering less than the threshold N_1, this indicates little overlap or weak correlation, and (I_i, I_j) is removed from the set E. If the number of matching points between I_i and I_j is greater than the threshold N_1, the imaging-map pair is retained. The retained pairs, n_1E in total, form the primary image matching set E_1.
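A minimal sketch of this SIFT matching and filtering step, assuming OpenCV is available; the Lowe ratio test and the value of N_1 are assumptions not fixed by the patent:

```python
import cv2

N_1 = 30  # assumed threshold on the number of matching points

def sift_matches(img_a, img_b):
    """Find SIFT matches between two imaging maps using a ratio test."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    knn = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    return [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < 0.75 * n.distance]

# Retain only pairs whose match count exceeds N_1 to form E_1:
# E_1 = [(i, j) for (i, j) in E if len(sift_matches(imgs[i], imgs[j])) > N_1]
```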
S20, generating epipolar images based on the primary image matching set, calculating the degree of overlap between the epipolar images, completing secondary image matching, and generating a secondary image matching set.
Step S10 only filters out the images with no or little repetition; for the images with a certain degree of repetition, the invention continues the matching and filtering with the epipolar-image method.
The epipolar image is a means of reducing the search range during matching from the two-dimensional plane of the imaging map to a one-dimensional straight line. Specifically, in dense stereoscopic photography, the plane formed by the photographic baseline and any ground point is called an epipolar plane, and the intersection line of the epipolar plane with the image plane is called an epipolar line. On a stereo pair, homonymous image points lie on homonymous epipolar lines, and the image points on homonymous epipolar lines correspond one to one. Thus, if the homonymous epipolar-line pairs can be determined on a stereo pair, then by this property the two-dimensional search and matching over the images can be converted into search and matching along the epipolar lines. The epipolar image eliminates the vertical parallax between the stereoscopic images, narrows the search range, reduces the matching computation, and improves matching accuracy, and is therefore of great significance for dense stereoscopic image matching.
The preferred embodiment of the invention discloses an epipolar-image production and matching method, used to generate the epipolar images and calculate the degree of overlap between them, comprising: (a) extracting point features from the n_1E image pairs with the SIFT algorithm to obtain uniformly distributed, high-precision homonymous points, and then estimating the fundamental matrices of the n_1E image pairs with a random sample consensus (RANSAC) strategy; (b) determining the homonymous epipolar lines corresponding to each group of homonymous points with the fundamental matrix; (c) according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates of each image pair by the least squares method, generating a fast epipolar-line mapping between the images from the epipole coordinates, resampling along the epipolar-line direction with bilinear interpolation to complete epipolar-image production and matching, and generating n_2E image pairs to form the secondary image matching set.
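As an assumed sketch of the fundamental-matrix estimation in step (a), using OpenCV's RANSAC-based estimator; the pixel threshold and confidence values are assumptions:

```python
import cv2
import numpy as np

def estimate_fundamental(pts_a, pts_b):
    """Estimate the fundamental matrix of one image pair with RANSAC.

    pts_a, pts_b: (N, 2) arrays of homonymous points from the SIFT step."""
    F, inlier_mask = cv2.findFundamentalMat(
        np.asarray(pts_a, dtype=np.float64),
        np.asarray(pts_b, dtype=np.float64),
        cv2.FM_RANSAC,
        ransacReprojThreshold=1.0,  # assumed pixel threshold
        confidence=0.99,            # assumed RANSAC confidence
    )
    return F, inlier_mask
```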
The epipolar-image production method based on the fundamental matrix avoids problems such as iterative computation and initial-value selection in solving the relative orientation, and retains good solving precision even when the aerial photography uses a wide viewing angle. The specific steps of step (c) are as follows:
(1) Determining the epipole coordinates:
Based on the fundamental matrix, the rotation matrix and the projection matrices are constructed; the rotation matrix is decomposed along the x, y, and z axes into r_x, r_y, and r_z, and the projection matrices of the left and right cameras are obtained, where K_left and K_right are the camera parameter matrices of the left and right cameras, and t_left and t_right are the components of the left and right camera parameters along the x, y, and z axes.
Then, according to the principle that the epipolar lines intersect at the epipole, the epipole coordinates (x_p, y_p) are determined from the computed projection matrices.
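The patent determines the epipole by least squares from the projection-matrix results; an equivalent and common computation, shown here only as an assumed illustration, takes the epipole as the least-squares null vector of the fundamental matrix, since every epipolar line passes through it:

```python
import numpy as np

def epipole_from_F(F):
    """Least-squares epipole: the unit vector e minimizing ||F e||, i.e. the
    right singular vector of F for its smallest singular value (F e ~ 0
    expresses that all epipolar lines intersect at the epipole)."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    return e[:2] / e[2]  # inhomogeneous epipole coordinates (x_p, y_p)
```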
(2) Performing epipolar line mapping:
The purpose of the mapping is to perform epipolar-image correction directly on the central projection image. The specific steps of the mapping are: derive the central projection according to the collinearity condition equation, and calculate the angle between two adjacent epipolar lines when sampling the epipolar lines, so as to determine every epipolar line on the central projection image; using the generated fundamental matrix, the epipolar lines corresponding to each image pair can be determined, and the epipolar line of each point is determined from the epipole coordinates of the image, completing the epipolar-line correspondence within the same image pair; the epipolar line is the line passing through the computed epipole (x_p, y_p) and the reference point (x_base, y_base) of the central projection image, and the epipolar equation of the other image of the pair is obtained in the same way.
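For illustration (an assumed sketch; the patent's own epipolar equation appears only as an image in the source), the epipolar line in one image corresponding to a pixel of the other can be read directly off the fundamental matrix:

```python
import numpy as np

def epipolar_line(F, x, y):
    """Epipolar line in the right image for pixel (x, y) of the left image:
    l' = F [x, y, 1]^T, returned as coefficients (a, b, c) of
    a*x' + b*y' + c = 0, normalized so that (a, b) is a unit normal."""
    l = F @ np.array([x, y, 1.0])
    return l / np.hypot(l[0], l[1])
```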
(3) Generating the secondary image matching set:
Based on the epipolar equations, after the epipolar-line mapping is established for each image pair, the epipolar images are generated according to the resampling rule of the bilinear interpolation method and their degree of overlap is calculated; the epipolar-image pairs whose overlap is smaller than the threshold N_2 are discarded, and the remaining n_2E image pairs form the secondary image matching set E_2.
Although the generated secondary image matching set E_2 solves most of the stereo-matching similarity problems and reaches the image matching standard, dense stereoscopic environments such as battlefield surveillance and disaster search-and-rescue require the geometric structure of three-dimensional object surfaces to be recovered from the two-dimensional images, so further processing is needed and the following S30 is executed.
S30, based on the secondary image matching set, establishing dense matching of all pixel points between images and realizing three-dimensional reconstruction.
The preferred embodiment of the invention adopts a dense matching algorithm: on the basis of the secondary image matching set E_2, the corner points of each image pair are extracted with the corner detection algorithm of the smallest uni-value segment assimilating nucleus to form matching point sets of corner points, and dense matching of all pixels between the images is established in combination with epipolar geometric constraints, a dynamic programming algorithm, and related methods. The specific steps are as follows:
(I) Detecting the corner points of the images in the secondary image matching set E_2.
A corner point is the point of greatest local curvature change on an image contour line and carries important information of the image, so it is of great significance for the detailed matching of the images in the set E_2. The preferred embodiment of the invention detects image corners with the smallest uni-value segment assimilating nucleus (SUSAN) method: perform Gaussian smoothing on the input image; traverse every pixel in the image and judge with the Sobel operator (a discrete first-order difference operator used to approximate the first-order gradient of the image brightness function) whether it is an edge point; if it is an edge point, judge whether the corner's path loss L_r(p, d) is minimized according to the principle of minimizing the loss function of the global energy equation, where C(p, d) is the local loss of the path, L_r(p-r, ·) is the loss accumulated along the path so far, and min_k L_r(p-r, k) is the minimum one-step loss on the path; from this it can be judged whether the point is a corner point, and redundant corner points are removed; further, if two detected corner points are adjacent, the one with the larger L_r(p, d) is removed. Through these steps, the corner points of the image pairs in the secondary image matching set E_2 are detected.
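A minimal sketch of the smoothing-and-edge-gating stage, assuming OpenCV; the kernel size, sigma, and gradient threshold are assumptions, and the subsequent path-loss corner decision (whose exact formula appears only as an image in the source) is not reproduced:

```python
import cv2
import numpy as np

def edge_point_mask(img, grad_thresh=50.0):
    """Gaussian smoothing followed by Sobel gradient magnitude; pixels whose
    magnitude exceeds the (assumed) threshold are treated as edge points and
    handed to the path-loss corner test."""
    smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.0)
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
    return np.hypot(gx, gy) > grad_thresh
```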
(II) Automatically matching the corner points of the image pairs to obtain the matching point set.
Automatic corner matching can effectively distinguish the differences between image pairs according to the differences of their corner points, and is an effective means of achieving accurate matching. It can be divided into the following steps: (1) for each point in the corner-point set of one image of a pair, search the corresponding search area of the other image for matching corner points; likewise, for each point in the corner-point set of the other image, search the first image with the same method for its corresponding matching points; the intersection of the two matching point sets is designated the initial matching point set K_1. (2) Within the initial matching point set K_1, search the corresponding search areas for matching points, calculate the similarity between each point and every candidate matching point in its search area, and select the candidate with the highest similarity as the matching point. In the preferred embodiment of the invention, the similarity is computed with the gradient-magnitude method: if one pixel has gradient g and the gradient of the pixel matched to it approximately follows a normal distribution, the similarity l_g of the two pixels is expressed in terms of a density function d(x) and a density coefficient k. The matching point set K_2 is obtained through this similarity calculation.
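The source renders the similarity formula only as an image; the sketch below is therefore an assumption that instantiates d(x) as a Gaussian density on the gradient difference, with k as the stated density coefficient:

```python
import numpy as np

def gradient_similarity(g, g_candidate, k=1.0, sigma=1.0):
    """Assumed form of the gradient-magnitude similarity l_g = k * d(g - g'):
    candidates whose gradient is closest to g score highest under a normal
    density d(x). k and sigma are assumed parameters."""
    d = np.exp(-((g - g_candidate) ** 2) / (2.0 * sigma ** 2))
    return k * d
```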
(III) Establishing dense matching of all pixels between images according to the matching point set K_2.
In a preferred embodiment of the present invention, the dense matching includes: (1) for the matching point set K_2, obtain the epipolar-line correspondence of the image pairs according to the epipolar geometric constraint relation, yielding the epipolar correspondence set K_3. The epipolar geometric constraint relation means that if l and l' are two corresponding epipolar lines in the left and right images, then the point corresponding to a point p on epipolar line l of the left image must lie on epipolar line l' of the right image. (2) From the generated correspondence set K_3, segment the epipolar lines in K_3 by gray level, dividing each epipolar line into several gray segments within which the gray values of the pixel points are similar. The gray segmentation divides consecutive points whose gray values lie within a certain range into segments, where I(x_t, y_t) is the gray value of pixel (x_t, y_t), w is the number of pixel points on a gray segment, i.e., the length of the gray segment, and T is a threshold: the smaller T is, the fewer pixel points are assigned to a given gray segment and the more gray segments there are; experiments show that the matching effect is best when T is 3. The generated gray-segment set is K_4. (3) Use a dynamic programming algorithm (an optimization method that searches for the best matching path) to establish the correspondence between gray segments, and use a linear interpolation method to establish the correspondence between the pixel points of corresponding gray segments, thereby realizing dense matching of all pixel points between images and obtaining the tertiary image matching set E_3.
After dense matching, correspondences exist among all the pixel points in the tertiary image matching set E_3, so the depth of field of the scene can be calculated; the scene is then reconstructed with preset 3Dmax software, restoring the three-dimensional geometric information of the scene space and obtaining the reconstructed image.
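The depth computation itself is not spelled out in the source; as an assumed illustration, the standard stereo relation between disparity and depth reads:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    """Classic stereo depth Z = f * B / d: the dense pixel correspondences of
    the tertiary matching set E_3 give a disparity d for every pixel, while
    the focal length and baseline come from the aerial camera parameters
    (both assumed known). Zero disparities are masked out."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity != 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```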
The invention also provides an image matching device. Referring to fig. 2, an internal structure of an image matching apparatus according to an embodiment of the invention is shown.
In this embodiment, the image matching apparatus 1 may be a PC (Personal Computer), or a terminal device such as a smart phone, a tablet computer, or a portable computer. The image matching apparatus 1 comprises at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium including flash memory, a hard disk, a multimedia card, a card memory (e.g., SD or DX memory, etc.), a magnetic memory, a magnetic disk, an optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the image matching device 1, for example a hard disk of the image matching device 1. The memory 11 may also be an external storage device of the image matching apparatus 1 in other embodiments, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the image matching apparatus 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the image matching apparatus 1. The memory 11 may be used not only for storing application software installed in the image matching apparatus 1 and various types of data, for example, codes of the image matching program 01, but also for temporarily storing data that has been output or is to be output.
The processor 12 may be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor or other data processing chip in some embodiments for executing program code or processing data stored in the memory 11, such as executing the image matching program 01, etc.
The communication bus 13 is used to enable connection communication between these components.
The network interface 14 may optionally comprise a standard wired interface, a wireless interface (e.g. WI-FI interface), typically used to establish a communication connection between the apparatus 1 and other electronic devices.
Optionally, the device 1 may further comprise a user interface, which may comprise a Display (Display), an input unit such as a Keyboard (Keyboard), and a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch, or the like. The display may also be referred to as a display screen or a display unit, as appropriate, for displaying information processed in the image matching device 1 and for displaying a visual user interface.
Fig. 2 shows only the image matching apparatus 1 with the components 11-14 and the image matching program 01. Those skilled in the art will understand that the structure shown in fig. 2 does not constitute a limitation of the image matching apparatus 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
In the embodiment of the apparatus 1 shown in fig. 2, the memory 11 stores an image matching program 01, and the processor 12 executes the image matching program 01 stored in the memory 11 to realize the steps of the image matching method described in the foregoing embodiment, namely: generating imaging maps from the scene images shot by the aerial camera and performing primary image matching with the scale-invariant feature transform method to generate the primary image matching set (S10); generating epipolar images based on the primary image matching set, calculating the degree of overlap between them, and completing secondary image matching to generate the secondary image matching set (S20); and, based on the secondary image matching set, establishing dense matching of all pixel points between images and realizing three-dimensional reconstruction (S30). The specific implementation of each step is the same as in the method embodiment above and is not repeated here.
Optionally, in other embodiments, the image matching program may be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to carry out the present invention; a module here refers to a series of computer-program instruction segments capable of performing specific functions, used to describe the execution of the image matching program in the image matching apparatus.
For example, referring to fig. 3, a program module diagram of the image matching program in an embodiment of the image matching apparatus of the present invention is shown. In this embodiment, the image matching program may be divided into a primary matching module 10, a secondary matching module 20, a tertiary matching module 30 and a reconstruction module 40. Illustratively:
The primary matching module 10 is configured to: generate imaging images according to the scene images shot by the aerial camera, and perform primary image matching on the imaging images by using the scale-invariant feature transform method to generate a primary image matching set.
The secondary matching module 20 is configured to: generate epipolar images based on the primary image matching set, calculate the degree of overlap between the epipolar images, complete secondary image matching, and generate a secondary image matching set.
The tertiary matching module 30 is configured to: establish dense matching of all pixel points between images based on the secondary image matching set, and generate a third image matching set.
The reconstruction module 40 is configured to: perform three-dimensional reconstruction according to the third image matching set to obtain the reconstructed scene image.
The functions or operation steps implemented when the program modules such as the primary matching module 10, the secondary matching module 20, the tertiary matching module 30, and the reconstruction module 40 are executed are substantially the same as those of the foregoing embodiments, and will not be described herein.
In addition, an embodiment of the present invention further provides a computer readable storage medium, where an image matching program is stored, and the image matching program may be executed by one or more processors to implement the following operations:
generating imaging images according to the scene images shot by the aerial camera, and performing primary image matching on the imaging images by using the scale-invariant feature transform method to generate a primary image matching set;
generating epipolar line images based on the primary image matching set, calculating the overlapping degree between the epipolar line images, completing secondary image matching, and generating a secondary image matching set;
and based on the second image matching set, establishing dense matching of all pixel points among the images, generating a third image matching set, and executing three-dimensional reconstruction to obtain a reconstructed scene image.
The computer-readable storage medium of the present invention is substantially the same as the above-described embodiments of the image matching apparatus and method, and will not be described in detail herein.
It should be noted that the foregoing serial numbers of the embodiments of the present invention are merely for description and do not represent the relative merits of the embodiments. The terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, apparatus, article or method that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, apparatus, article or method. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, apparatus, article or method that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or by hardware, though in many cases the former is preferred. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, including instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods of the embodiments of the present invention.
The foregoing description covers only the preferred embodiments of the present invention and does not limit its patent scope; any equivalent structural or process transformation made using the contents of this specification and drawings, applied directly or indirectly in other related technical fields, is likewise included in the patent protection scope of the present invention.

Claims (7)

1. An image matching method, comprising:
according to the parameters recorded when the aerial photographing instrument shoots the scene images, including the low-precision position, the attitude information and the approximate elevation of the area, restoring the respective positions of the overlapping scene images shot by the aerial photographing instrument by using the model formula of object imaging under aerial photography, and generating the imaging images, wherein the model formula of object imaging is:

s·m = K·R·[I | −C]·M

wherein s is a scale factor, m is the image point coordinate, M is the object point coordinate, K is the internal parameter matrix of the aerial photographing instrument, R is the rotation matrix, C is the position vector of the projection center, and I is the 3rd-order identity matrix; and converting the image set formed by the imaging images into a corresponding undirected-graph edge set;
in the undirected-graph edge set, performing image matching by using the scale-invariant feature transform algorithm; in the image matching, for each imaging-image pair, if the number of matching points of the two images is smaller than a threshold value, culling the pair from the undirected-graph edge set, and if the number of matching points of the two images is greater than the threshold value, retaining the imaging-image pair, the retained image pairs generating the primary image matching set;
extracting point features from said image pairs by using the scale-invariant feature transform algorithm to obtain uniformly distributed high-precision homonymous points, estimating the fundamental matrix by using a random sample consensus (RANSAC) strategy, determining the homonymous epipolar lines corresponding to each group of homonymous points by using the fundamental matrix, determining the epipole coordinates of each image pair by the least square method according to the principle that epipolar lines must intersect at the epipole, generating a fast mapping of epipolar lines between images according to the epipole coordinates, resampling along the epipolar-line direction by using the bilinear interpolation method to complete epipolar-image production and matching, regenerating the image pairs, and generating a second image matching set;
and based on the second image matching set, establishing dense matching of all pixel points among the images, generating a third image matching set, and executing three-dimensional reconstruction to obtain a reconstructed scene image.
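For illustration (not part of the claims), a numeric sketch of the imaging model s·m = K·R·[I | −C]·M as reconstructed in claim 1; all numeric values below are assumptions:

```python
import numpy as np

# Illustrative values: K_cam is the in-camera parameter matrix, R a
# nadir-looking attitude, C the projection-centre position vector.
K_cam = np.array([[1000.0, 0.0, 960.0],
                  [0.0, 1000.0, 540.0],
                  [0.0, 0.0, 1.0]])
R = np.diag([1.0, -1.0, -1.0])           # camera looking straight down
C = np.array([100.0, 200.0, 500.0])

def project(M):
    """Image an object point M (3,) via s*m = K R [I | -C] M."""
    P = K_cam @ R @ np.hstack([np.eye(3), -C.reshape(3, 1)])  # 3x4 projection
    m_h = P @ np.append(M, 1.0)          # homogeneous image point
    return m_h[:2] / m_h[2]              # divide out the scale factor s

print(project(np.array([120.0, 210.0, 0.0])))  # -> [1000.  520.]
```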
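Likewise for the match-and-cull step of claim 1, a sketch assuming OpenCV's SIFT with Lowe's ratio test; the ratio and the match-count threshold are illustrative assumptions, not values from the patent:

```python
import cv2

sift = cv2.SIFT_create()
bf = cv2.BFMatcher()

def match_count(img_a, img_b, ratio=0.75):
    """Number of SIFT matches surviving Lowe's ratio test."""
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    good = 0
    for pair in bf.knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good

def primary_matching(edge_set, images, threshold=30):
    """Cull undirected-graph edges whose image pairs have fewer matching
    points than the threshold; the survivors form the primary matching set."""
    return [(i, j) for (i, j) in edge_set
            if match_count(images[i], images[j]) >= threshold]
```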
2. The image matching method as claimed in claim 1, wherein determining the epipole coordinates of each image pair by the least square method according to the principle that epipolar lines must intersect at the epipole, generating the fast mapping of epipolar lines between images according to the epipole coordinates, resampling along the epipolar-line direction by the bilinear interpolation method to complete epipolar-image production and matching, regenerating the image pairs, and generating the second image matching set comprises:
constructing a rotation matrix and projection matrices based on the fundamental matrix, the rotation matrix being decomposed into rotation components about the three coordinate axes, and obtaining the projection matrices of the left and right cameras in the form

Pl = Kl [I | 0],  Pr = Kr [R | t]

wherein Kl and Kr are the camera parameter matrices of the left and right cameras respectively, and t is the translation between the left and right cameras, with one component along each axis;
according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates of the images by the least square method from the calculation result of the projection matrices;
deriving the central projection according to the collinearity condition equation, and calculating the included-angle relation between two adjacent epipolar lines when the epipolar lines are sampled, so as to complete the determination of each epipolar line on the central projection image;
determining the corresponding epipolar lines of each image pair by using the generated fundamental matrix, determining each epipolar line of each epipolar image from the epipole coordinates of the respective images, and completing the epipolar-line correspondence between the same image pair, the epipolar equation being of the two-point form

(y − ye) / (x − xe) = (y0 − ye) / (x0 − xe)

wherein (xe, ye) are the calculated epipole coordinates and (x0, y0) are the reference coordinates of the central projection image;
based on the epipolar equation, each image pair is establishedAfter the epipolar line mapping of (2), generating epipolar line images and calculating the overlapping degree according to the resampling rule of the bilinear interpolation method, and enabling the overlapping degree of the epipolar line images to be smaller than a threshold value +.>Is discarded, generating->And obtaining the second image matching set by using the image pair.
3. The image matching method as set forth in claim 2, wherein said establishing a dense match of all pixels between images based on said second image matching set comprises:
based on the second image matching set, respectively extracting the corner points of the images by using the corner detection algorithm of the smallest univalue segment assimilating nucleus (SUSAN), forming the matching point set of the corner points, and establishing dense matching of all pixel points between the images in combination with the epipolar geometric constraint and a dynamic programming algorithm.
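A simplified sketch of the SUSAN (smallest univalue segment assimilating nucleus) corner response named in claim 3; the mask radius, brightness threshold t and geometric threshold are conventional illustrative values, not taken from the patent:

```python
import numpy as np

def susan_corners(img, radius=3, t=27, g_ratio=0.5):
    """Simplified SUSAN: for each pixel (the nucleus), count neighbours in
    a circular mask whose gray value is within t of the nucleus (the USAN
    area); a small USAN area gives a large corner response."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (xs ** 2 + ys ** 2) <= radius ** 2
    g = g_ratio * mask.sum()                 # geometric threshold
    h, w = img.shape
    response = np.zeros((h, w))
    f = img.astype(float)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            patch = f[y - radius:y + radius + 1, x - radius:x + radius + 1]
            usan = np.sum((np.abs(patch - f[y, x]) < t) & mask)
            response[y, x] = max(g - usan, 0.0)  # corner strength
    return response
```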
4. The image matching method according to claim 2, wherein said establishing dense matching of all pixel points between images based on said second image matching set to generate a third image matching set comprises:
performing Gaussian smoothing on the input image, traversing each pixel point in the image, and judging whether the pixel point is an edge point by using the Sobel operator; if the pixel point is an edge point, judging whether the path loss is minimized according to the principle of minimizing the loss function of the global energy equation, the loss comprising the local loss of the path, the loss of the preceding portion of the path, and the minimum one-step loss on the path, whereby it can be determined whether the point is a corner point; if two detected corner points are adjacent, retaining the corner point with the larger response, thereby detecting the corner points of the image pairs in the second image matching set;
for each point in the corner set of one image of a pair, searching the corresponding search area of the other image for matching corner points, and taking the intersection of the two matching point sets as the initial matching point set; for each corner point in the initial matching point set, searching for matching points in the corresponding search area, calculating the similarity between the point and each candidate matching point in the search area, and selecting the candidate matching point with the highest similarity as the matching point, so as to obtain the matching point set of the corner points;
using the matching point set of the corner points, obtaining the epipolar-line correspondences of the images and generating the epipolar correspondence set; segmenting the epipolar lines in the set according to gray level, dividing each epipolar line into several gray segments and generating the gray-segment set; and establishing the correspondence between gray segments by using a dynamic programming algorithm, and establishing the correspondence between pixel points within corresponding gray segments by using the linear interpolation method, thereby realizing dense matching of all pixel points between the images and obtaining the third image matching set.
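For the edge test of claim 4, a sketch of Gaussian smoothing followed by a Sobel gradient-magnitude threshold, assuming OpenCV; the global-energy loss minimization is only summarized in a comment because its equation is not recoverable from the source:

```python
import numpy as np
import cv2

def edge_points(img, blur_ksize=5, edge_thresh=100.0):
    """Gaussian smoothing, then Sobel gradient magnitude; pixels above the
    threshold are candidate edge points for the subsequent corner test
    (the global-energy loss minimization is applied to these afterwards)."""
    smooth = cv2.GaussianBlur(img, (blur_ksize, blur_ksize), 0)
    gx = cv2.Sobel(smooth, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smooth, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    return mag > edge_thresh, mag
```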
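And a sketch of dynamic-programming alignment of the gray segments of two corresponding epipolar lines, with linear interpolation of pixel correspondences inside matched segments; the per-segment descriptor (mean gray value) and the skip cost are assumptions:

```python
import numpy as np

def align_segments(segs_a, segs_b, skip=50.0):
    """Align two lists of gray segments (one mean gray value per segment)
    by dynamic programming, minimizing total matching cost; returns the
    optimal cost (backtracking D would recover the correspondence)."""
    n, m = len(segs_a), len(segs_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i < n and j < m:                      # match segment i with j
                c = abs(segs_a[i] - segs_b[j])
                D[i + 1, j + 1] = min(D[i + 1, j + 1], D[i, j] + c)
            if i < n:                                # leave segment i unmatched
                D[i + 1, j] = min(D[i + 1, j], D[i, j] + skip)
            if j < m:                                # leave segment j unmatched
                D[i, j + 1] = min(D[i, j + 1], D[i, j] + skip)
    return D[n, m]

def interpolate_pixels(seg_a_len, seg_b_len):
    """Linear interpolation inside a matched pair of segments: pixel t of
    segment a maps to a float index in segment b."""
    return [t * (seg_b_len - 1) / max(seg_a_len - 1, 1)
            for t in range(seg_a_len)]
```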
5. The image matching method according to any one of claims 1 to 4, wherein performing three-dimensional reconstruction to obtain a reconstructed scene image comprises:
after dense matching, calculating the depth of field of the scene by using the correspondences existing among all the pixel points in the third image matching set, reconstructing the scene by using the preset 3Dmax software, and restoring the three-dimensional geometric information of the scene space to obtain the reconstructed scene image.
6. An image matching apparatus, comprising a memory and a processor, wherein the memory has stored thereon an image matching program executable on the processor, the image matching program when executed by the processor performing the steps of the image matching method according to any one of claims 1 to 5.
7. A computer-readable storage medium, having stored thereon an image matching program executable by one or more processors to implement the steps of the image matching method of any of claims 1 to 5.
CN201910274078.5A 2019-04-08 2019-04-08 Image matching method, device and computer readable storage medium Active CN110135455B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910274078.5A CN110135455B (en) 2019-04-08 2019-04-08 Image matching method, device and computer readable storage medium
PCT/CN2019/102187 WO2020206903A1 (en) 2019-04-08 2019-08-23 Image matching method and device, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910274078.5A CN110135455B (en) 2019-04-08 2019-04-08 Image matching method, device and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110135455A CN110135455A (en) 2019-08-16
CN110135455B true CN110135455B (en) 2024-04-12

Family

ID=67569487

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910274078.5A Active CN110135455B (en) 2019-04-08 2019-04-08 Image matching method, device and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110135455B (en)
WO (1) WO2020206903A1 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135455B (en) * 2019-04-08 2024-04-12 平安科技(深圳)有限公司 Image matching method, device and computer readable storage medium
CN111046906B (en) * 2019-10-31 2023-10-31 中国资源卫星应用中心 Reliable encryption matching method and system for planar feature points
CN111160377A (en) * 2020-03-07 2020-05-15 深圳移动互联研究院有限公司 Image acquisition system with key mechanism and evidence-based method thereof
CN112233228B (en) * 2020-10-28 2024-02-20 五邑大学 Unmanned aerial vehicle-based urban three-dimensional reconstruction method, device and storage medium
CN112446951B (en) * 2020-11-06 2024-03-26 杭州易现先进科技有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer storage medium
CN112381864A (en) * 2020-12-08 2021-02-19 兰州交通大学 Multi-source multi-scale high-resolution remote sensing image automatic registration technology based on antipodal geometry
CN112509109A (en) * 2020-12-10 2021-03-16 上海影创信息科技有限公司 Single-view illumination estimation method based on neural network model
CN112866504B (en) * 2021-01-28 2023-06-09 武汉博雅弘拓科技有限公司 Air three encryption method and system
CN113096168B (en) * 2021-03-17 2024-04-02 西安交通大学 Optical remote sensing image registration method and system combining SIFT points and control line pairs
CN113741510A (en) * 2021-07-30 2021-12-03 深圳创动科技有限公司 Routing inspection path planning method and device and storage medium
CN114140575A (en) * 2021-10-21 2022-03-04 北京航空航天大学 Three-dimensional model construction method, device and equipment
CN113963132A (en) * 2021-11-15 2022-01-21 广东电网有限责任公司 Three-dimensional distribution reconstruction method of plasma and related device
CN114332349B (en) * 2021-11-17 2023-11-03 浙江视觉智能创新中心有限公司 Binocular structured light edge reconstruction method, system and storage medium
CN113867410B (en) * 2021-11-17 2023-11-03 武汉大势智慧科技有限公司 Unmanned aerial vehicle aerial photographing data acquisition mode identification method and system
CN115063460B (en) * 2021-12-24 2024-06-25 山东建筑大学 High-precision self-adaptive homonymous pixel interpolation and optimization method
CN114419116B (en) * 2022-01-11 2024-04-09 江苏省测绘研究所 Remote sensing image registration method and system based on network matching
CN114758151B (en) * 2022-03-21 2024-05-24 辽宁工程技术大学 Sequence image dense matching method combining line characteristics and triangular mesh constraint
CN114972536B (en) * 2022-05-26 2023-05-09 中国人民解放军战略支援部队信息工程大学 Positioning and calibrating method for aviation area array swing scanning type camera
CN114742869B (en) * 2022-06-15 2022-08-16 西安交通大学医学院第一附属医院 Brain neurosurgery registration method based on pattern recognition and electronic equipment
CN115661368B (en) * 2022-12-14 2023-04-11 海纳云物联科技有限公司 Image matching method, device, server and storage medium
CN116596844B (en) * 2023-04-06 2024-03-29 北京四维远见信息技术有限公司 Aviation quality inspection method, device, equipment and storage medium
CN116612067B (en) * 2023-04-06 2024-02-23 北京四维远见信息技术有限公司 Method, apparatus, device and computer readable storage medium for checking aviation quality
CN116597184B (en) * 2023-07-11 2023-09-22 中国人民解放军63921部队 Least square image matching method
CN117664087B (en) * 2024-01-31 2024-04-02 中国人民解放军战略支援部队航天工程大学 Method, system and equipment for generating vertical orbit circular scanning type satellite image epipolar line

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015135323A1 (en) * 2014-03-14 2015-09-17 华为技术有限公司 Camera tracking method and device
CN105847750A (en) * 2016-04-13 2016-08-10 中测新图(北京)遥感技术有限责任公司 Geo-coding based unmanned aerial vehicle video image real time presenting method and apparatus
CN107492127A (en) * 2017-09-18 2017-12-19 丁志宇 Light-field camera parameter calibration method, device, storage medium and computer equipment
CN108759788A (en) * 2018-03-19 2018-11-06 深圳飞马机器人科技有限公司 Unmanned plane image positioning and orientation method and unmanned plane

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2460187C2 (en) * 2008-02-01 2012-08-27 Рокстек Аб Transition frame with inbuilt pressing device
CN104751451B (en) * 2015-03-05 2017-07-28 同济大学 Point off density cloud extracting method based on unmanned plane low latitude high resolution image
CN106023086B (en) * 2016-07-06 2019-02-22 中国电子科技集团公司第二十八研究所 A kind of aerial images and geodata joining method based on ORB characteristic matching
CN110135455B (en) * 2019-04-08 2024-04-12 平安科技(深圳)有限公司 Image matching method, device and computer readable storage medium

Also Published As

Publication number Publication date
CN110135455A (en) 2019-08-16
WO2020206903A1 (en) 2020-10-15

Similar Documents

Publication Publication Date Title
CN110135455B (en) Image matching method, device and computer readable storage medium
CN111815757B (en) Large member three-dimensional reconstruction method based on image sequence
CN107705333B (en) Space positioning method and device based on binocular camera
CN108876836B (en) Depth estimation method, device and system and computer readable storage medium
CN109683699B (en) Method and device for realizing augmented reality based on deep learning and mobile terminal
US10225473B2 (en) Threshold determination in a RANSAC algorithm
CN110176032B (en) Three-dimensional reconstruction method and device
CN111401266B (en) Method, equipment, computer equipment and readable storage medium for positioning picture corner points
CN108648194B (en) Three-dimensional target identification segmentation and pose measurement method and device based on CAD model
CN107329962B (en) Image retrieval database generation method, and method and device for enhancing reality
CN110956661B (en) Method for calculating dynamic pose of visible light and infrared camera based on bidirectional homography matrix
EP3274964B1 (en) Automatic connection of images using visual features
CN114119864A (en) Positioning method and device based on three-dimensional reconstruction and point cloud matching
CN114219855A (en) Point cloud normal vector estimation method and device, computer equipment and storage medium
WO2021244161A1 (en) Model generation method and apparatus based on multi-view panoramic image
CN115035235A (en) Three-dimensional reconstruction method and device
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
EP3185212A1 (en) Dynamic particle filter parameterization
CN115393519A (en) Three-dimensional reconstruction method based on infrared and visible light fusion image
CN113822996B (en) Pose estimation method and device for robot, electronic device and storage medium
CN113592015B (en) Method and device for positioning and training feature matching network
CN117132737B (en) Three-dimensional building model construction method, system and equipment
CN113298871A (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
Budianti et al. Background blurring and removal for 3d modelling of cultural heritage objects
CN110135474A (en) A kind of oblique aerial image matching method and system based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant