WO2020206903A1 - Image matching method and device, and computer readable storage medium - Google Patents

Image matching method and device, and computer readable storage medium

Info

Publication number
WO2020206903A1
WO2020206903A1 (PCT/CN2019/102187)
Authority
WO
WIPO (PCT)
Prior art keywords
image
matching
image matching
epipolar
images
Prior art date
Application number
PCT/CN2019/102187
Other languages
French (fr)
Chinese (zh)
Inventor
王义文
王健宗
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020206903A1 publication Critical patent/WO2020206903A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects

Definitions

  • This application relates to the field of computer technology, and in particular to an image matching method, device and computer-readable storage medium.
  • Image matching refers to the process of identifying points with the same name between two or more images through a matching algorithm. It is an important preliminary step in image fusion, target recognition, target change detection, computer vision, and other problems, and it is widely applied in fields such as remote sensing, digital photogrammetry, computer vision, cartography, and military applications. At present, the most common and practical methods for image matching are: judging the differences between images by visual inspection; comparing the gray values of all pixels in the target area, since an image is in essence composed of pixels; or, based on the principle of template matching, finding the position in the search image that is identical or most similar to a sub-image of the target image.
  • the present application provides an image matching method, device, and computer-readable storage medium, the main purpose of which is to provide a new image matching method applied to dense stereo scenes under aerial photography to improve image matching efficiency.
  • an image matching method provided by this application includes:
  • generating an image imaging map according to the scene images taken by an aerial camera, and performing a first image matching on the image imaging map using the scale-invariant feature transform method to generate a first image matching set;
  • based on the first image matching set, generating epipolar images and calculating the degree of overlap between the epipolar images to complete a second image matching and generate a second image matching set;
  • based on the second image matching set, establishing dense matching of all pixels between the images to generate a third image matching set, and performing three-dimensional reconstruction to obtain a reconstructed scene image.
  • the present application also provides an image matching device.
  • the device includes a memory and a processor.
  • the memory stores an image matching program that can run on the processor.
  • when executed by the processor, the image matching program implements the steps of the image matching method described above.
  • the present application also provides a computer-readable storage medium on which an image matching program is stored; the image matching program can be executed by one or more processors to implement the steps of the image matching method described above.
  • the image matching method, device, and computer-readable storage medium proposed in this application generate an image imaging map based on the scene images shot by an aerial camera and use the scale-invariant feature transform method to perform the first image matching on the imaging map, generating the first image matching set. Based on the first image matching set, epipolar images are generated and the degree of overlap between them is calculated to complete the second image matching and generate the second image matching set. Based on the second image matching set, dense matching of all pixels between images is established to generate the third image matching set, and three-dimensional reconstruction is performed to obtain the reconstructed scene image.
  • This application improves the efficiency of image matching, and can perform three-dimensional reconstruction of images of dense scenes under aerial photography, so as to more effectively help users to conduct analysis and research.
  • FIG. 1 is a schematic flowchart of an image matching method provided by an embodiment of this application.
  • FIG. 2 is a schematic diagram of the internal structure of an image matching device provided by an embodiment of the application.
  • Fig. 3 is a schematic diagram of modules of an image matching program in an image matching device provided by an embodiment of the application.
  • This application provides an image matching method.
  • referring to FIG. 1, which is a schematic flowchart of an image matching method provided by an embodiment of this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the image matching method includes:
  • S10. Generate an image imaging map according to the scene images taken by the aerial camera, and perform the first image matching on the imaging map using the scale-invariant feature transform method to generate the first image matching set.
  • scene images taken by aerial equipment such as drones, helicopters, and other flight-control systems are numerous and cover a wide viewing angle; buildings in particular are numerous and densely packed. Therefore, this application first restores the overlapping image sets to their respective positions and reconstructs the imaging map of the objects.
  • in a preferred embodiment of this application, based on the low-precision position and attitude information recorded when the aerial instrument shoots the scene images, together with the approximate height of the survey area, the model formula for object imaging under aerial photography is used to restore the overlapping scene images to their respective positions and generate the image imaging map.
  • the model formula for object imaging is: sm = KR[I −C]M, where:
  • s is the scale factor;
  • m is the image-point coordinate;
  • M is the object-point coordinate (the object point and the image point are, respectively, the object-space and image-space positions in optical imaging);
  • K is the intrinsic parameter matrix of the aerial photography tool, composed of the focal length and the principal-point coordinates;
  • R is a rotation matrix, whose approximate value can be converted from the yaw, pitch, and roll recorded by the aerial tool's system;
  • C is the projection-center position vector, which can be approximated directly from the longitude, latitude, and altitude recorded by the GPS of the aerial photography tool;
  • I is the third-order identity matrix.
  • using the above model formula for object imaging, the imaging maps of the n images can be obtained.
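  • As a minimal sketch of this projection model (the function name, array shapes, and the use of numpy are illustrative assumptions, not part of the patent):

```python
import numpy as np

def project_point(K, R, C, M):
    """Project a 3-D object point M to an image point m via s*m = K R [I | -C] M.

    K: 3x3 intrinsic matrix (focal length and principal point);
    R: 3x3 rotation matrix (from the recorded yaw/pitch/roll);
    C: 3-vector projection centre (from GPS longitude/latitude/altitude);
    M: 3-vector object-point coordinates.
    """
    P = K @ R @ np.hstack([np.eye(3), -C.reshape(3, 1)])  # 3x4 projection matrix
    sm = P @ np.append(M, 1.0)                            # homogeneous image point, scaled by s
    return sm[:2] / sm[2]                                 # divide out the scale factor s
```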
  • the n image imaging maps generated above are collectively called the image set, which is converted into a corresponding undirected-graph edge set E: G n = (V n , E n ), where V n is called the vertex set and E n is called the edge set. (A graph is a widely used data structure; the nodes of a graph are called vertices, and the relationship between two vertices can be represented by a pair, called an edge. If the pairs representing edges are ordered, the graph is called a directed graph; if they are unordered, it is called an undirected graph.)
  • in the undirected-graph edge set E, E n contains nE edges; each edge represents one image pair, so E n represents nE image pairs, and the subsequent image matching process only needs to be performed between these nE image pairs. If the relationship between images is not considered and an exhaustive traversal strategy is used for image matching, the total number of matches is n*(n-1)/2, which is usually much larger than nE.
  • the method of constructing the image-relationship undirected graph limits the scope of image matching, avoids blind image matching, reduces the total computational complexity of image matching from O(n 2 ) to O(n), and improves matching efficiency; at the same time, it effectively eliminates the interference of unrelated image pairs, fundamentally avoids mismatches caused by non-overlapping images, and improves the accuracy of matching and the robustness of reconstruction.
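  • A minimal sketch of building this edge set, assuming a hypothetical overlap predicate derived from the imaging map (neither the function names nor the predicate are specified in the patent):

```python
from itertools import combinations

def build_edge_set(n, overlaps):
    """Build the undirected-graph edge set E for n images.

    `overlaps(i, j)` is an assumed predicate (e.g. imaging-footprint
    intersection); the resulting nE pairs are the only ones matched later,
    instead of all n*(n-1)/2 exhaustive pairs.
    """
    return [(i, j) for i, j in combinations(range(n), 2) if overlaps(i, j)]
```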
  • in the undirected-graph edge set E, the scale-invariant feature transform (SIFT) algorithm is used for image matching.
  • in image matching, if the two images I i and I j have few matching points, fewer than the threshold N 1 , the overlap is small or the correlation is weak, and (I i , I j ) is removed from the set E. If the number of matching points between I i and I j is greater than the threshold N 1 , the image pair is retained; this produces n 1 E image pairs in total and generates the first image matching set E 1 .
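  • A hedged sketch of this filtering step using OpenCV's SIFT implementation; the names E, images, and N1 stand in for the edge set, the image list, and the threshold from the text, and the ratio test is an assumed detail:

```python
import cv2

def sift_match_count(img_i, img_j, ratio=0.75):
    """Count SIFT keypoint matches between two grayscale images (Lowe ratio test)."""
    sift = cv2.SIFT_create()
    _, des_i = sift.detectAndCompute(img_i, None)
    _, des_j = sift.detectAndCompute(img_j, None)
    if des_i is None or des_j is None:
        return 0
    knn = cv2.BFMatcher().knnMatch(des_i, des_j, k=2)
    return sum(1 for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance)

# keep only sufficiently overlapping pairs (threshold N1 as in the text)
E1 = [(i, j) for (i, j) in E if sift_match_count(images[i], images[j]) > N1]
```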
  • S20 Based on the first image matching set, generate an epipolar image and calculate the degree of overlap between the epipolar images, complete the second image matching, and generate a second image matching set.
  • the epipolar image is a method of changing the search range from a two-dimensional plane image to a one-dimensional straight line during the matching process.
  • specifically, in dense stereo aerial photography, the plane formed by the shooting baseline and any ground point is called the epipolar plane, and the intersection line of the epipolar plane with the image plane is called the epipolar line.
  • the image points with the same name must be on the epipolar line of the same name, and the image points on the epipolar line of the same name have a one-to-one correspondence.
  • an epipolar pair with the same name can be determined on a stereo image pair, then using the above-mentioned properties of the epipolar pair with the same name, the search and matching of the two-dimensional image can be transformed into the search and matching along the epipolar line.
  • the epipolar image eliminates the upper and lower parallax between the stereo images, narrows the search range, reduces the amount of matching calculations, and improves the matching accuracy, so it is of great significance for dense stereo image matching.
  • a preferred embodiment of the present application discloses a method for making and matching epipolar images for generating epipolar images and calculating the degree of overlap between the epipolar images.
  • the method includes: (a) using the SIFT algorithm to extract point features from the image pairs in E 1 and, after obtaining uniformly distributed high-precision points with the same name, estimating the fundamental matrix with a RANSAC strategy to obtain the fundamental matrices of the n 1 E image pairs; (b) using the fundamental matrix to determine the epipolar line with the same name corresponding to each group of points with the same name; (c) according to the principle that epipolar lines must intersect at the epipole (core point), using the least-squares method to determine the epipole coordinates of each image pair, generating a fast mapping of the epipolar lines between images from the epipole coordinates, and resampling along the epipolar-line direction with bilinear interpolation to complete epipolar-image production and matching, regenerating n 2 E image pairs in total and producing the second image matching set.
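  • A brief sketch of the fundamental-matrix estimation of step (a) using OpenCV's RANSAC-based estimator (the threshold and confidence values are assumptions); the commented line shows how the same matrix maps points to epipolar lines for step (b):

```python
import cv2
import numpy as np

def fundamental_matrix_ransac(pts_i, pts_j):
    """Estimate the fundamental matrix of an image pair with RANSAC.

    pts_i, pts_j: Nx2 arrays of matched same-name points from the SIFT step.
    Returns F (3x3) and a boolean inlier mask.
    """
    F, mask = cv2.findFundamentalMat(np.float32(pts_i), np.float32(pts_j),
                                     cv2.FM_RANSAC, 1.0, 0.99)
    return F, mask.ravel().astype(bool)

# epipolar line a*x + b*y + c = 0 in image j for each same-name point of image i:
# lines = cv2.computeCorrespondEpilines(pts_i.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
```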
  • the epipolar-image production method based on the fundamental matrix avoids the iterative calculation and initial-value assignment required when solving the relative relationship, and it retains good accuracy when the aerial photography uses a large viewing angle. The specific steps of step (c) are as follows: (1) determine the epipole coordinates:
  • first, a rotation matrix and a projection matrix are constructed, with the rotation matrix decomposed into component rotations about the x, y, and z axes; the explicit matrix expressions are given as formulas in the original publication.
  • the least-squares method is then used to determine the epipole (core point) coordinates (x p , y p ) of the image from the calculation result of the projection matrix.
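  • The patent determines the epipole by least squares from the intersecting epipolar lines; an equivalent and common formulation, sketched here as an assumption, takes the right null vector of the rank-2 fundamental matrix:

```python
import numpy as np

def epipole_from_F(F):
    """Epipole (core point): the common intersection of all epipolar lines.

    Since F has rank 2, solving F e = 0 in the least-squares sense (the SVD
    null vector) gives the homogeneous epipole; returns (x_p, y_p).
    """
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]                      # right null vector of F
    return e[0] / e[2], e[1] / e[2]
```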
  • (2) fast epipolar mapping: the epipolar image is rectified directly on the central-projection image.
  • the specific steps of the mapping are: derive the central projection according to the collinearity condition equation; when sampling the epipolar lines, calculate the angular relationship between two adjacent epipolar lines and determine each epipolar line on the central-projection image; then, using the fundamental matrix generated above, determine the corresponding epipolar lines of each image pair, and from the epipole coordinates of each image determine the epipolar line through a given point, completing the epipolar-line correspondence within the same image pair.
  • the resulting epipolar equation is given as a formula in the original publication, in which (x p , y p ) are the epipole coordinates calculated above and (x base , y base ) are the reference coordinates of the central-projection image; the epipolar equation of the other image of the pair is obtained in the same way.
  • after the epipolar-line mapping is established for each image pair, the epipolar images are generated according to the resampling rule of the bilinear interpolation method and their degree of overlap is calculated. Image pairs whose epipolar-image overlap is less than the threshold N 2 are discarded, leaving n 2 E image pairs in total and yielding the second image matching set E 2 .
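  • The text does not fix the overlap formula; the sketch below assumes an intersection-over-union measure on the valid-pixel masks of the two resampled epipolar images:

```python
import numpy as np

def epipolar_overlap(mask_i, mask_j):
    """Overlap degree of two resampled epipolar images, measured here as the
    intersection-over-union of their valid-pixel masks (an assumed measure)."""
    inter = np.logical_and(mask_i, mask_j).sum()
    union = np.logical_or(mask_i, mask_j).sum()
    return inter / union if union else 0.0

# pairs with epipolar_overlap(...) < N2 are discarded, leaving the set E2
```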
  • the epipolar matching above is still not enough for the two-dimensional images to recover the geometric structure of the three-dimensional object surface, so further processing is required, and the following S30 is executed.
  • S30. Based on the second image matching set, establish dense matching of all pixels between the images, generate the third image matching set, and perform three-dimensional reconstruction to obtain the reconstructed scene image.
  • the preferred embodiment of the present application adopts a dense matching algorithm: on the basis of the second image matching set E 2 , the smallest univalue segment assimilating nucleus (SUSAN) corner detection algorithm is used to extract the corner points of each image of a pair, forming a set of matching corner points; combined with epipolar geometric constraints, dynamic programming algorithms, and other methods, dense matching of all pixels between the images is established. The specific steps are as follows:
  • (a) detect the corner points of the images in the second image matching set E 2 .
  • the corner point is the point where the local curvature changes the most on the contour of the image. It contains important information in the image, so it is of great significance for the detail matching of the image in the image set E 2 .
  • the preferred embodiment of the present application adopts the smallest univalue segment assimilating nucleus (SUSAN) method to detect image corners: Gaussian smoothing is performed on the input image; each pixel in the image is traversed, and the Sobel operator (a discrete first-order difference operator used to approximate the gradient of the image brightness function) is first used to determine whether the pixel is an edge point; if it is an edge point, whether it is a corner point is determined by minimizing the path loss L r (p, d) of the loss function under the global energy equation; the determination principle is given by the formula in the original publication, in which:
  • C(p, d) is the local loss of the path;
  • L r,min (p−r) is the minimum loss at the previous step of the path. From these it can be determined whether the point is a corner point, and redundant corner points are removed; further, if two detected corner points are adjacent, the one with the larger L r (p, d) is removed.
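  • The L r (p, d) recursion itself appears only as a formula image in the publication; the sketch below assumes the standard path-cost aggregation form consistent with the named terms C(p, d) and L r,min (p−r), with assumed penalties P1 and P2:

```python
import numpy as np

def aggregate_path_cost(C, P1=10.0, P2=120.0):
    """One-directional path-loss aggregation L_r(p, d) over a cost volume.

    C has shape (num_positions, num_labels) and holds the local loss C(p, d).
    Each step adds the cheapest continuation from the previous position
    (small label changes penalised by P1, large ones by P2) and subtracts the
    previous minimum so values stay bounded.
    """
    L = np.empty_like(C, dtype=float)
    L[0] = C[0]
    for p in range(1, C.shape[0]):
        prev = L[p - 1]
        min_prev = prev.min()
        step = prev.copy()                                 # same label, no penalty
        step[1:] = np.minimum(step[1:], prev[:-1] + P1)    # label d-1, penalty P1
        step[:-1] = np.minimum(step[:-1], prev[1:] + P1)   # label d+1, penalty P1
        L[p] = C[p] + np.minimum(step, min_prev + P2) - min_prev
    return L
```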
  • the automatic matching of corner points can effectively distinguish the difference between image pairs based on the similarities and differences of the corner points, which is an effective means to achieve precise matching.
  • the automatic corner matching can be divided into the following steps: (1) for each point in the corner set of one image, find the matching corner points in the corresponding search area of the other image; similarly, for each point in the corner set of the other image, search in the same way for its corresponding matching points in the first image, and call the intersection of these two matching-point sets the initial matching point set K 1 ; (2) for each corner point in the initial matching point set K 1 , find the matching point in the corresponding search area by calculating the similarity between the point and each candidate matching point in that area and selecting the candidate with the greatest similarity as its matching point.
  • the similarity calculation adopts the gradient-magnitude similarity method: if the gradient magnitude of a pixel is g, and the gradient of the pixel that matches it approximately obeys a normal distribution, then the similarity l g of the two pixels is given by the formula in the original publication, where d(x) is the density function and k is the density coefficient.
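  • A minimal sketch of this similarity, assuming d is the standard normal density (the publication leaves d and k unspecified beyond their names):

```python
import math

def gradient_similarity(g1, g2, k=1.0):
    """Gradient-magnitude similarity l_g of two pixels, here assumed to be the
    standard normal density of the gradient difference scaled by k."""
    d = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    return d((g1 - g2) / k)
```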
  • the dense matching includes: (1) according to the epipolar geometric constraint relationship, obtaining the matching point set K 2 , from which the epipolar-line correspondence between the images is obtained, giving the epipolar correspondence set K 3 .
  • the so-called epipolar geometric constraint relationship means that if l and l' are two corresponding epipolar lines in the left and right images, then the point in the right image corresponding to a point p on the epipolar line l of the left image must lie on the epipolar line l'.
  • (2) the epipolar lines in K 3 are segmented according to gray level: each epipolar line is divided into several gray segments, and the gray values of the pixels on each segment are similar.
  • the formula for gray-level segmentation is given in the original publication; its physical meaning is to group consecutive points whose gray values lie within a certain range into one segment, where:
  • I(x t , y t ) is the gray value of the pixel (x t , y t );
  • w is the number of pixels on a gray segment, that is, the length of the gray segment;
  • T is a threshold: the smaller its value, the fewer pixels are assigned to a given gray segment and the more gray segments there are.
  • the matching effect is the best when T is set to 3.
  • the resulting set of gray segments is K 4 . (3) A dynamic programming algorithm (an optimization method for finding the best matching path) is used to establish the correspondence between gray segments, and linear interpolation is used to establish the correspondence between the pixels of corresponding gray segments, so that dense matching of all pixels between the images is realized and the third image matching set E 3 is obtained.
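  • The two sketches below illustrate steps (2) and (3) under stated assumptions: the segmentation rule follows the text's description (consecutive pixels within T of each other form one segment, with T = 3 as stated), while the dynamic-programming correspondence uses an assumed per-pair dissimilarity `cost`, since the publication does not give one:

```python
import numpy as np

def gray_segments(line_gray, T=3):
    """Split one epipolar line into gray segments: consecutive pixels whose
    gray values differ by at most T stay in one segment. Returns
    (start, length) pairs; length corresponds to w in the text."""
    segs, start = [], 0
    for t in range(1, len(line_gray)):
        if abs(int(line_gray[t]) - int(line_gray[t - 1])) > T:
            segs.append((start, t - start))
            start = t
    segs.append((start, len(line_gray) - start))
    return segs

def align_segments(cost):
    """Dynamic-programming correspondence between the gray segments of two
    corresponding epipolar lines. cost[i, j] is an assumed dissimilarity of
    left segment i and right segment j; ordering along the line is preserved
    by allowing only the moves (i-1,j-1), (i-1,j), (i,j-1)."""
    n, m = cost.shape
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = cost[i - 1, j - 1] + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    return D[n, m]   # total cost of the best matching path
```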
  • finally, the preset 3Dmax software can be used to reconstruct the scene and restore the three-dimensional geometric information of the scene space, obtaining the reconstructed image.
  • the application also provides an image matching device.
  • referring to FIG. 2, which is a schematic diagram of the internal structure of an image matching device provided by an embodiment of this application.
  • the image matching device 1 may be a PC (Personal Computer, personal computer), or a terminal device such as a smart phone, a tablet computer, or a portable computer.
  • the image matching device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the image matching device 1 in some embodiments, such as a hard disk of the image matching device 1.
  • the memory 11 may also be an external storage device of the image matching device 1, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the image matching device 1.
  • the memory 11 may also include both an internal storage unit of the image matching device 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the image matching device 1, such as the code of the image matching program 01, etc., but also to temporarily store data that has been output or will be output.
  • in some embodiments, the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, and is used to run program code stored in the memory 11 or process data, for example to execute the image matching program 01.
  • the communication bus 13 is used to realize the connection and communication between these components.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the device 1 and other electronic devices.
  • the device 1 may also include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, etc.
  • the display can also be called a display screen or a display unit as appropriate, and is used to display the information processed in the image matching device 1 and to display a visualized user interface.
  • FIG. 2 only shows the image matching device 1 with components 11-14 and the image matching program 01.
  • the structure shown in FIG. 2 does not constitute a limitation on the image matching device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
  • the image matching program 01 is stored in the memory 11; when the processor 12 executes the image matching program 01 stored in the memory 11, the following steps are implemented:
  • the first step is to generate an image imaging map according to the scene image taken by the aerial camera, and use the scale-invariant feature transformation method to perform the initial image matching on the image imaging map to generate the initial image matching set.
  • scene images captured by aerial equipment such as drones, helicopters, and other flight-control systems are numerous and cover a wide viewing angle; buildings in particular are numerous and densely packed. Therefore, this application first restores the overlapping image sets to their respective positions and reconstructs the imaging map of the objects.
  • in a preferred embodiment of this application, based on the low-precision position and attitude information recorded when the aerial instrument shoots the scene images, together with the approximate height of the survey area, the model formula for object imaging under aerial photography is used to restore the overlapping scene images to their respective positions and generate the image imaging map.
  • the model formula for object imaging is: sm = KR[I −C]M, where:
  • s is the scale factor;
  • m is the image-point coordinate;
  • M is the object-point coordinate (the object point and the image point are, respectively, the object-space and image-space positions in optical imaging);
  • K is the intrinsic parameter matrix of the aerial photography tool, composed of the focal length and the principal-point coordinates;
  • R is a rotation matrix, whose approximate value can be converted from the yaw, pitch, and roll recorded by the aerial tool's system;
  • C is the projection-center position vector, which can be approximated directly from the longitude, latitude, and altitude recorded by the GPS of the aerial photography tool;
  • I is the third-order identity matrix.
  • using the above model formula for object imaging, the imaging maps of the n images can be obtained.
  • the n image imaging maps generated above are collectively called the image set, which is converted into a corresponding undirected-graph edge set E: G n = (V n , E n ), where V n is called the vertex set and E n is called the edge set. (A graph is a widely used data structure; the nodes of a graph are called vertices, and the relationship between two vertices can be represented by a pair, called an edge. If the pairs representing edges are ordered, the graph is called a directed graph; if they are unordered, it is called an undirected graph.)
  • in the undirected-graph edge set E, E n contains nE edges; each edge represents one image pair, so E n represents nE image pairs, and the subsequent image matching process only needs to be performed between these nE image pairs. If the relationship between images is not considered and an exhaustive traversal strategy is used for image matching, the total number of matches is n*(n-1)/2, which is usually much larger than nE.
  • the method of constructing the image-relationship undirected graph limits the scope of image matching, avoids blind image matching, reduces the total computational complexity of image matching from O(n 2 ) to O(n), and improves matching efficiency; at the same time, it effectively eliminates the interference of unrelated image pairs, fundamentally avoids mismatches caused by non-overlapping images, and improves the accuracy of matching and the robustness of reconstruction.
  • in the undirected-graph edge set E, the scale-invariant feature transform (SIFT) algorithm is used for image matching.
  • in image matching, if the two images I i and I j have few matching points, fewer than the threshold N 1 , the overlap is small or the correlation is weak, and (I i , I j ) is removed from the set E. If the number of matching points between I i and I j is greater than the threshold N 1 , the image pair is retained; this produces n 1 E image pairs in total and generates the first image matching set E 1 .
  • the second step is to generate an epipolar image based on the initial image matching set and calculate the degree of overlap between the epipolar images to complete the second image matching and generate a second image matching set.
  • the above-mentioned first step is only to filter out images with no repetition or low repetition.
  • this application continues to use the epipolar image method to perform matching filtering.
  • the epipolar image is a method of changing the search range from a two-dimensional plane image to a one-dimensional straight line during the matching process.
  • specifically, in dense stereo aerial photography, the plane formed by the shooting baseline and any ground point is called the epipolar plane, and the intersection line of the epipolar plane with the image plane is called the epipolar line.
  • the image points with the same name must be on the epipolar line of the same name, and the image points on the epipolar line of the same name have a one-to-one correspondence.
  • an epipolar pair with the same name can be determined on a stereo image pair, then using the above-mentioned properties of the epipolar pair with the same name, the search and matching of the two-dimensional image can be transformed into the search and matching along the epipolar line.
  • the epipolar image eliminates the upper and lower parallax between the stereo images, narrows the search range, reduces the amount of matching calculations, and improves the matching accuracy, so it is of great significance for dense stereo image matching.
  • a preferred embodiment of the present application discloses a method for making and matching epipolar images for generating epipolar images and calculating the degree of overlap between the epipolar images.
  • the method includes: (a) using the SIFT algorithm to extract point features from the image pairs in E 1 and, after obtaining uniformly distributed high-precision points with the same name, estimating the fundamental matrix with a RANSAC strategy to obtain the fundamental matrices of the n 1 E image pairs; (b) using the fundamental matrix to determine the epipolar line with the same name corresponding to each group of points with the same name; (c) according to the principle that epipolar lines must intersect at the epipole (core point), using the least-squares method to determine the epipole coordinates of each image pair, generating a fast mapping of the epipolar lines between images from the epipole coordinates, and resampling along the epipolar-line direction with bilinear interpolation to complete epipolar-image production and matching, regenerating n 2 E image pairs in total and producing the second image matching set.
  • the epipolar-image production method based on the fundamental matrix avoids the iterative calculation and initial-value assignment required when solving the relative relationship, and it retains good accuracy when the aerial photography uses a large viewing angle. The specific steps of step (c) are as follows: (1) determine the epipole coordinates:
  • first, a rotation matrix and a projection matrix are constructed, with the rotation matrix decomposed into component rotations about the x, y, and z axes; the explicit matrix expressions are given as formulas in the original publication.
  • the least-squares method is then used to determine the epipole (core point) coordinates (x p , y p ) of the image from the calculation result of the projection matrix.
  • (2) fast epipolar mapping: the epipolar image is rectified directly on the central-projection image.
  • the specific steps of the mapping are: derive the central projection according to the collinearity condition equation; when sampling the epipolar lines, calculate the angular relationship between two adjacent epipolar lines and determine each epipolar line on the central-projection image; then, using the fundamental matrix generated above, determine the corresponding epipolar lines of each image pair, and from the epipole coordinates of each image determine the epipolar line through a given point, completing the epipolar-line correspondence within the same image pair.
  • the resulting epipolar equation is given as a formula in the original publication, in which (x p , y p ) are the epipole coordinates calculated above and (x base , y base ) are the reference coordinates of the central-projection image; the epipolar equation of the other image of the pair is obtained in the same way.
  • after the epipolar-line mapping is established for each image pair, the epipolar images are generated according to the resampling rule of the bilinear interpolation method and their degree of overlap is calculated. Image pairs whose epipolar-image overlap is less than the threshold N 2 are discarded, leaving n 2 E image pairs in total and yielding the second image matching set E 2 .
  • the epipolar matching above is still not enough for the two-dimensional images to recover the geometric structure of the three-dimensional object surface, so further processing is required, and the following third step is performed.
  • The third step is to establish, based on the second image matching set, dense matching of all pixels between the images and generate a third image matching set.
  • the preferred embodiment of the present application adopts a dense matching algorithm: on the basis of the second image matching set E 2 , the smallest univalue segment assimilating nucleus (SUSAN) corner detection algorithm is used to extract the corner points of each image of a pair, forming a set of matching corner points; combined with epipolar geometric constraints, dynamic programming algorithms, and other methods, dense matching of all pixels between the images is established. The specific steps are as follows:
  • (a) detect the corner points of the images in the second image matching set E 2 .
  • the corner point is the point where the local curvature changes the most on the contour of the image. It contains important information in the image, so it is of great significance for the detail matching of the image in the image set E 2 .
  • the preferred embodiment of the present application adopts the smallest univalue segment assimilating nucleus (SUSAN) method to detect image corners: Gaussian smoothing is performed on the input image; each pixel in the image is traversed, and the Sobel operator (a discrete first-order difference operator used to approximate the gradient of the image brightness function) is first used to determine whether the pixel is an edge point; if it is an edge point, whether it is a corner point is determined by minimizing the path loss L r (p, d) of the loss function under the global energy equation; the determination principle is given by the formula in the original publication, in which:
  • C(p, d) is the local loss of the path;
  • L r,min (p−r) is the minimum loss at the previous step of the path. From these it can be determined whether the point is a corner point, and redundant corner points are removed; further, if two detected corner points are adjacent, the one with the larger L r (p, d) is removed.
  • the automatic matching of corner points can effectively distinguish the difference between image pairs based on the similarities and differences of the corner points, which is an effective means to achieve precise matching.
  • the automatic corner matching can be divided into the following steps: (1) for each point in the corner set of one image, find the matching corner points in the corresponding search area of the other image; similarly, for each point in the corner set of the other image, search in the same way for its corresponding matching points in the first image, and call the intersection of these two matching-point sets the initial matching point set K 1 ; (2) for each corner point in the initial matching point set K 1 , find the matching point in the corresponding search area by calculating the similarity between the point and each candidate matching point in that area and selecting the candidate with the greatest similarity as its matching point.
  • the similarity calculation adopts the gradient-magnitude similarity method: if the gradient magnitude of a pixel is g, and the gradient of the pixel that matches it approximately obeys a normal distribution, then the similarity l g of the two pixels is given by the formula in the original publication, where d(x) is the density function and k is the density coefficient.
  • the dense matching includes: (1) according to the epipolar geometric constraint relationship, obtaining the matching point set K 2 , from which the epipolar-line correspondence between the images is obtained, giving the epipolar correspondence set K 3 .
  • the so-called epipolar geometric constraint relationship means that if l and l' are two corresponding epipolar lines in the left and right images, then the point in the right image corresponding to a point p on the epipolar line l of the left image must lie on the epipolar line l'.
  • (2) the epipolar lines in K 3 are segmented according to gray level: each epipolar line is divided into several gray segments, and the gray values of the pixels on each segment are similar.
  • the formula for gray-level segmentation is given in the original publication; its physical meaning is to group consecutive points whose gray values lie within a certain range into one segment, where:
  • I(x t , y t ) is the gray value of the pixel (x t , y t );
  • w is the number of pixels on a gray segment, that is, the length of the gray segment;
  • T is a threshold: the smaller its value, the fewer pixels are assigned to a given gray segment and the more gray segments there are.
  • the matching effect is the best when T is set to 3.
  • the resulting set of gray segments is K 4 . (3) A dynamic programming algorithm (an optimization method for finding the best matching path) is used to establish the correspondence between gray segments, and linear interpolation is used to establish the correspondence between the pixels of corresponding gray segments, so that dense matching of all pixels between the images is realized and the third image matching set E 3 is obtained.
  • finally, the preset 3Dmax software can be used to reconstruct the scene and restore the three-dimensional geometric information of the scene space, obtaining the reconstructed image.
  • the image matching program may also be divided into one or more modules; the one or more modules are stored in the memory 11 and executed by one or more processors (the processor 12 in this embodiment) to complete this application.
  • the module referred to in the application refers to a series of computer program instruction segments capable of completing specific functions, and is used to describe the execution process of the image matching program in the image matching device.
  • referring to FIG. 3, which is a schematic diagram of the program modules of the image matching program in an embodiment of the image matching device of the present application.
  • for example, the image matching program may be divided into a first matching module 10, a second matching module 20, a third matching module 30, and a reconstruction module 40. Illustratively:
  • the first matching module 10 is used to generate an image imaging map according to the scene image taken by the aerial camera, and use the scale-invariant feature transformation method to perform the first image matching on the image imaging map to generate a first image matching set.
  • the second matching module 20 is configured to generate epipolar images based on the first image matching set, calculate the degree of overlap between the epipolar images, complete the second image matching, and generate a second image matching set.
  • the third matching module 30 is configured to establish, based on the second image matching set, dense matching of all pixels between the images, and generate a third image matching set.
  • the reconstruction module 40 is configured to perform three-dimensional reconstruction according to the third image matching set to obtain a reconstructed scene image.
  • an embodiment of the present application also proposes a computer-readable storage medium having an image matching program stored on the computer-readable storage medium, and the image matching program can be executed by one or more processors to implement the following operations:
  • generating an image imaging map according to the scene images taken by an aerial camera, and performing a first image matching on the image imaging map using the scale-invariant feature transform method to generate a first image matching set;
  • based on the first image matching set, generating epipolar images and calculating the degree of overlap between the epipolar images to complete a second image matching and generate a second image matching set;
  • based on the second image matching set, establishing dense matching of all pixels between the images to generate a third image matching set, and performing three-dimensional reconstruction to obtain a reconstructed scene image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

An image matching method and device (1), and a computer readable storage medium. The method comprises: generating an image imaging picture according to a scene image captured by an aerial photography instrument, performing primary image matching on the image imaging picture by using a scale invariant feature transformation method, and generating a primary image matching set (S1); on the basis of the primary image matching set, generating epipolar line images, calculating the degree of overlap between the epipolar line images, completing secondary image matching, and generating a secondary image matching set (S2); and on the basis of the secondary image matching set, establishing dense matching of all pixel points between images, generating a third-time image matching set, and executing three-dimensional reconstruction to obtain a reconstructed scene image (S3). According to the novel image matching solution applied to a dense three-dimensional scene under aerial photography, the image matching efficiency can be improved.

Description

Image matching method, device and computer readable storage medium
This application claims, under the Paris Convention, the priority of the Chinese patent application filed on April 8, 2019 with application number CN201910274078.5 and titled "Image matching method, device and computer readable storage medium", the entire content of which is incorporated into this application by reference.
Technical field
This application relates to the field of computer technology, and in particular to an image matching method, device, and computer-readable storage medium.
Background
Image matching refers to the process of identifying points with the same name between two or more images through a matching algorithm. It is an important preliminary step in image fusion, target recognition, target change detection, computer vision, and other problems, and is widely applied in fields such as remote sensing, digital photogrammetry, computer vision, cartography, and military applications. At present, the most common and practical methods for image matching are: judging the differences between images by visual inspection; comparing the gray values of all pixels in the target area, since an image is in essence composed of pixels; or, based on the principle of template matching, finding the position in the search image that is identical or most similar to a sub-image of the target image.
The above methods are all feasible in certain fields, but when applied to the matching of dense stereo scenes under aerial photography, the results are unsatisfactory. Images under aerial photography are not only huge in number, but the objects in each picture are also very dense, so visual inspection by the human eye is overwhelmed and impractical. When pixel matching is used to compute differences between the pixels of multiple images, noise, quantization error, slight illumination changes, and tiny translations all produce large pixel differences that degrade the matching result. When the template matching method is applied to dense images, a large number of matching templates must be generated and searched for in the picture, so its timeliness is mediocre and, affected by image noise, the probability of mismatch is also very high. In general, because dense scenes under aerial photography are complex and changeable, being able to perform three-dimensional reconstruction can more effectively help users analyze and research, and the above methods all lack this capability.
Summary of the invention
The present application provides an image matching method, device, and computer-readable storage medium, the main purpose of which is to provide a new image matching method applied to dense stereo scenes under aerial photography and to improve image matching efficiency.
To achieve the above objective, an image matching method provided by this application includes:
generating an image imaging map according to the scene images taken by an aerial camera, and performing a first image matching on the image imaging map using the scale-invariant feature transform method to generate a first image matching set;
based on the first image matching set, generating epipolar images and calculating the degree of overlap between the epipolar images to complete a second image matching and generate a second image matching set;
based on the second image matching set, establishing dense matching of all pixels between the images to generate a third image matching set, and performing three-dimensional reconstruction to obtain a reconstructed scene image.
In addition, to achieve the above objective, the present application also provides an image matching device. The device includes a memory and a processor; the memory stores an image matching program that can run on the processor, and when the image matching program is executed by the processor, the steps of the image matching method described above are implemented.
In addition, to achieve the above objective, the present application also provides a computer-readable storage medium on which an image matching program is stored; the image matching program can be executed by one or more processors to implement the steps of the image matching method described above.
The image matching method, device, and computer-readable storage medium proposed in this application generate an image imaging map based on the scene images shot by an aerial camera and use the scale-invariant feature transform method to perform the first image matching on the imaging map, generating the first image matching set. Based on the first image matching set, epipolar images are generated and the degree of overlap between them is calculated to complete the second image matching and generate the second image matching set. Based on the second image matching set, dense matching of all pixels between images is established to generate the third image matching set, and three-dimensional reconstruction is performed to obtain the reconstructed scene image. This application improves image matching efficiency and can perform three-dimensional reconstruction of images of dense scenes under aerial photography, thereby helping users analyze and research more effectively.
Description of the drawings
FIG. 1 is a schematic flowchart of an image matching method provided by an embodiment of this application;
FIG. 2 is a schematic diagram of the internal structure of an image matching device provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the modules of the image matching program in an image matching device provided by an embodiment of this application.
The realization, functional characteristics, and advantages of the purpose of this application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description
To make the purpose, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application and are not used to limit it. Based on the embodiments in this application, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of this application.
The terms "first", "second", "third", "fourth", etc. (if present) in the specification, claims, and drawings of this application are used to distinguish similar objects and need not describe a specific order or sequence. It should be understood that data so used can be interchanged where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described. In addition, descriptions such as "first" and "second" are for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features; thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
Further, the terms "including" and "having", and any variations of them, are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that comprises a series of steps or units is not necessarily limited to the steps or units clearly listed, but may include other steps or units that are not clearly listed or that are inherent to the process, method, product, or device.
In addition, the technical solutions of the various embodiments can be combined with each other, but only on the basis that they can be realized by a person of ordinary skill in the art; when a combination of technical solutions is contradictory or cannot be realized, it should be considered that the combination does not exist and is not within the protection scope claimed by this application.
This application provides an image matching method. Referring to FIG. 1, which is a schematic flowchart of an image matching method provided by an embodiment of this application, the method can be executed by a device, and the device can be implemented by software and/or hardware.
In this embodiment, the image matching method includes:
S10. Generate an image imaging map according to the scene images taken by the aerial camera, and perform the first image matching on the image imaging map using the scale-invariant feature transform method to generate the first image matching set.
Scene images taken by aerial equipment such as drones, helicopters, and other flight-control systems are numerous and cover a wide viewing angle; buildings in particular are numerous and densely packed. Therefore, this application first restores the overlapping image sets to their respective positions and reconstructs the imaging map of the objects.
In a preferred embodiment of the present application, based on the low-precision position and attitude information recorded when the aerial instrument shoots the scene images, together with the approximate height of the survey area, the model formula for object imaging under aerial photography is used to restore the overlapping scene images to their respective positions and generate the image imaging map.
In a preferred embodiment of the present application, the model formula for object imaging is as follows:
sm = KR[I −C]M,
其中,s为尺度系数;m为像点坐标,M为物点坐标(所述物点、像点分别是光学成像中的物方位置和像方位置);K为航拍工具内的参数矩阵,由焦距、像主点坐标组成;R为旋转矩阵,可根据航拍工具的系统所记录的偏航(yaw)、俯仰(pitch)、侧滚(roll),然后转换得到其近似值;C为投影中心位置向量,可直接由航拍工具的GPS记录的经度(longitude)、纬度(latitude)、高(altitude)近似得到;I为3阶单位矩阵。Among them, s is the scale factor; m is the coordinate of the image point, and M is the coordinate of the object point (the object point and the image point are the object position and the image position in the optical imaging respectively); K is the parameter matrix in the aerial photography tool, It is composed of focal length and principal point coordinates; R is a rotation matrix, which can be converted to approximate values according to the yaw, pitch, and roll recorded by the aerial tool’s system; C is the projection center The position vector can be approximated directly from the longitude, latitude, and altitude recorded by the GPS of the aerial photography tool; I is the third-order unit matrix.
利用上述物体成像的模型公式可以得到n幅影像的成像图。Using the above-mentioned object imaging model formula, the imaging map of n images can be obtained.
上述生成的n幅影像成像图,统称为影像集,将所述影像集转换为对应的无向图边集合E:G n=(V n,E n),其中,V n称为顶点集,E n称为边集(图是一种广泛应用的数据结构,图中的结点称为顶点,两个顶点之间的关系可用一个偶对来表示,称为边。如果图中代表边的偶对是有序的,那么称该图为有向图,如果代表边的偶对是无序的,则称其为无向图)。无向图边集合E中,E n代表边的数量为nE,每条表代表一个影像对,则E n代表nE个影像对,后续的影像匹配处理只需在这nE个影像对之间进行。若不考虑影像间关系,采用穷举遍历策略进行影像匹配,则总匹配数为
Figure PCTCN2019102187-appb-000001
通常n*(n-1)/2会远大于nE。因此,构建影像关系无向图的方法限定了影像匹配的范围,可以避免盲目的影像匹配,使总的影像匹配计算复杂度由O(n 2)减少到O(n),提高了匹配计算效率;同时又能有效排除非关联影像对的干扰,从根本上避免了无重叠影像产生的误匹配,提高匹配的准确率和重建的稳健性。
The generation of n pieces of video imaging FIG, referred to as an image set, image set into the free edge set corresponding to FIG E: G n = (V n , E n), wherein, referred to as a vertex set V n, En is called an edge set (a graph is a widely used data structure. The nodes in the graph are called vertices. The relationship between two vertices can be represented by a pair, called an edge. If the graph represents an edge Even pairs are ordered, then the graph is called a directed graph, if the pairs representing edges are disordered, then it is called an undirected graph). No set of edges E in the drawing, represents the number nE E n-side is, for each table represents one image, on behalf of the E nE n-th image for subsequent image matching process performed between only one image pair nE . If the relationship between images is not considered and the exhaustive traversal strategy is used for image matching, the total number of matches is
Figure PCTCN2019102187-appb-000001
Usually n*(n-1)/2 will be much larger than nE. Therefore, the method of constructing the image relationship undirected graph limits the scope of image matching, can avoid blind image matching, reduce the total image matching calculation complexity from O(n 2 ) to O(n), and improve the matching calculation efficiency ; At the same time, it can effectively eliminate the interference of unrelated image pairs, fundamentally avoid mismatches caused by non-overlapping images, and improve the accuracy of matching and the robustness of reconstruction.
在无向图边集合E中,采用尺度不变特征变换(Scale-invariant feature transform,SIFT)算法进行影像匹配。在影像匹配中,若I i、I j两影像的匹配 点较少,小于阈值N 1,说明重叠较小或关联较弱,将(I i、I j)从集合E中剔除。若I i、I j两影像的匹配点数量大于阈值N 1,则保留该成像图对,生成
Figure PCTCN2019102187-appb-000002
共n 1E个影像对,产生初次影像匹配集E 1
In the undirected graph edge set E, the scale-invariant feature transform (SIFT) algorithm is used for image matching. In image matching, if there are few matching points in the two images I i and I j , which are smaller than the threshold N 1 , it means that the overlap is small or the correlation is weak, and (I i , I j ) is removed from the set E. If the number of matching points in the two images I i and I j is greater than the threshold N 1 , then the imaging image pair is retained to generate
Figure PCTCN2019102187-appb-000002
There are n 1 E image pairs in total, and the first image matching set E 1 is generated.
S20、基于所述初次影像匹配集,生成核线影像并计算所述核线影像之间的重叠度,完成第二次影像匹配,生成第二次影像匹配集。S20: Based on the first image matching set, generate an epipolar image and calculate the degree of overlap between the epipolar images, complete the second image matching, and generate a second image matching set.
上述的S10只是过滤掉没有重复或重复度小的影像,对于具有一定重复的影像来说,本申请继续利用核线影像方法进行匹配过滤。The above S10 only filters out images with no repetition or low repetition. For images with certain repetition, this application continues to use the epipolar image method to perform matching filtering.
The epipolar image is a means of reducing the search range during matching from a two-dimensional imaging plane to a one-dimensional straight line. Specifically, in dense stereo aerial photography, the plane formed by the photographic baseline and any ground point is called an epipolar plane, and the intersection of the epipolar plane with the image plane is called an epipolar line. In a stereo pair, corresponding image points with the same name must lie on corresponding epipolar lines, and the image points on a pair of corresponding epipolar lines are in one-to-one correspondence. Therefore, if the corresponding epipolar lines can be determined on a stereo pair, these properties allow the search and matching of two-dimensional images to be converted into search and matching along the epipolar lines. Epipolar images eliminate the vertical parallax between stereo images, narrow the search range, reduce the matching computation and improve the matching accuracy, so they are of great significance for dense stereo image matching.
A preferred embodiment of the present application discloses an epipolar-image production and matching method for generating epipolar images and calculating the degree of overlap between the epipolar images. The method comprises: (a) extracting point features from the n_1E image pairs of E_1 with the SIFT algorithm and, after obtaining uniformly distributed, high-precision corresponding points with the same name, estimating the fundamental matrix with a RANSAC-based strategy to obtain the fundamental matrices of the n_1E image pairs; (b) using the fundamental matrices to determine the corresponding epipolar line of each group of corresponding points; (c) according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates of each image pair by the least-squares method, generating a fast mapping of the epipolar lines between images from the epipole coordinates, and resampling the epipolar lines along the epipolar direction by bilinear interpolation, thereby completing epipolar-image production and matching and regenerating a reduced set of n_2E image pairs, which produces the second image matching set.
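Steps (a) and (b) map naturally onto OpenCV's RANSAC-based fundamental-matrix estimation; the sketch below uses real OpenCV calls, while the surrounding glue code and parameter values are assumptions.

```python
import cv2

def fundamental_and_epilines(pts_a, pts_b):
    """Estimate F with RANSAC and map the points of image a to their
    epipolar lines in image b, each line as coefficients (a, b, c)
    of the line equation a*x + b*y + c = 0.

    pts_a, pts_b: (n, 2) float arrays of corresponding points,
                  obtained e.g. from SIFT matching.
    """
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0)
    if F is None:
        return None, None, None  # too few or degenerate correspondences
    lines_b = cv2.computeCorrespondEpilines(pts_a.reshape(-1, 1, 2), 1, F)
    return F, mask.ravel().astype(bool), lines_b.reshape(-1, 3)
```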
The fundamental-matrix-based epipolar-image production method avoids problems such as iterative computation and initial-value assignment when solving the relative orientation, and it retains good solution accuracy even when the aerial photography uses large viewing angles. The specific steps of the above step (c) are as follows:
(1) Determine the epipole coordinates:
Based on the fundamental matrix, a rotation matrix and projection matrices are constructed. The rotation matrix is decomposed into its x-, y- and z-axis components, each being
Figure PCTCN2019102187-appb-000006
where
Figure PCTCN2019102187-appb-000007
is expressed as follows:
Figure PCTCN2019102187-appb-000008
and the projection matrix
Figure PCTCN2019102187-appb-000009
is obtained:
Figure PCTCN2019102187-appb-000010
where Figure PCTCN2019102187-appb-000011 denotes the camera parameters of camera left, Figure PCTCN2019102187-appb-000012 denotes the camera parameters of camera right, and t_left and t_right are the components of the camera parameters of camera left and camera right along the x, y and z axes, respectively.
According to the principle that epipolar lines must intersect at the epipole, the epipole coordinates (x_p, y_p) of the image are determined from the computed projection matrices by the least-squares method.
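A common numerical realization of this least-squares step, offered here only as an illustration and not necessarily the disclosure's exact computation, obtains the epipole as the right null vector of the fundamental matrix F, since every epipolar line derived from F passes through it.

```python
import numpy as np

def epipole_from_fundamental(F):
    """Epipole in the first image: the unit vector e with F @ e = 0,
    taken as the singular vector of the smallest singular value,
    i.e. the least-squares solution of the overdetermined system."""
    _, _, vt = np.linalg.svd(F)
    e = vt[-1]
    return e[:2] / e[2]  # (x_p, y_p) in inhomogeneous coordinates
```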
(2) Perform epipolar-line mapping:
The purpose of the mapping is to perform epipolar-image rectification directly on the central-projection image. The specific steps of the mapping are: derive the central projection according to the collinearity condition equations; when sampling epipolar lines, compute the angle between two adjacent epipolar lines, thereby determining every epipolar line on the central-projection image; using the fundamental matrices generated above, the epipolar lines corresponding to each image pair (I_i, I_j) can be determined, and the epipolar line through each point is determined from the epipole coordinates of each image, completing the epipolar-line correspondence between the two images of a pair. The epipolar-line equation of one image of the pair is
Figure PCTCN2019102187-appb-000016
where (x_p, y_p) are the epipole coordinates computed above and (x_base, y_base) are the reference coordinates of the central-projection image; the epipolar-line equation of the other image of the pair is obtained in the same way.
(3) Generate the second image matching set:
With the epipolar-line equations as a basis, after the epipolar-line mapping of each image pair has been established, epipolar images are generated according to the resampling rule of bilinear interpolation and their degree of overlap is calculated. Image pairs whose epipolar-image overlap is less than the threshold N_2 are discarded, leaving n_2E image pairs in total, which form the second image matching set E_2.
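The bilinear resampling that produces each row of an epipolar image can be sketched as follows; representing the epipolar line by a start point and a unit direction, and sampling at one-pixel steps, are assumptions of the sketch.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate a grayscale image at a float position
    (no border handling; sketch only)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    p = img[y0:y0 + 2, x0:x0 + 2].astype(float)
    return ((1 - dx) * (1 - dy) * p[0, 0] + dx * (1 - dy) * p[0, 1]
            + (1 - dx) * dy * p[1, 0] + dx * dy * p[1, 1])

def resample_epipolar_line(img, start, direction, n_samples):
    """Sample one row of the epipolar image along a line from `start`
    (x, y) in unit `direction`, one pixel per step."""
    d = np.asarray(direction, float)
    d /= np.linalg.norm(d)
    s = np.asarray(start, float)
    return np.array([bilinear(img, *(s + t * d)) for t in range(n_samples)])
```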
Although the second image matching set E_2 generated above solves most stereo-matching similarity problems and meets the standard of image matching, dense stereo environments such as battlefield surveillance and disaster search-and-rescue also require recovering the geometric structure of three-dimensional object surfaces from the two-dimensional images; further processing is therefore required, and the following S30 is executed.
S30: Based on the second image matching set, establish dense matching of all pixels between images to realize three-dimensional reconstruction.
A preferred embodiment of the present application adopts a dense-matching algorithm. On the basis of the second image matching set E_2, the corner points of the two images of each pair are extracted with the corner-detection algorithm of the smallest uni-value segment assimilating nucleus, forming a set of corner matching points; dense matching of all pixels between the images of each pair is then established by combining epipolar geometric constraints, dynamic programming algorithms and related methods. The specific steps are as follows:
(i) Detect the corner points of the images in the second image matching set E_2.
A corner is the point of maximum local curvature change on the contour of an image; it carries important information of the image and is therefore of great significance for matching the details of the images in E_2. A preferred embodiment of the present application detects image corners with the smallest uni-value segment assimilating nucleus (SUSAN) method: perform Gaussian smoothing on the input image; traverse every pixel of the image, first using the Sobel operator (a discrete first-order difference operator used to compute an approximation of the first-order gradient of the image brightness function) to judge whether the pixel is an edge point; if it is an edge point, further judge whether the path loss L_r(p, d) of the corner is minimized according to the principle of minimizing the loss function of the global energy equation. The decision rule is as follows:
Figure PCTCN2019102187-appb-000022
where C(p, d) is the local loss of the path,
Figure PCTCN2019102187-appb-000023
is the loss of the previous step of the path, and L_r,min(p-r) is the minimum loss of the previous step of the path. From this it can be decided whether the point is a corner, and redundant corners are removed; furthermore, if two detected corners are adjacent, the one with the larger L_r(p, d) is removed. Through the above steps, the corners of the image pairs in the second image matching set E_2 can be detected.
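The edge pre-filter of this step (Gaussian smoothing followed by a Sobel gradient test) can be sketched as follows; the gradient-magnitude threshold is an illustrative assumption, and the subsequent path-loss minimization and corner pruning are omitted.

```python
import cv2
import numpy as np

def edge_candidates(img, blur_sigma=1.0, grad_thresh=50.0):
    """Gaussian-smooth the image, then flag edge points by the magnitude
    of the Sobel first-order gradient; candidate corners are searched
    only among these edge points."""
    smoothed = cv2.GaussianBlur(img, (0, 0), blur_sigma)
    gx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    return magnitude > grad_thresh  # boolean mask of edge points
```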
(ii) Automatically match the corner points of the two images of each pair to obtain a matching point set.
Automatic corner matching can effectively distinguish the differences between the images of a pair according to the similarities and differences of their corners, and is an effective means of achieving precise matching. It can be divided into the following steps: ① For every point in the corner set of one image of a pair, search the corresponding search region of the other image for a matching corner; similarly, for every point in the corner set of the other image, search the first image by the same method for its corresponding matching point; the intersection of these two sets of matches is called the initial matching point set K_1. ② In the initial matching point set K_1, for the corners whose matches are mutually consistent in both images of the pair, search the corresponding search region for matching points, compute the similarity between the corner and every candidate matching point in the search region, and select the candidate matching point with the greatest similarity as its match. In a preferred embodiment of the present application, the similarity is computed by the gradient-magnitude similarity method: if the gradient magnitude of one pixel is g and the gradient magnitude of the pixel matched to it approximately obeys a normal distribution, the similarity l_g of the two pixels is
Figure PCTCN2019102187-appb-000030
where d(x) is the density function and k is the density coefficient. The matching point set K_2 is obtained through the similarity computation.
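Step ① (mutual matching) and the normal-distribution similarity of step ② can be sketched as follows; writing the density d(x) as a Gaussian with an assumed spread sigma, and expressing the mutual search over a similarity matrix, are interpretations for illustration rather than the disclosure's exact formulas.

```python
import numpy as np

def gradient_similarity(g_a, g_b, k=1.0, sigma=10.0):
    """Similarity of two gradient magnitudes, assuming the matched
    gradient is normally distributed around g_a (illustrative form)."""
    return k * np.exp(-((g_b - g_a) ** 2) / (2.0 * sigma ** 2))

def mutual_matches(sim):
    """Keep pairs (i, j) that are each other's best match in the
    similarity matrix sim[i, j] (the intersection step forming K_1)."""
    best_ab = sim.argmax(axis=1)   # best j for each corner i of image a
    best_ba = sim.argmax(axis=0)   # best i for each corner j of image b
    return [(i, j) for i, j in enumerate(best_ab) if best_ba[j] == i]
```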
(iii) According to the matching point set K_2, establish dense matching of all pixels between the images of each pair.
In a preferred embodiment of the present application, the dense matching comprises: ① From the matching point set K_2, obtain the epipolar-line correspondence of the two images of each pair according to the epipolar geometric constraint, giving the epipolar correspondence set K_3. The epipolar geometric constraint means that if l and l' are two corresponding epipolar lines in the left and right images, then the point in the right image corresponding to a point p on the epipolar line l of the left image must lie on the epipolar line l'. ② From the generated correspondence set K_3, segment the epipolar lines in K_3 by gray level: each epipolar line is divided into several gray-level segments, and the gray values of the pixels in each segment are similar. The gray-level segmentation formula is as follows:
Figure PCTCN2019102187-appb-000033
The physical meaning of the above formula is to group consecutive points whose gray values lie within a certain range into one segment, where I(x_t, y_t) is the gray value of the pixel (x_t, y_t); w is the number of pixels in a gray-level segment, that is, the length of the segment; and T is a threshold: the smaller its value, the fewer pixels are assigned to each gray-level segment and the more segments there are. Experimental study shows that the matching effect is best when T is set to 3. The resulting set of gray-level segments is K_4. ③ Use a dynamic programming algorithm (an optimization method for finding the best matching path) to establish the correspondence between gray-level segments, and use linear interpolation to establish the correspondence between the pixels within corresponding gray-level segments, thereby achieving dense matching of all pixels between the images and obtaining the third image matching set E_3.
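One plausible reading of the gray-level segmentation rule, grouping consecutive pixels while the gray-value range within the current segment stays below T, is sketched below; the exact criterion is the formula above, so this is an approximation.

```python
def segment_by_gray(values, T=3):
    """Split a 1-D sequence of gray values (the pixels along one
    epipolar line) into segments whose internal gray-value range
    stays below the threshold T."""
    segments, current = [], [values[0]]
    for v in values[1:]:
        if max(current + [v]) - min(current + [v]) < T:
            current.append(v)
        else:
            segments.append(current)
            current = [v]
    segments.append(current)
    return segments  # the lengths w of the gray-level segments

# Example: segment_by_gray([10, 11, 12, 40, 41, 90])
# -> [[10, 11, 12], [40, 41], [90]]
```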
After the dense matching, every pixel in the third image matching set E_3 has a correspondence, so the depth of field of the scene can be computed; the scene is then reconstructed with the preset 3Dmax software to recover the three-dimensional geometric information of the scene space, obtaining the reconstructed image.
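The depth computation made possible by the dense correspondences can be illustrated with standard two-view triangulation; the disclosure itself hands the reconstruction to the preset 3Dmax software, so the sketch below stands in only for the depth-of-field step and is not the disclosed implementation.

```python
import cv2
import numpy as np

def triangulate_depths(P_left, P_right, pts_left, pts_right):
    """Triangulate matched pixels from two 3x4 projection matrices and
    return the Z component of the triangulated points (their depth when
    the world frame coincides with the left camera frame).

    pts_left, pts_right: (n, 2) arrays of densely matched pixels.
    """
    X_h = cv2.triangulatePoints(P_left, P_right,
                                pts_left.T.astype(float),
                                pts_right.T.astype(float))
    X = (X_h[:3] / X_h[3]).T   # homogeneous -> Euclidean coordinates
    return X[:, 2]             # per-point depth values
```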
The present application also provides an image matching device. FIG. 2 is a schematic diagram of the internal structure of an image matching device provided by an embodiment of the present application.
In this embodiment, the image matching device 1 may be a PC (Personal Computer), or a terminal device such as a smartphone, tablet computer or portable computer. The image matching device 1 includes at least a memory 11, a processor 12, a communication bus 13 and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disks, multimedia cards, card-type memory (for example SD or DX memory), magnetic memory, magnetic disks, optical disks and the like. In some embodiments the memory 11 may be an internal storage unit of the image matching device 1, for example the hard disk of the image matching device 1. In other embodiments the memory 11 may also be an external storage device of the image matching device 1, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card provided on the image matching device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the image matching device 1. The memory 11 can be used not only to store the application software installed on the image matching device 1 and various kinds of data, such as the code of the image matching program 01, but also to temporarily store data that has been output or is to be output.
In some embodiments the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor or other data-processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the image matching program 01.
The communication bus 13 is used to realize connection and communication between these components.
The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the device 1 and other electronic devices.
Optionally, the device 1 may further include a user interface, which may include a display and an input unit such as a keyboard; the optional user interface may also include standard wired and wireless interfaces. Optionally, in some embodiments the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display or the like. The display may also appropriately be called a display screen or a display unit, and is used to display the information processed in the image matching device 1 and to display a visualized user interface.
FIG. 2 shows only the image matching device 1 with the components 11-14 and the image matching program 01. Those skilled in the art will understand that the structure shown in FIG. 2 does not constitute a limitation on the image matching device 1, which may include fewer or more components than shown, combine certain components, or arrange the components differently.
In the embodiment of the device 1 shown in FIG. 2, the memory 11 stores an image matching program 01; when the processor 12 executes the image matching program 01 stored in the memory 11, the steps of the image matching method described above are realized, namely the first step (generating the imaging maps and the first image matching set, corresponding to S10), the second step (generating the epipolar images and the second image matching set, corresponding to S20) and the third step (establishing dense matching of all pixels between images and realizing three-dimensional reconstruction, corresponding to S30); these steps are identical to S10 to S30 described above and are not repeated here.
Optionally, in other embodiments the image matching program may also be divided into one or more modules, which are stored in the memory 11 and executed by one or more processors (in this embodiment, the processor 12) to complete the present application. A module referred to in this application is a series of computer program instruction segments capable of completing a specific function, used to describe the execution process of the image matching program in the image matching device.
For example, referring to FIG. 3, which is a schematic diagram of the program modules of the image matching program in an embodiment of the image matching device of the present application, the image matching program may be divided into a first matching module 10, a second matching module 20, a third matching module 30 and a reconstruction module 40. Exemplarily:
The first matching module 10 is used to generate an imaging map according to the scene images captured by the aerial camera, and to perform the first image matching on the imaging map using the scale-invariant feature transform method, generating the first image matching set.
The second matching module 20 is used to generate epipolar images based on the first image matching set, calculate the degree of overlap between the epipolar images, complete the second image matching and generate the second image matching set.
The third matching module 30 is used to establish, based on the second image matching set, dense matching of all pixels between images and generate the third image matching set.
The reconstruction module 40 is used to perform three-dimensional reconstruction according to the image matching sets to obtain the reconstructed scene image.
The functions or operation steps realized when the program modules such as the first matching module 10, the second matching module 20, the third matching module 30 and the reconstruction module 40 are executed are substantially the same as those of the above embodiment and are not repeated here.
In addition, an embodiment of the present application further provides a computer-readable storage medium on which an image matching program is stored; the image matching program can be executed by one or more processors to implement the following operations:
generating an imaging map according to the scene images captured by the aerial camera, and performing the first image matching on the imaging map using the scale-invariant feature transform method to generate the first image matching set;
based on the first image matching set, generating epipolar images and calculating the degree of overlap between the epipolar images, completing the second image matching and generating the second image matching set;
based on the second image matching set, establishing dense matching of all pixels between images, generating the third image matching set, and performing three-dimensional reconstruction to obtain the reconstructed scene image.
The specific implementations of the computer-readable storage medium of the present application are substantially the same as the embodiments of the image matching device and method described above and are not repeated here.
It should be noted that the serial numbers of the above embodiments of the present application are for description only and do not represent the merits of the embodiments. The terms "include", "comprise" or any other variant herein are intended to cover non-exclusive inclusion, so that a process, device, article or method including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, device, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, device, article or method that includes that element.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as a ROM/RAM, magnetic disk or optical disk), including several instructions for causing a terminal device (which may be a mobile phone, computer, server, network device or the like) to execute the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and do not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the description and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included in the patent protection scope of the present application.

Claims (20)

  1. An image matching method, characterized in that the method comprises:
    generating an imaging map according to the scene images captured by an aerial camera, and performing a first image matching on the imaging map using the scale-invariant feature transform method to generate a first image matching set;
    based on the first image matching set, generating epipolar images and calculating the degree of overlap between the epipolar images to complete a second image matching and generate a second image matching set;
    based on the second image matching set, establishing dense matching of all pixels between images, generating a third image matching set, and performing three-dimensional reconstruction to obtain a reconstructed scene image.
  2. The image matching method according to claim 1, characterized in that generating an imaging map according to the scene images captured by the aerial camera comprises:
    according to the parameters recorded when the aerial instrument captures the scene images, including the low-precision position, the attitude information and the approximate height of the survey area, restoring the overlapping scene images captured by the aerial instrument to their respective positions using the model formula of object imaging under aerial photography, and generating n imaging maps, wherein the model formula of object imaging is as follows:
    sm = KR[I-C]M,
    where s is a scale factor, m is the image-point coordinate, M is the object-point coordinate, K is the parameter matrix of the aerial tool, R is the rotation matrix, C is the projection-center position vector, and I is the 3rd-order identity matrix.
  3. The image matching method according to claim 2, characterized in that performing the first image matching on the imaging map using the scale-invariant feature transform method to generate the first image matching set comprises:
    converting the image set composed of the n imaging maps into a corresponding undirected-graph edge set E;
    in the undirected-graph edge set E, performing image matching with the scale-invariant feature transform algorithm, and in the image matching, for an image pair (I_i, I_j) ∈ E, removing (I_i, I_j) from the undirected-graph edge set if the number of matching points of the two images I_i and I_j is smaller than the threshold N_1, and retaining the image pair if the number of matching points of the two images I_i and I_j is greater than the threshold N_1, generating n_1E image pairs and producing the first image matching set.
  4. The image matching method according to claim 3, characterized in that, based on the first image matching set, generating epipolar images and calculating the degree of overlap between the epipolar images to complete the second image matching and generate the second image matching set comprises:
    (a) extracting point features from the image pairs with the scale-invariant feature transform algorithm and, after obtaining uniformly distributed, high-precision corresponding points with the same name, estimating the fundamental matrix with a RANSAC-based strategy to obtain the fundamental matrices;
    (b) using the fundamental matrices to determine the corresponding epipolar line of each group of corresponding points;
    (c) according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates of the image pairs by the least-squares method, generating a fast mapping of the epipolar lines between images from the epipole coordinates, resampling the epipolar lines along the epipolar direction by bilinear interpolation, completing epipolar-image production and matching, and regenerating image pairs to produce the second image matching set.
  5. The image matching method of claim 4, wherein (c) comprises: constructing a rotation matrix and a projection matrix based on the fundamental matrix, and decomposing the rotation matrix along the x, y, and z axes as [formula appb-100005], where [formula appb-100006] is expressed as [formula appb-100007], obtaining the projection matrices [formula appb-100008]:
    [formula appb-100009]
    where [formula appb-100010] are the camera parameters of the left camera, [formula appb-100011] are the camera parameters of the right camera, and t_left and t_right are the components of the left-camera and right-camera parameters on the x, y, and z axes, respectively;
    according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates (x_p, y_p) of the image by the least squares method from the computed projection matrices;
    deriving the central projection from the collinearity condition equations and, when sampling epipolar lines, calculating the angle between adjacent epipolar lines to determine each epipolar line on the central projection image;
    using the fundamental matrix generated above, determining the epipolar lines corresponding to each image pair [formula appb-100012]; determining the epipolar line of each point from the epipole coordinates of each image in [formula appb-100013], completing the epipolar correspondence within the same image pair, and obtaining the epipolar line equation of [formula appb-100014]:
    [formula appb-100015]
    where (x_p, y_p) are the epipole coordinates calculated above and (x_base, y_base) are the reference coordinates of the central projection image; the epipolar line equation of [formula appb-100016] is obtained in the same way;
    on the basis of the epipolar line equations, after establishing the epipolar mapping of each image pair [formula appb-100017], generating epipolar images according to the bilinear interpolation resampling rule and calculating the degree of overlap, discarding image pairs whose epipolar-image overlap is less than the threshold N_2, and generating the image pairs [formula appb-100018] to obtain the second image matching set.
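    For illustration, the epipole through which all epipolar lines pass can be recovered from the fundamental matrix F in the least-squares sense as the singular vector for its smallest singular value (F e = 0, F^T e' = 0); a minimal numpy sketch:

```python
# Illustrative sketch: epipoles as the null vectors of F, via SVD.
import numpy as np

def epipoles(F):
    """Return the (x_p, y_p) epipoles of both images for a 3x3 F."""
    _, _, Vt = np.linalg.svd(F)
    e_left = Vt[-1]                  # satisfies F e_left ~= 0
    _, _, Vt2 = np.linalg.svd(F.T)
    e_right = Vt2[-1]                # satisfies F^T e_right ~= 0
    e_left = e_left / e_left[2]      # back to inhomogeneous coordinates
    e_right = e_right / e_right[2]
    return e_left[:2], e_right[:2]
```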
  6. The image matching method of claim 5, wherein establishing dense matching of all pixels between images based on the second image matching set comprises:
    on the basis of the second image matching set, extracting the corners of [formula appb-100019] respectively using the smallest univalue segment assimilating nucleus (SUSAN) corner detection algorithm to form a set of matching corner points, and establishing, in combination with epipolar geometry constraints and a dynamic programming algorithm, dense matching of all pixels between the images [formula appb-100020].
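    For illustration, a much-simplified SUSAN-style corner response, assuming a 3 x 3 square mask and arbitrary thresholds (the original SUSAN detector uses a 37-pixel circular mask):

```python
# Illustrative sketch: simplified SUSAN corner response.  A corner has a
# small USAN, i.e. few neighbours with brightness close to the nucleus.
import numpy as np

def susan_response(gray, t=27.0):
    """Return a corner-response map; larger response = stronger corner."""
    gray = np.asarray(gray, dtype=np.float64)
    h, w = gray.shape
    g = 4.0  # assumed geometric threshold: half of the 8-neighbour mask
    resp = np.zeros((h, w))
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nucleus = gray[y, x]
            mask = gray[y - 1:y + 2, x - 1:x + 2]
            # USAN area: neighbours similar in brightness to the nucleus
            usan = np.sum(np.abs(mask - nucleus) < t) - 1
            resp[y, x] = g - usan if usan < g else 0.0
    return resp
```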
  7. The image matching method of claim 5, wherein establishing dense matching of all pixels between images based on the second image matching set to generate the third image matching set comprises:
    performing Gaussian smoothing on the input image, traversing each pixel of the image, and using the Sobel operator to determine whether the pixel is an edge point; if it is an edge point, further determining, according to the principle of minimizing the loss function of the global energy equation, whether the path loss L_r(p, d) is minimized, the decision rule being:
    [formula appb-100021]
    where C(p, d) is the local loss of the path, [formula appb-100022] is the loss of the previous step on the path, and L_r,min(p - r) is the minimum loss of the previous step on the path; from this it can be determined whether the point is a corner, and if two detected corners are adjacent, the one with the larger L_r(p, d) is removed, thereby detecting the corners of the image pairs in the second image matching set;
    for each point in the corner set of image [formula appb-100023], searching the corresponding search area of image [formula appb-100024] for a matching corner, the intersection of the two resulting matching point sets being called the initial matching point set K_1; in the initial matching point set K_1, for corners that are mutually matched across the image pair [formula appb-100025], searching for matching points within the corresponding search area, calculating the similarity between the point and each candidate matching point in the search area, and selecting the candidate with the greatest similarity as its matching point, obtaining the corner matching point set K_2;
    using the corner matching point set K_2, obtaining the epipolar correspondence of the images [formula appb-100026] from the epipolar geometry constraint to generate the epipolar correspondence set K_3; segmenting the epipolar lines in K_3 by gray level, each epipolar line being divided into several gray-level segments and the resulting set of gray-level segments being K_4; using a dynamic programming algorithm to establish the correspondence between gray-level segments, and using linear interpolation to establish the correspondence between the pixels of corresponding gray-level segments, thereby realizing dense matching of all pixels between images and obtaining the third image matching set.
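    For illustration, the path-loss recurrence L_r(p, d) described above matches the semi-global-matching style of cost aggregation; a sketch with assumed penalties P1 and P2:

```python
# Illustrative sketch: one-direction path aggregation of L_r(p, d).
# C[p, d] is the local matching cost along a scanline of width W with
# D disparity candidates; P1 and P2 are assumed smoothness penalties.
import numpy as np

def aggregate_path(C, P1=8.0, P2=32.0):
    """Accumulate L_r along one path direction; returns the (W, D) map."""
    C = np.asarray(C, dtype=np.float64)
    W, D = C.shape
    L = np.empty_like(C)
    L[0] = C[0]
    for p in range(1, W):
        prev = L[p - 1]
        prev_min = prev.min()                        # L_r,min(p - r)
        up = np.roll(prev, 1);    up[0] = np.inf     # disparity d - 1
        down = np.roll(prev, -1); down[-1] = np.inf  # disparity d + 1
        L[p] = C[p] + np.minimum.reduce(
            [prev, up + P1, down + P1,
             np.full(D, prev_min + P2)]) - prev_min  # keep values bounded
    return L
```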
  8. The image matching method according to any one of claims 1 to 7, wherein performing three-dimensional reconstruction to obtain the reconstructed scene image comprises:
    after dense matching, calculating the depth of the scene from the correspondences of all pixels in the third image matching set, reconstructing the scene with the preset 3Dmax software, and restoring the three-dimensional geometric information of the scene space to obtain the reconstructed scene image.
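    For illustration, once dense correspondences exist on epipolar-resampled images, depth follows from disparity; a sketch assuming a rectified pair with known focal length f (in pixels) and baseline B, which are assumptions rather than claim elements:

```python
# Illustrative sketch: depth from disparity for a rectified image pair,
# using the standard relation Z = f * B / d.
import numpy as np

def depth_from_disparity(disparity, f, B):
    """disparity: (H, W) array of pixel disparities; returns a depth map."""
    Z = np.full_like(disparity, np.inf, dtype=np.float64)
    valid = disparity > 0            # zero disparity means no depth
    Z[valid] = f * B / disparity[valid]
    return Z
```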
  9. An image matching device, wherein the device comprises a memory and a processor, the memory storing an image matching program executable on the processor, and the image matching program, when executed by the processor, implements the following steps:
    generating image imaging maps from the scene images captured by an aerial camera, and performing the first image matching on the image imaging maps using the scale-invariant feature transform method to generate the first image matching set;
    based on the first image matching set, generating epipolar images, calculating the degree of overlap between the epipolar images, and completing the second image matching to generate the second image matching set;
    based on the second image matching set, establishing dense matching of all pixels between images, generating the third image matching set, and performing three-dimensional reconstruction to obtain the reconstructed scene image.
  10. The image matching device of claim 9, wherein generating image imaging maps from the scene images captured by the aerial camera comprises:
    recovering the respective positions of the overlapping scene images captured by the aerial instrument, according to the parameters recorded when the aerial instrument captured the scene images, including low-precision position and attitude information and the approximate height of the survey area, using the model formula for object imaging under aerial photography, and generating n image imaging maps, wherein the model formula for object imaging is:
    sm = KR[I - C]M,
    where s is the scale factor, m is the image point coordinates, M is the object point coordinates, K is the intrinsic parameter matrix of the aerial tool, R is the rotation matrix, C is the projection center position vector, and I is the 3 x 3 identity matrix.
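    For illustration, the model formula sm = KR[I - C]M can be evaluated directly; the numeric values of K, R, and C below are placeholders, not recorded flight parameters:

```python
# Illustrative sketch: projecting a world point with s*m = K R [I  -C] M.
import numpy as np

def project(K, R, C, M):
    """Project world point M (3,) to pixel coordinates (2,)."""
    P = K @ R @ np.hstack([np.eye(3), -C.reshape(3, 1)])  # 3x4 projection
    m_h = P @ np.append(M, 1.0)      # homogeneous image point
    return m_h[:2] / m_h[2]          # dividing out the scale factor s

K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])  # placeholder intrinsics
R = np.eye(3)                        # placeholder attitude
C = np.array([0.0, 0.0, 100.0])      # placeholder projection centre (flying height)
print(project(K, R, C, np.array([10.0, 5.0, 0.0])))
```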
  11. The image matching device of claim 10, wherein performing the first image matching on the image imaging maps using the scale-invariant feature transform method to generate the first image matching set comprises:
    converting the image set composed of the n image imaging maps into the corresponding undirected graph edge set E;
    in the undirected graph edge set E, performing image matching with the scale-invariant feature transform algorithm; during matching, for an image pair (I_i, I_j) ∈ E, if the number of matching points between the two images I_i and I_j is less than the threshold N_1, removing (I_i, I_j) from the undirected graph edge set, and if the number of matching points is greater than the threshold N_1, retaining the pair and generating the image pairs [formula appb-100027] to produce the first image matching set.
  12. The image matching device of claim 11, wherein generating epipolar images based on the first image matching set, calculating the degree of overlap between the epipolar images, and completing the second image matching to generate the second image matching set comprises:
    (a) performing point feature extraction on the image pairs [formula appb-100028] using the scale-invariant feature transform algorithm and, after obtaining uniformly distributed high-precision corresponding points, estimating the fundamental matrix with a RANSAC-based strategy to obtain the fundamental matrix;
    (b) using the fundamental matrix to determine the corresponding epipolar line for each group of corresponding points;
    (c) according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates of the image pairs [formula appb-100029] by the least squares method, generating a fast mapping of epipolar lines between images from the epipole coordinates, resampling the epipolar lines by bilinear interpolation along the epipolar direction, and completing epipolar image production and matching to regenerate the image pairs [formula appb-100030], producing the second image matching set.
  13. The image matching device of claim 12, wherein (c) comprises: constructing a rotation matrix and a projection matrix based on the fundamental matrix, and decomposing the rotation matrix along the x, y, and z axes as [formula appb-100031], where [formula appb-100032] is expressed as [formula appb-100033], obtaining the projection matrices [formula appb-100034]:
    [formula appb-100035]
    where [formula appb-100036] are the camera parameters of the left camera, [formula appb-100037] are the camera parameters of the right camera, and t_left and t_right are the components of the left-camera and right-camera parameters on the x, y, and z axes, respectively;
    according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates (x_p, y_p) of the image by the least squares method from the computed projection matrices;
    deriving the central projection from the collinearity condition equations and, when sampling epipolar lines, calculating the angle between adjacent epipolar lines to determine each epipolar line on the central projection image;
    using the fundamental matrix generated above, determining the epipolar lines corresponding to each image pair [formula appb-100038]; determining the epipolar line of each point from the epipole coordinates of each image in [formula appb-100039], completing the epipolar correspondence within the same image pair, and obtaining the epipolar line equation of [formula appb-100040]:
    [formula appb-100041]
    where (x_p, y_p) are the epipole coordinates calculated above and (x_base, y_base) are the reference coordinates of the central projection image; the epipolar line equation of [formula appb-100042] is obtained in the same way;
    on the basis of the epipolar line equations, after establishing the epipolar mapping of each image pair [formula appb-100043], generating epipolar images according to the bilinear interpolation resampling rule and calculating the degree of overlap, discarding image pairs whose epipolar-image overlap is less than the threshold N_2, and generating the image pairs [formula appb-100044] to obtain the second image matching set.
  14. The image matching device of claim 13, wherein establishing dense matching of all pixels between images based on the second image matching set comprises:
    on the basis of the second image matching set, extracting the corners of [formula appb-100045] respectively using the smallest univalue segment assimilating nucleus (SUSAN) corner detection algorithm to form a set of matching corner points, and establishing, in combination with epipolar geometry constraints and a dynamic programming algorithm, dense matching of all pixels between the images [formula appb-100046].
  15. The image matching device according to any one of claims 9 to 14, wherein performing three-dimensional reconstruction to obtain the reconstructed scene image comprises:
    after dense matching, calculating the depth of the scene from the correspondences of all pixels in the third image matching set, reconstructing the scene with the preset 3Dmax software, and restoring the three-dimensional geometric information of the scene space to obtain the reconstructed scene image.
  16. A computer-readable storage medium, wherein an image matching program is stored on the computer-readable storage medium, and when the image matching program is executed by a processor, the following steps are implemented:
    generating image imaging maps from the scene images captured by an aerial camera, and performing the first image matching on the image imaging maps using the scale-invariant feature transform method to generate the first image matching set;
    based on the first image matching set, generating epipolar images, calculating the degree of overlap between the epipolar images, and completing the second image matching to generate the second image matching set;
    based on the second image matching set, establishing dense matching of all pixels between images, generating the third image matching set, and performing three-dimensional reconstruction to obtain the reconstructed scene image.
  17. The computer-readable storage medium of claim 16, wherein generating image imaging maps from the scene images captured by the aerial camera comprises:
    recovering the respective positions of the overlapping scene images captured by the aerial instrument, according to the parameters recorded when the aerial instrument captured the scene images, including low-precision position and attitude information and the approximate height of the survey area, using the model formula for object imaging under aerial photography, and generating n image imaging maps, wherein the model formula for object imaging is:
    sm = KR[I - C]M,
    where s is the scale factor, m is the image point coordinates, M is the object point coordinates, K is the intrinsic parameter matrix of the aerial tool, R is the rotation matrix, C is the projection center position vector, and I is the 3 x 3 identity matrix.
  18. The computer-readable storage medium of claim 17, wherein performing the first image matching on the image imaging maps using the scale-invariant feature transform method to generate the first image matching set comprises:
    converting the image set composed of the n image imaging maps into the corresponding undirected graph edge set E;
    in the undirected graph edge set E, performing image matching with the scale-invariant feature transform algorithm; during matching, for an image pair (I_i, I_j) ∈ E, if the number of matching points between the two images I_i and I_j is less than the threshold N_1, removing (I_i, I_j) from the undirected graph edge set, and if the number of matching points is greater than the threshold N_1, retaining the pair and generating the image pairs [formula appb-100047] to produce the first image matching set.
  19. The computer-readable storage medium of claim 18, wherein generating epipolar images based on the first image matching set, calculating the degree of overlap between the epipolar images, and completing the second image matching to generate the second image matching set comprises:
    (a) performing point feature extraction on the image pairs [formula appb-100048] using the scale-invariant feature transform algorithm and, after obtaining uniformly distributed high-precision corresponding points, estimating the fundamental matrix with a RANSAC-based strategy to obtain the fundamental matrix;
    (b) using the fundamental matrix to determine the corresponding epipolar line for each group of corresponding points;
    (c) according to the principle that epipolar lines must intersect at the epipole, determining the epipole coordinates of the image pairs [formula appb-100049] by the least squares method, generating a fast mapping of epipolar lines between images from the epipole coordinates, resampling the epipolar lines by bilinear interpolation along the epipolar direction, and completing epipolar image production and matching to regenerate the image pairs [formula appb-100050], producing the second image matching set.
  20. The computer-readable storage medium of claim 15, wherein performing three-dimensional reconstruction to obtain the reconstructed scene image comprises:
    after dense matching, calculating the depth of the scene from the correspondences of all pixels in the third image matching set, reconstructing the scene with the preset 3Dmax software, and restoring the three-dimensional geometric information of the scene space to obtain the reconstructed scene image.
PCT/CN2019/102187 2019-04-08 2019-08-23 Image matching method and device, and computer readable storage medium WO2020206903A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910274078.5 2019-04-08
CN201910274078.5A CN110135455B (en) 2019-04-08 2019-04-08 Image matching method, device and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020206903A1 true WO2020206903A1 (en) 2020-10-15

Family

ID=67569487

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/102187 WO2020206903A1 (en) 2019-04-08 2019-08-23 Image matching method and device, and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110135455B (en)
WO (1) WO2020206903A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135455B (en) * 2019-04-08 2024-04-12 平安科技(深圳)有限公司 Image matching method, device and computer readable storage medium
CN111046906B (en) * 2019-10-31 2023-10-31 中国资源卫星应用中心 Reliable encryption matching method and system for planar feature points
CN112866504B (en) * 2021-01-28 2023-06-09 武汉博雅弘拓科技有限公司 Air three encryption method and system
CN114742869B (en) * 2022-06-15 2022-08-16 西安交通大学医学院第一附属医院 Brain neurosurgery registration method based on pattern recognition and electronic equipment


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915965A (en) * 2014-03-14 2015-09-16 华为技术有限公司 Camera tracking method and device
CN107492127B (en) * 2017-09-18 2021-05-11 丁志宇 Light field camera parameter calibration method and device, storage medium and computer equipment

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110090337A1 (en) * 2008-02-01 2011-04-21 Imint Image Intelligence Ab Generation of aerial images
CN104751451A (en) * 2015-03-05 2015-07-01 同济大学 Dense point cloud extracting method of low-altitude high resolution image based on UAV (Unmanned Aerial Vehicle)
CN105847750A (en) * 2016-04-13 2016-08-10 中测新图(北京)遥感技术有限责任公司 Geo-coding based unmanned aerial vehicle video image real time presenting method and apparatus
CN106023086A (en) * 2016-07-06 2016-10-12 中国电子科技集团公司第二十八研究所 Aerial photography image and geographical data splicing method based on ORB feature matching
CN108759788A (en) * 2018-03-19 2018-11-06 深圳飞马机器人科技有限公司 Unmanned plane image positioning and orientation method and unmanned plane
CN110135455A (en) * 2019-04-08 2019-08-16 平安科技(深圳)有限公司 Image matching method, device and computer readable storage medium

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160377A (en) * 2020-03-07 2020-05-15 深圳移动互联研究院有限公司 Image acquisition system with key mechanism and evidence-based method thereof
CN112233228A (en) * 2020-10-28 2021-01-15 五邑大学 Unmanned aerial vehicle-based urban three-dimensional reconstruction method and device and storage medium
CN112233228B (en) * 2020-10-28 2024-02-20 五邑大学 Unmanned aerial vehicle-based urban three-dimensional reconstruction method, device and storage medium
CN112446951A (en) * 2020-11-06 2021-03-05 杭州易现先进科技有限公司 Three-dimensional reconstruction method and device, electronic equipment and computer storage medium
CN112446951B (en) * 2020-11-06 2024-03-26 杭州易现先进科技有限公司 Three-dimensional reconstruction method, three-dimensional reconstruction device, electronic equipment and computer storage medium
CN112381864A (en) * 2020-12-08 2021-02-19 兰州交通大学 Multi-source multi-scale high-resolution remote sensing image automatic registration technology based on antipodal geometry
CN112509109A (en) * 2020-12-10 2021-03-16 上海影创信息科技有限公司 Single-view illumination estimation method based on neural network model
CN113096168A (en) * 2021-03-17 2021-07-09 西安交通大学 Optical remote sensing image registration method and system combining SIFT points and control line pairs
CN113096168B (en) * 2021-03-17 2024-04-02 西安交通大学 Optical remote sensing image registration method and system combining SIFT points and control line pairs
CN113741510A (en) * 2021-07-30 2021-12-03 深圳创动科技有限公司 Routing inspection path planning method and device and storage medium
CN114140575A (en) * 2021-10-21 2022-03-04 北京航空航天大学 Three-dimensional model construction method, device and equipment
CN113963132A (en) * 2021-11-15 2022-01-21 广东电网有限责任公司 Three-dimensional distribution reconstruction method of plasma and related device
CN113867410B (en) * 2021-11-17 2023-11-03 武汉大势智慧科技有限公司 Unmanned aerial vehicle aerial photographing data acquisition mode identification method and system
CN114332349B (en) * 2021-11-17 2023-11-03 浙江视觉智能创新中心有限公司 Binocular structured light edge reconstruction method, system and storage medium
CN113867410A (en) * 2021-11-17 2021-12-31 武汉大势智慧科技有限公司 Unmanned aerial vehicle aerial photography data acquisition mode identification method and system
CN114332349A (en) * 2021-11-17 2022-04-12 浙江智慧视频安防创新中心有限公司 Binocular structured light edge reconstruction method and system and storage medium
CN114078249A (en) * 2021-11-19 2022-02-22 武汉大势智慧科技有限公司 Automatic grouping method and system for front and back overturning images of object
CN115063460A (en) * 2021-12-24 2022-09-16 山东建筑大学 High-precision self-adaptive homonymous pixel interpolation and optimization method
CN114419116B (en) * 2022-01-11 2024-04-09 江苏省测绘研究所 Remote sensing image registration method and system based on network matching
CN114419116A (en) * 2022-01-11 2022-04-29 江苏省测绘研究所 Remote sensing image registration method and system based on network matching
CN114758151A (en) * 2022-03-21 2022-07-15 辽宁工程技术大学 Sequence image dense matching method combining line features and triangulation network constraints
CN114758151B (en) * 2022-03-21 2024-05-24 辽宁工程技术大学 Sequence image dense matching method combining line characteristics and triangular mesh constraint
CN114972536B (en) * 2022-05-26 2023-05-09 中国人民解放军战略支援部队信息工程大学 Positioning and calibrating method for aviation area array swing scanning type camera
CN114972536A (en) * 2022-05-26 2022-08-30 中国人民解放军战略支援部队信息工程大学 Aviation area array sweep type camera positioning and calibrating method
CN115661368B (en) * 2022-12-14 2023-04-11 海纳云物联科技有限公司 Image matching method, device, server and storage medium
CN115661368A (en) * 2022-12-14 2023-01-31 海纳云物联科技有限公司 Image matching method, device, server and storage medium
CN116612067A (en) * 2023-04-06 2023-08-18 北京四维远见信息技术有限公司 Method, apparatus, device and computer readable storage medium for checking aviation quality
CN116612067B (en) * 2023-04-06 2024-02-23 北京四维远见信息技术有限公司 Method, apparatus, device and computer readable storage medium for checking aviation quality
CN116596844B (en) * 2023-04-06 2024-03-29 北京四维远见信息技术有限公司 Aviation quality inspection method, device, equipment and storage medium
CN116596844A (en) * 2023-04-06 2023-08-15 北京四维远见信息技术有限公司 Aviation quality inspection method, device, equipment and storage medium
CN116597184B (en) * 2023-07-11 2023-09-22 中国人民解放军63921部队 Least square image matching method
CN116597184A (en) * 2023-07-11 2023-08-15 中国人民解放军63921部队 Least square image matching method
CN117664087A (en) * 2024-01-31 2024-03-08 中国人民解放军战略支援部队航天工程大学 Method, system and equipment for generating vertical orbit circular scanning type satellite image epipolar line
CN117664087B (en) * 2024-01-31 2024-04-02 中国人民解放军战略支援部队航天工程大学 Method, system and equipment for generating vertical orbit circular scanning type satellite image epipolar line
CN118070434A (en) * 2024-04-22 2024-05-24 天津悦鸣腾宇通用机械设备有限公司 Method and system for constructing process information model of automobile part

Also Published As

Publication number Publication date
CN110135455A (en) 2019-08-16
CN110135455B (en) 2024-04-12

Similar Documents

Publication Publication Date Title
WO2020206903A1 (en) Image matching method and device, and computer readable storage medium
US11928800B2 (en) Image coordinate system transformation method and apparatus, device, and storage medium
TWI777538B (en) Image processing method, electronic device and computer-readable storage media
CN107705333B (en) Space positioning method and device based on binocular camera
US11521311B1 (en) Collaborative disparity decomposition
CA2826534C (en) Backfilling points in a point cloud
WO2015135323A1 (en) Camera tracking method and device
CN110176032B (en) Three-dimensional reconstruction method and device
EP3274964B1 (en) Automatic connection of images using visual features
US9286539B2 (en) Constructing contours from imagery
CN111127524A (en) Method, system and device for tracking trajectory and reconstructing three-dimensional image
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
CN107274483A (en) A kind of object dimensional model building method
US20160163114A1 (en) Absolute rotation estimation including outlier detection via low-rank and sparse matrix decomposition
WO2023024393A1 (en) Depth estimation method and apparatus, computer device, and storage medium
CN115439607A (en) Three-dimensional reconstruction method and device, electronic equipment and storage medium
WO2021244161A1 (en) Model generation method and apparatus based on multi-view panoramic image
WO2022237048A1 (en) Pose acquisition method and apparatus, and electronic device, storage medium and program
KR101593316B1 (en) Method and apparatus for recontructing 3-dimension model using stereo camera
CN112150518B (en) Attention mechanism-based image stereo matching method and binocular device
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
US8509522B2 (en) Camera translation using rotation from device
WO2021142843A1 (en) Image scanning method and device, apparatus, and storage medium
CN113436269B (en) Image dense stereo matching method, device and computer equipment
CN103489165B (en) A kind of decimal towards video-splicing searches table generating method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19924318

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19924318

Country of ref document: EP

Kind code of ref document: A1