CN113674407B - Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image


Info

Publication number: CN113674407B
Application number: CN202110801417.8A
Authority: CN (China)
Prior art keywords: image, right images, dimensional, topographic, constrained
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to its accuracy)
Other languages: Chinese (zh)
Other versions: CN113674407A
Inventors: 杜兴卓, 张晶晶
Current assignee (listing may be inaccurate): China University of Geosciences
Original assignee: China University of Geosciences
Application filed 2021-07-15 by China University of Geosciences; priority to CN202110801417.8A
Publication of application CN113674407A: 2021-11-19
Grant and publication of CN113674407B: 2024-02-13


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 — Geographic models
    • G06T15/00 — 3D [Three Dimensional] image rendering
    • G06T15/005 — General purpose rendering architectures
    • G06T17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T5/00 — Image enhancement or restoration
    • G06T5/70 — Denoising; Smoothing
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10028 — Range image; Depth image; 3D point clouds
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 — Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a three-dimensional terrain reconstruction method, device and storage medium based on binocular vision images, which recover the three-dimensional information of a terrain surface and achieve high-precision, high-efficiency three-dimensional terrain reconstruction. The human visual system is simulated with only two cameras. First, a checkerboard is photographed to obtain the internal and external parameter matrices of the cameras; next, topographic images are captured, then preprocessed and corrected; an improved feature point extraction and matching algorithm then matches feature points across the binocular topographic images; next, an improved disparity map generation algorithm combines the matched feature points to obtain a disparity map of the target terrain; finally, point cloud stitching and point cloud color rendering are performed using the internal and external parameters of the cameras, completing the three-dimensional reconstruction of the terrain. Compared with existing three-dimensional reconstruction methods, the proposed method greatly improves three-dimensional terrain reconstruction precision, can be widely applied in aircraft visual simulation, aerial surveying and mapping, autonomous driving and other fields, and has strong practicability.

Description

Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image
Technical Field
The invention relates to the technical field of computer vision and image processing, in particular to a three-dimensional terrain reconstruction method, device and storage medium based on binocular vision images.
Background
Computer vision collects digitized images through camera equipment, feeds them into a computer, and uses the computer in place of the human brain to perceive, understand and recognize image content. Computer vision provides technologies such as virtual vision simulation, visual identification and three-dimensional reconstruction for intelligent production, aerospace, medical modeling and other areas, and can effectively improve production efficiency and product quality while reducing operating costs and resource and energy consumption.
Traditional three-dimensional terrain reconstruction methods include infrared, laser and sonar reconstruction. These methods suffer, to varying degrees, from difficult operation, complicated processes, narrow application scenarios and expensive equipment, and consume a large amount of manpower and material resources in practical application.
Disclosure of Invention
The invention provides a three-dimensional terrain reconstruction method, device and storage medium based on binocular vision images to solve the problems of the traditional methods: difficult operation, complex processes, narrow application scenarios, expensive equipment and heavy consumption of manpower and material resources. Only two cameras are used to simulate the human visual system under natural light. First, topographic images are photographed and corrected; then an improved sparse feature point extraction and matching algorithm matches feature points across the binocular topographic images; next, an improved disparity map generation algorithm obtains a disparity map of the target terrain; finally, point cloud stitching and point cloud color rendering are performed using the internal and external parameters of the cameras, completing the three-dimensional reconstruction of the terrain.
In order to achieve the above object, the present invention provides a three-dimensional terrain reconstruction method based on binocular vision images, comprising the steps of:
S1, calibrating the binocular camera to obtain the internal and external parameter matrices of the binocular camera;
S2, photographing the target terrain to be reconstructed with the binocular camera to obtain left and right target topographic images;
S3, sequentially performing correction, preprocessing and epipolar constraint on the left and right target topographic images to obtain epipolar constrained left and right images;
S4, extracting and matching feature points of the epipolar constrained left and right images with the SURF algorithm to obtain matched feature points;
S5, calculating the disparity of the epipolar constrained left and right images with the SGBM algorithm according to the matched feature points, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method and a morphological processing method, and calculating the disparity map of the epipolar constrained left and right images with the terrain edge features as constraint conditions;
S6, calculating the three-dimensional spatial coordinate information of each feature point of the three-dimensional topographic image from the disparity map of the epipolar constrained left and right images combined with the internal and external parameter matrices of the binocular camera, and generating a three-dimensional feature point cloud;
and S7, arranging the three-dimensional feature point cloud in three-dimensional space according to the three-dimensional spatial coordinate information, extracting the color information of the feature points from the left and right target topographic images, and color-rendering the three-dimensional feature point cloud according to that information to obtain the reconstructed three-dimensional topographic image.
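As a purely illustrative aid (not part of the claimed method), the S1-S7 flow can be summarized in the following Python-style sketch; every helper name here is a hypothetical placeholder for a step detailed in the embodiment below, not a function of any real library.

```python
# Hypothetical top-level driver for the S1-S7 pipeline (all names are placeholders).
def reconstruct_terrain(checkerboard_imgs, left_img, right_img):
    K1, d1, K2, d2, R, T = calibrate_stereo(checkerboard_imgs)       # S1
    rect_l, rect_r, Q = rectify_pair(left_img, right_img,
                                     K1, d1, K2, d2, R, T)           # S3: correction
    pre_l, pre_r = preprocess(rect_l), preprocess(rect_r)            # S3: preprocessing
    matches = match_surf_ransac(pre_l, pre_r)                        # S4
    disparity = sgbm_with_edge_constraint(pre_l, pre_r, matches)     # S5
    cloud = reproject_to_3d(disparity, Q)                            # S6
    return render_colored_cloud(cloud, left_img)                     # S7
```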
Further, the step S3 specifically includes:
S31, correcting the left and right target topographic images according to the internal and external parameter matrices of the binocular camera to obtain corrected left and right images;
S32, applying gray-scale transformation, noise reduction and signal-to-noise-ratio enhancement to the corrected left and right images with image preprocessing methods to obtain preprocessed left and right images;
and S33, drawing several epipolar lines on the preprocessed images according to the epipolar geometry principle to apply the epipolar constraint, obtaining epipolar constrained left and right images.
Further, the step S32 specifically includes:
S321, applying a gray-scale transformation to the corrected left and right images to obtain gray-transformed left and right images; the transformation is
s = c·log(1 + r)
where c is a constant, r is the original image gray value, and s is the transformed gray value;
S322, applying histogram equalization to the gray-transformed left and right images to enhance the signal-to-noise ratio, obtaining SNR-enhanced left and right images;
S323, applying Gaussian filtering to the SNR-enhanced left and right images to obtain denoised left and right images; the Gaussian function used for filtering is
f(x) = (1/(√(2π)·σ))·e^(−(x−μ)²/(2σ²))
where f denotes the Gaussian filtering operation, x denotes any continuous random variable, μ is a constant, σ (σ > 0) is the standard deviation of the normal distribution, and x is said to obey a Gaussian distribution with parameters μ and σ.
Further, the step S4 specifically includes:
S41, using the SURF algorithm to detect corner points in the epipolar constrained left and right images whose gray value change exceeds a set threshold, and marking them as feature points;
S42, according to the feature points of the left image and the epipolar constraint condition, finding the feature points in the right image that correspond to those of the left image, obtaining matched feature points;
S43, screening out wrong matches with the RANSAC algorithm, finishing feature point extraction and matching.
Further, step S41 specifically includes:
s411, constructing a Hessian matrix to locate characteristic points in the left and right images of the epipolar constraint;
the determinant of the Hessian matrix is:
det H = Dxx·Dyy − (0.9·Dxy)²
where Dxx and Dyy are the second-order partial derivatives of the image in the horizontal X and vertical Y directions respectively, and Dxy is the mixed second-order partial derivative;
s412, selecting the characteristic direction of the characteristic points to obtain the main direction of the characteristic points;
s413, constructing a SURF feature point description operator according to the feature points and the main directions of the feature points.
Further, the step S5 specifically includes:
S51, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method; the edge value is computed as
θ = √(A1² + A2²)
where A1 and A2 are the edge gradient values of the epipolar constrained left and right images respectively, and θ represents the edge gray value of the actual target terrain;
S52, performing a dilation operation on the epipolar constrained left and right images, replacing each pixel with the highest gray value among its surrounding points to enhance the image; then performing an erosion operation, replacing each pixel with the lowest gray value among its surrounding points, which deletes part of the image or image features and weakens the image; and finally subtracting the erosion result from the dilation result to complete the morphological processing and strengthen the terrain edge features of the object;
S53, taking the terrain edge features as constraint conditions, calculating the disparity maps of the epipolar constrained left and right images with the SGBM algorithm; the energy function is
E(A) = Σ_m ( B(m, A_m) + Σ_n X1·I(|A_m − A_n| = 1) + Σ_n X2·I(|A_m − A_n| > 1) )
where A is the disparity assignment (A_m being the disparity value of feature point m), E(A) is the error function corresponding to that assignment, n ranges over the feature points in the neighborhood of feature point m in the image, B(m, A_m) is the cost of feature point m when its disparity is A_m and differs from the feature points in its neighborhood, X1 is the penalty factor for a neighboring feature point whose disparity differs from that of m by exactly 1, X2 is the penalty factor for a neighboring feature point whose disparity differs from that of m by more than 1, and I(·) is an indicator function that returns 1 when its condition holds and 0 otherwise.
In addition, in order to achieve the above object, the present invention also provides a three-dimensional terrain reconstruction device based on binocular vision images, comprising the following modules:
the camera calibration module is used for calibrating the binocular camera to obtain an internal and external parameter matrix of the binocular camera;
the image acquisition module is used for shooting target topographic images to be reconstructed by using the binocular camera to obtain left and right target topographic images;
the image processing module is used for sequentially performing correction, preprocessing and epipolar constraint on the left and right target topographic images to obtain epipolar constrained left and right images;
the extraction and matching module is used for extracting and matching feature points of the epipolar constrained left and right images with the SURF algorithm to obtain matched feature points;
the disparity calculating module is used for calculating the disparity of the epipolar constrained left and right images with the SGBM algorithm according to the matched feature points, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method and a morphological processing method, and calculating the disparity map of the epipolar constrained left and right images with the terrain edge features as constraint conditions;
the point cloud generation module is used for calculating the three-dimensional coordinate information of each feature point of the three-dimensional topographic image from the disparity map of the epipolar constrained left and right images combined with the internal and external parameter matrices of the binocular camera, generating a three-dimensional feature point cloud;
and the stitching and rendering module is used for arranging the three-dimensional feature point cloud in three-dimensional space according to the three-dimensional spatial coordinate information, extracting the color information of the feature points from the left and right target topographic images, and color-rendering the three-dimensional feature point cloud according to that information to obtain the reconstructed three-dimensional topographic image.
In addition, in order to achieve the above object, the present invention also provides a storage medium having stored thereon a three-dimensional topography reconstruction program which, when executed by a processor, implements the steps of the three-dimensional topography reconstruction method.
The invention has the beneficial effects that:
The binocular-vision-based three-dimensional terrain reconstruction method simulates the human visual system with only two cameras and obtains binocular topographic images of the target terrain; an improved feature point extraction and matching algorithm (the SURF algorithm combined with the RANSAC algorithm) then matches feature points across the binocular topographic images; next, an improved disparity map generation algorithm (the SGBM algorithm combined with an edge detection method and a morphological processing method) uses the matched feature points to obtain a disparity map of the target terrain; finally, point cloud stitching and point cloud color rendering are performed using the internal and external parameters of the cameras, completing the three-dimensional reconstruction of the terrain. Compared with existing three-dimensional reconstruction methods, the proposed method greatly improves three-dimensional terrain reconstruction precision, can be widely applied in aircraft visual simulation, aerial surveying and mapping, autonomous driving and other fields, and has strong practicability.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a three-dimensional terrain reconstruction method based on binocular vision images of the present invention;
FIG. 2 is a simulated terrain experimental scenario constructed by the present invention;
FIG. 3 is a black and white checkerboard image captured by the camera of the present invention;
FIG. 4 is a graph showing the calibration and correction results of the camera of the present invention;
FIG. 5 is a captured image of a target terrain in accordance with the present invention;
fig. 5 (a) is a left image, and fig. 5 (b) is a right image;
FIG. 6 is an image of a target topography image preprocessed by the present invention;
fig. 6 (a) corresponds to the left image, and fig. 6 (b) corresponds to the right image;
FIG. 7 is a graph showing the result of epipolar constraint of a target topography image according to the present invention;
fig. 7 (a) corresponds to the left image, and fig. 7 (b) corresponds to the right image;
FIG. 8 is a feature point matching result of the present invention;
FIG. 9 is a graph of feature point matching accuracy of the present invention;
fig. 10 is a disparity map generated by the present invention;
FIG. 11 shows the three-dimensional spatial coordinates of the point cloud and the RGB color information of those points as calculated by the present invention;
FIG. 12 is a resulting elevation view of the reconstructed three-dimensional terrain of the present invention;
FIG. 13 is a resulting side view of the reconstructed three-dimensional terrain of the present invention;
fig. 14 is a block diagram of a three-dimensional terrain reconstruction device based on binocular vision images according to the present invention.
Detailed Description
For a clearer understanding of technical features, objects and effects of the present invention, a detailed description of embodiments of the present invention will be made with reference to the accompanying drawings.
Referring to fig. 1, the present embodiment provides a three-dimensional terrain reconstruction method based on binocular vision images, including the steps of:
S1, photographing a standard black-and-white checkerboard with the binocular camera and computing the internal and external parameter matrices of the binocular camera by Zhang Zhengyou's plane calibration method (refer to FIGS. 2-4);
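A minimal sketch of this calibration step, assuming OpenCV, a 9×6 inner-corner board, a 25 mm square size and photographs named left*.png / right*.png — all of these values are assumptions for illustration, not values fixed by the patent:

```python
# Hedged sketch: Zhang-style stereo calibration with OpenCV.
import glob
import cv2
import numpy as np

board = (9, 6)                     # inner corners per row / column (assumed)
square = 25.0                      # checkerboard square size in mm (assumed)
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, left_pts, right_pts = [], [], []
for lf, rf in zip(sorted(glob.glob("left*.png")), sorted(glob.glob("right*.png"))):
    gl = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, board)
    okr, cr = cv2.findChessboardCorners(gr, board)
    if okl and okr:                # keep only views where both cameras see the board
        obj_pts.append(objp)
        left_pts.append(cl)
        right_pts.append(cr)

size = gl.shape[::-1]              # (width, height)
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
# With each camera's internal parameters fixed, solve for the rotation R and
# translation T between the two cameras (the external parameters).
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
```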
s2, shooting target topographic images to be reconstructed by using a binocular camera to obtain left and right target topographic images (refer to FIG. 5);
And S3, sequentially performing correction, preprocessing and epipolar constraint on the left and right target topographic images to obtain epipolar constrained left and right images.
In this embodiment, step S3 specifically includes:
s31, correcting the left target topographic image and the right target topographic image according to the internal and external parameter matrixes of the binocular camera to obtain corrected left and right images (refer to FIG. 4);
S32, applying gray-scale transformation, noise reduction and signal-to-noise-ratio enhancement to the corrected left and right images with image preprocessing methods to obtain preprocessed left and right images (refer to FIG. 6);
the step S32 specifically includes:
S321, applying a gray-scale transformation to the corrected left and right images to obtain gray-transformed left and right images; the transformation is
s = c·log(1 + r)
where c is a constant, r is the original image gray value, and s is the transformed gray value;
S322, applying histogram equalization to the gray-transformed left and right images to enhance the signal-to-noise ratio, obtaining SNR-enhanced left and right images;
S323, applying Gaussian filtering to the SNR-enhanced left and right images to obtain denoised left and right images, i.e. the preprocessed left and right images; the Gaussian function used for filtering is
f(x) = (1/(√(2π)·σ))·e^(−(x−μ)²/(2σ²))
where f denotes the Gaussian filtering operation, x denotes any continuous random variable, μ is a constant, σ (σ > 0) is the standard deviation of the normal distribution, and x is said to obey a Gaussian distribution with parameters μ and σ.
And S33, drawing a plurality of epipolar lines on the preprocessed image according to the epipolar geometry principle to carry out epipolar constraint, so as to obtain left and right epipolar constrained images (refer to FIG. 7).
And S4, extracting and matching characteristic points of the left and right images constrained by the epipolar line by using a SURF algorithm to obtain matching characteristic points (refer to FIG. 8 and FIG. 9).
In this embodiment, step S4 specifically includes:
S41, using the SURF algorithm to detect corner points in the epipolar constrained left and right images whose gray value change exceeds a set threshold and marking them as feature points; in this embodiment the threshold is set to 0.7;
s41 specifically comprises:
s411, constructing a Hessian matrix to locate characteristic points in the left and right images of the epipolar constraint;
the determinant of the Hessian matrix is:
det H = Dxx·Dyy − (0.9·Dxy)²
where Dxx and Dyy are the second-order partial derivatives of the image in the horizontal X and vertical Y directions respectively, and Dxy is the mixed second-order partial derivative;
s412, selecting the characteristic direction of the characteristic points to obtain the main direction of the characteristic points;
s413, constructing a SURF feature point description operator according to the feature points and the main directions of the feature points.
S42, according to the feature points of the left image and the epipolar constraint condition, finding the feature points in the right image that correspond to those of the left image, obtaining matched feature points;
S43, screening out wrong matches with the RANSAC algorithm, which rejects mismatched feature points by computing the average distance from the feature points to the center, finishing feature point extraction and matching.
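A minimal sketch of step S4, assuming the opencv-contrib xfeatures2d module for SURF and the preprocessed images pre_l / pre_r from the sketch above; the Hessian threshold, the ratio-test value and the RANSAC reprojection threshold are assumptions, not the patent's tuned values:

```python
# Hedged sketch of S4: SURF keypoints, descriptor matching, RANSAC filtering.
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # requires opencv-contrib
kp_l, des_l = surf.detectAndCompute(pre_l, None)
kp_r, des_r = surf.detectAndCompute(pre_r, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des_l, des_r, k=2)
        if m.distance < 0.7 * n.distance]                  # Lowe-style ratio test

src = np.float32([kp_l[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
# RANSAC keeps only the matches consistent with one geometric model,
# screening out the wrong correspondences (step S43).
_, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
```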
S5, according to the matched feature points, calculating the disparity of the epipolar constrained left and right images with the SGBM algorithm, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method and a morphological processing method, and calculating the disparity map of the epipolar constrained left and right images with the terrain edge features as constraint conditions (refer to FIG. 10).
In this embodiment, step S5 specifically includes:
S51, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method; the edge value is computed as
θ = √(A1² + A2²)
where A1 and A2 are the edge gradient values of the epipolar constrained left and right images respectively, and θ represents the edge gray value of the actual target terrain;
S52, performing a dilation operation on the epipolar constrained left and right images, replacing each pixel with the highest gray value among its surrounding points to enhance the image; then performing an erosion operation, replacing each pixel with the lowest gray value among its surrounding points, which deletes part of the image or image features and weakens the image; and finally subtracting the erosion result from the dilation result to complete the morphological processing and strengthen the terrain edge features of the object;
And S53, taking the terrain edge features as constraint conditions, so that detection and calculation are performed only within the terrain image edges; this eliminates the interference of irrelevant factors and further improves the accuracy of the disparity calculation. After the disparities of all feature points have been calculated, the disparity map of the binocular topographic image is obtained. The disparity maps of the epipolar constrained left and right images are calculated with the SGBM algorithm; the energy function is
E(A) = Σ_m ( B(m, A_m) + Σ_n X1·I(|A_m − A_n| = 1) + Σ_n X2·I(|A_m − A_n| > 1) )
where A is the disparity assignment (A_m being the disparity value of feature point m), E(A) is the error function corresponding to that assignment, n ranges over the feature points in the neighborhood of feature point m in the image, B(m, A_m) is the cost of feature point m when its disparity is A_m and differs from the feature points in its neighborhood, X1 is the penalty factor for a neighboring feature point whose disparity differs from that of m by exactly 1, X2 is the penalty factor for a neighboring feature point whose disparity differs from that of m by more than 1, and I(·) is an indicator function that returns 1 when its condition holds and 0 otherwise.
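A minimal sketch of step S5, assuming OpenCV: Canny stands in for the edge detector, a morphological gradient (dilation minus erosion) for step S52, and SGBM's P1/P2 penalties play the roles of X1/X2. All parameter values are assumptions, and the patent's exact edge-constraint logic may differ:

```python
# Hedged sketch of S5: edge-constrained SGBM disparity computation.
import cv2
import numpy as np

edges = cv2.Canny(pre_l, 50, 150)                 # terrain edge detection (S51)
kernel = np.ones((3, 3), np.uint8)
# Morphological gradient: dilation enhances, erosion weakens; their
# difference strengthens the terrain edge features (S52).
grad = cv2.morphologyEx(edges, cv2.MORPH_GRADIENT, kernel)
edge_mask = cv2.dilate(grad, kernel) > 0          # slightly widened edge region

sgbm = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128, blockSize=5,
    P1=8 * 5 * 5,          # penalty when |A_m - A_n| = 1 (role of X1)
    P2=32 * 5 * 5,         # penalty when |A_m - A_n| > 1 (role of X2)
    uniquenessRatio=10, speckleWindowSize=100, speckleRange=2)
disparity = sgbm.compute(pre_l, pre_r).astype(np.float32) / 16.0
disparity[~edge_mask] = 0   # keep only disparities within the terrain edges (S53)
```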
And S6, according to the disparity maps of the left and right images constrained by the epipolar lines, combining the internal and external parameter matrixes of the binocular camera to calculate three-dimensional coordinate information of each characteristic point of the three-dimensional topographic image, and generating a three-dimensional characteristic point cloud.
And S7, arranging the three-dimensional characteristic point clouds in a three-dimensional space according to the three-dimensional space coordinate information, extracting color information of characteristic points in the left target topographic image and the right target topographic image (refer to FIG. 11), and performing color rendering on the three-dimensional characteristic point clouds according to the color information to obtain a reconstructed three-dimensional topographic image (refer to FIGS. 12 and 13).
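A minimal sketch of steps S6-S7, assuming the Q matrix from stereo rectification, the disparity map from the previous sketch and the original left color image left_color; the ASCII PLY writer is only an illustration of point cloud stitching and color rendering, not the patent's rendering pipeline:

```python
# Hedged sketch of S6-S7: reproject disparity to 3D, attach colors, save PLY.
import cv2
import numpy as np

points = cv2.reprojectImageTo3D(disparity, Q)     # per-pixel (X, Y, Z) coordinates
colors = cv2.cvtColor(left_color, cv2.COLOR_BGR2RGB)
valid = disparity > disparity.min()               # drop unmatched pixels
xyz, rgb = points[valid], colors[valid]

with open("terrain.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n"
            f"element vertex {len(xyz)}\n"
            "property float x\nproperty float y\nproperty float z\n"
            "property uchar red\nproperty uchar green\nproperty uchar blue\n"
            "end_header\n")
    for (x, y, z), (r, g, b) in zip(xyz, rgb):    # colored three-dimensional point cloud
        f.write(f"{x} {y} {z} {r} {g} {b}\n")
```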
As an optional implementation, this embodiment further provides a three-dimensional terrain reconstruction device based on binocular vision images, which is used to implement the three-dimensional terrain reconstruction method.
Referring to fig. 14, the apparatus includes the following modules:
the camera calibration module 1 is used for photographing a standard black-and-white checkerboard with the binocular camera and computing the internal and external parameter matrices of the binocular camera by Zhang Zhengyou's plane calibration method;
the image acquisition module 2 is used for shooting target topographic images to be reconstructed by using a binocular camera to obtain left and right target topographic images;
the image processing module 3 is used for sequentially performing correction, preprocessing and epipolar constraint on the left and right target topographic images to obtain epipolar constrained left and right images;
the extraction and matching module 4 is used for extracting and matching feature points of the epipolar constrained left and right images with the SURF algorithm to obtain matched feature points;
the disparity calculating module 5 is configured to calculate, according to the matched feature points, the disparity of the epipolar constrained left and right images with the SGBM algorithm, extract the terrain edge features of the epipolar constrained left and right images with an edge detection method and a morphological processing method, and calculate the disparity map of the epipolar constrained left and right images with the terrain edge features as constraint conditions;
the point cloud generation module 6 is used for calculating the three-dimensional coordinate information of each feature point of the three-dimensional topographic image from the disparity map of the epipolar constrained left and right images combined with the internal and external parameter matrices of the binocular camera, generating a three-dimensional feature point cloud;
and the stitching and rendering module 7 is used for arranging the three-dimensional feature point cloud in three-dimensional space according to the three-dimensional spatial coordinate information, extracting the color information of the feature points from the left and right target topographic images, and color-rendering the three-dimensional feature point cloud according to that information to obtain the reconstructed three-dimensional topographic image.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are for description only and do not represent the superiority or inferiority of the embodiments. In unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the terms first, second, third, etc. does not denote any order; these terms are to be interpreted merely as labels.
The foregoing description covers only the preferred embodiments of the present invention and is not intended to limit its scope; any equivalent structure or equivalent process derived from the disclosure herein, whether employed directly or indirectly in other related technical fields, likewise falls within the scope of the invention.

Claims (7)

1. The three-dimensional terrain reconstruction method based on the binocular vision image is characterized by comprising the following steps of:
s1, calibrating a binocular camera to obtain an internal and external parameter matrix of the binocular camera;
s2, shooting target topographic images to be reconstructed by using the binocular camera to obtain left and right target topographic images;
S3, sequentially performing correction, preprocessing and epipolar constraint on the left and right target topographic images to obtain epipolar constrained left and right images;
s4, extracting and matching characteristic points of the left and right images constrained by the epipolar line by using a SURF algorithm to obtain matching characteristic points;
S5, according to the matched feature points, calculating the disparity of the epipolar constrained left and right images with the SGBM algorithm, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method and a morphological processing method, and calculating the disparity map of the epipolar constrained left and right images with the terrain edge features as constraint conditions, specifically:
S51, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method; the edge value is computed as
θ = √(A1² + A2²)
where A1 and A2 are the edge gradient values of the epipolar constrained left and right images respectively, and θ represents the edge gray value of the actual target terrain;
S52, performing a dilation operation on the epipolar constrained left and right images, replacing each pixel with the highest gray value among its surrounding points to enhance the image; then performing an erosion operation, replacing each pixel with the lowest gray value among its surrounding points, which deletes part of the image or image features and weakens the image; and finally subtracting the erosion result from the dilation result to complete the morphological processing and strengthen the terrain edge features of the object;
S53, taking the terrain edge features as constraint conditions, calculating the disparity maps of the epipolar constrained left and right images with the SGBM algorithm; the energy function is
E(A) = Σ_m ( B(m, A_m) + Σ_n X1·I(|A_m − A_n| = 1) + Σ_n X2·I(|A_m − A_n| > 1) )
where A is the disparity assignment (A_m being the disparity value of feature point m), E(A) is the error function corresponding to that assignment, n ranges over the feature points in the neighborhood of feature point m in the image, B(m, A_m) is the cost of feature point m when its disparity is A_m and differs from the feature points in its neighborhood, X1 is the penalty factor for a neighboring feature point whose disparity differs from that of m by exactly 1, X2 is the penalty factor for a neighboring feature point whose disparity differs from that of m by more than 1, and I(·) is an indicator function that returns 1 when its condition holds and 0 otherwise;
s6, according to the disparity maps of the left and right images constrained by the epipolar lines, combining the internal and external parameter matrixes of the binocular camera to calculate three-dimensional space coordinate information of each characteristic point of the three-dimensional topographic image, and generating a three-dimensional characteristic point cloud;
and S7, arranging the three-dimensional characteristic point cloud in a three-dimensional space according to the three-dimensional space coordinate information, extracting color information of characteristic points in the left target topographic image and the right target topographic image, and performing color rendering on the three-dimensional characteristic point cloud according to the color information to obtain a reconstructed three-dimensional topographic image.
2. The binocular vision image-based three-dimensional terrain reconstruction method of claim 1, wherein: the step S3 specifically comprises the following steps:
S31, correcting the left and right target topographic images according to the internal and external parameter matrices of the binocular camera to obtain corrected left and right images;
S32, applying gray-scale transformation, noise reduction and signal-to-noise-ratio enhancement to the corrected left and right images with image preprocessing methods to obtain preprocessed left and right images;
and S33, drawing several epipolar lines on the preprocessed images according to the epipolar geometry principle to apply the epipolar constraint, obtaining epipolar constrained left and right images.
3. The binocular vision image-based three-dimensional terrain reconstruction method of claim 2, wherein: the step S32 specifically includes:
S321, applying a gray-scale transformation to the corrected left and right images to obtain gray-transformed left and right images; the transformation is
s = c·log(1 + r)
where c is a constant, r is the original image gray value, and s is the transformed gray value;
S322, applying histogram equalization to the gray-transformed left and right images to enhance the signal-to-noise ratio, obtaining SNR-enhanced left and right images;
S323, applying Gaussian filtering to the SNR-enhanced left and right images to obtain denoised left and right images; the Gaussian function used for filtering is
f(x) = (1/(√(2π)·σ))·e^(−(x−μ)²/(2σ²))
where f denotes the Gaussian filtering operation, x denotes any continuous random variable, μ is a constant, σ (σ > 0) is the standard deviation of the normal distribution, and x is said to obey a Gaussian distribution with parameters μ and σ.
4. The binocular vision image-based three-dimensional terrain reconstruction method of claim 1, wherein: the step S4 specifically comprises the following steps:
S41, using the SURF algorithm to detect corner points in the epipolar constrained left and right images whose gray value change exceeds a set threshold, and marking them as feature points;
S42, according to the feature points of the left image and the epipolar constraint condition, finding the feature points in the right image that correspond to those of the left image, obtaining matched feature points;
S43, screening out wrong matches with the RANSAC algorithm, finishing feature point extraction and matching.
5. The binocular vision image-based three-dimensional terrain reconstruction method of claim 4, wherein: the step S41 specifically includes:
s411, constructing a Hessian matrix to locate characteristic points in the left and right images of the epipolar constraint;
the determinant of the Hessian matrix is:
det H = Dxx·Dyy − (0.9·Dxy)²
where Dxx and Dyy are the second-order partial derivatives of the image in the horizontal X and vertical Y directions respectively, and Dxy is the mixed second-order partial derivative;
s412, selecting the characteristic direction of the characteristic points to obtain the main direction of the characteristic points;
s413, constructing a SURF feature point description operator according to the feature points and the main directions of the feature points.
6. A three-dimensional terrain reconstruction device based on binocular vision images, characterized by comprising the following modules:
the camera calibration module is used for calibrating the binocular camera to obtain an internal and external parameter matrix of the binocular camera;
the image acquisition module is used for shooting target topographic images to be reconstructed by using a binocular camera to obtain left and right target topographic images;
the image processing module is used for sequentially performing correction, preprocessing and epipolar constraint on the left and right target topographic images to obtain epipolar constrained left and right images;
the extraction and matching module is used for extracting and matching feature points of the epipolar constrained left and right images with the SURF algorithm to obtain matched feature points;
the disparity calculating module is used for calculating the disparity of the epipolar constrained left and right images with the SGBM algorithm according to the matched feature points, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method and a morphological processing method, and calculating the disparity map of the epipolar constrained left and right images with the terrain edge features as constraint conditions; the work flow of the disparity calculating module is as follows:
S51, extracting the terrain edge features of the epipolar constrained left and right images with an edge detection method; the edge value is computed as
θ = √(A1² + A2²)
where A1 and A2 are the edge gradient values of the epipolar constrained left and right images respectively, and θ represents the edge gray value of the actual target terrain;
S52, performing a dilation operation on the epipolar constrained left and right images, replacing each pixel with the highest gray value among its surrounding points to enhance the image; then performing an erosion operation, replacing each pixel with the lowest gray value among its surrounding points, which deletes part of the image or image features and weakens the image; and finally subtracting the erosion result from the dilation result to complete the morphological processing and strengthen the terrain edge features of the object;
S53, taking the terrain edge features as constraint conditions, calculating the disparity maps of the epipolar constrained left and right images with the SGBM algorithm; the energy function is
E(A) = Σ_m ( B(m, A_m) + Σ_n X1·I(|A_m − A_n| = 1) + Σ_n X2·I(|A_m − A_n| > 1) )
where A is the disparity assignment (A_m being the disparity value of feature point m), E(A) is the error function corresponding to that assignment, n ranges over the feature points in the neighborhood of feature point m in the image, B(m, A_m) is the cost of feature point m when its disparity is A_m and differs from the feature points in its neighborhood, X1 is the penalty factor for a neighboring feature point whose disparity differs from that of m by exactly 1, X2 is the penalty factor for a neighboring feature point whose disparity differs from that of m by more than 1, and I(·) is an indicator function that returns 1 when its condition holds and 0 otherwise;
the point cloud generation module is used for calculating the three-dimensional spatial coordinate information of each feature point of the three-dimensional topographic image from the disparity map of the epipolar constrained left and right images combined with the internal and external parameter matrices of the binocular camera, generating a three-dimensional feature point cloud;
and the stitching and rendering module is used for arranging the three-dimensional feature point cloud in three-dimensional space according to the three-dimensional spatial coordinate information, extracting the color information of the feature points from the left and right target topographic images, and color-rendering the three-dimensional feature point cloud according to that information to obtain the reconstructed three-dimensional topographic image.
7. A storage medium having stored thereon a three-dimensional terrain reconstruction program which, when executed by a processor, implements the steps of the three-dimensional terrain reconstruction method according to any of claims 1 to 5.
CN202110801417.8A 2021-07-15 2021-07-15 Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image Active CN113674407B (en)

Priority Applications (1)

Application Number: CN202110801417.8A — Priority Date: 2021-07-15 — Filing Date: 2021-07-15 — Title: Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image

Publications (2)

Publication Number: CN113674407A — Publication Date: 2021-11-19
Publication Number: CN113674407B — Publication Date: 2024-02-13

Family ID: 78539238

Country Status (1)

Country: CN — CN113674407B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party

CN115147763B * — 2022-07-01 / 2023-10-20 — 兰州理工大学 — Method and device for rapidly acquiring glacier movement surface flow velocity

Citations (4)

* Cited by examiner, † Cited by third party

CN101976455A * — 2010-10-08 / 2011-02-16 — 东南大学 — Color image three-dimensional reconstruction method based on three-dimensional matching
CN106303501A * — 2016-08-23 / 2017-01-04 — 深圳市捷视飞通科技股份有限公司 — Stereo image reconstruction method and device based on image sparse feature matching
CN107423772A * — 2017-08-08 / 2017-12-01 — 南京理工大学 — A new binocular image feature matching method based on RANSAC
CN109269478A * — 2018-10-24 / 2019-01-25 — 南京大学 — A binocular-vision-based bridge obstacle detection method for container terminals

Family Cites Families (2)

* Cited by examiner, † Cited by third party

DE102008060141B4 * — 2008-12-03 / 2017-12-21 — Forschungszentrum Jülich GmbH — Method for measuring the growth of leaf discs and a device suitable for this purpose
US10122996B2 * — 2016-03-09 / 2018-11-06 — Sony Corporation — Method for 3D multiview reconstruction by feature tracking and model registration

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party

Application of binocular vision technology in human-computer interactive sports; 薛亚东; Automation & Instrumentation (10); full text *
Research on dimension measurement of logistics packaging boxes based on binocular stereo vision; 张志刚, 霍晓丽, 周冰; Packaging Engineering (19); full text *


Similar Documents

Publication Publication Date Title
CN110363858B (en) Three-dimensional face reconstruction method and system
CN111063021B (en) Method and device for establishing three-dimensional reconstruction model of space moving target
CN109242954B (en) Multi-view three-dimensional human body reconstruction method based on template deformation
CN104156536B (en) The visualization quantitatively calibrating and analysis method of a kind of shield machine cutter abrasion
CN101996407B (en) Colour calibration method for multiple cameras
CN107767456A (en) A kind of object dimensional method for reconstructing based on RGB D cameras
CN107560592B (en) Precise distance measurement method for photoelectric tracker linkage target
CN110853151A (en) Three-dimensional point set recovery method based on video
CN112801074B (en) Depth map estimation method based on traffic camera
CN104537707A (en) Image space type stereo vision on-line movement real-time measurement system
CN107492107B (en) Object identification and reconstruction method based on plane and space information fusion
CN113538569A (en) Weak texture object pose estimation method and system
CN110120013A (en) A kind of cloud method and device
Concha et al. Real-time localization and dense mapping in underwater environments from a monocular sequence
CN112348890A (en) Space positioning method and device and computer readable storage medium
CN113674407B (en) Three-dimensional terrain reconstruction method, device and storage medium based on binocular vision image
CN107958489B (en) Curved surface reconstruction method and device
CN115222884A (en) Space object analysis and modeling optimization method based on artificial intelligence
Parmehr et al. Automatic registration of optical imagery with 3d lidar data using local combined mutual information
Nouduri et al. Deep realistic novel view generation for city-scale aerial images
CN110487254B (en) Rapid underwater target size measuring method for ROV
CN116883590A (en) Three-dimensional face point cloud optimization method, medium and system
CN113963107B (en) Binocular vision-based large-scale target three-dimensional reconstruction method and system
Fan et al. Collaborative three-dimensional completion of color and depth in a specified area with superpixels
Kitt et al. Trinocular optical flow estimation for intelligent vehicle applications

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant