CN113112588A - Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction - Google Patents

Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction

Info

Publication number
CN113112588A
Authority
CN
China
Prior art keywords
point cloud
image
rgb
dimensional
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110371345.8A
Other languages
Chinese (zh)
Inventor
邓凤淋
余兆凯
彭晓峰
朱朝显
邱昌杰
代志强
时云洪
王勇
周青媛
罗云馨
常友谦
李沛良
李训
宋娟
杨娟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PowerChina Guizhou Electric Power Engineering Co Ltd
Original Assignee
PowerChina Guizhou Electric Power Engineering Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PowerChina Guizhou Electric Power Engineering Co Ltd filed Critical PowerChina Guizhou Electric Power Engineering Co Ltd
Priority to CN202110371345.8A priority Critical patent/CN113112588A/en
Publication of CN113112588A publication Critical patent/CN113112588A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL (parent classes of all entries below)
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 5/70
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G06T 2207/20028 Bilateral filtering
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Abstract

The invention discloses a three-dimensional visualization method for underground pipe wells based on RGB-D depth camera reconstruction, comprising the following steps: acquiring image data frames of the underground pipe well from different viewing angles with an RGB-D depth camera, each frame containing a target depth image and a target RGB image; preprocessing the target depth image to obtain a corrected target depth image; fusing the corrected target depth image with the target RGB image of the same viewing angle to obtain fused image data frames; obtaining the initial pose information between adjacent fused image data frames and the fused three-dimensional point cloud from the fused image data frames; registering the point clouds of the different viewing angles according to the initial pose information and the fused three-dimensional point cloud to obtain a reconstructed three-dimensional point cloud of the underground pipe well; and processing the reconstructed three-dimensional point cloud to obtain a realistic three-dimensional visual model of the underground pipe well, thereby achieving three-dimensional visualization of its internal environment.

Description

Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction
Technical Field
The invention relates to an underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction, and belongs to the technical field of underground pipe well surveying and mapping.
Background
As the level of informatization of underground space rises, the traditional presentation of two-dimensional images and video can no longer meet the visualization requirements of underground well chambers and pipelines. Traditional underground well-chamber surveying relies mainly on manual mapping, 2D imaging, and 24-lens underground-robot measurement. Manual mapping, however, is dangerous, labor-intensive, and hard to keep accurate; ordinary 2D images lose depth information, so three-dimensional scale cannot be measured accurately, and their limited coverage prevents a full view of the underground space. The 24-lens underground robot is structurally complex, bulky, and poorly adaptable; it must be calibrated in a dedicated calibration room before use, and the calibration is cumbersome and expensive. In short, existing underground well-chamber surveying methods suffer from loss of three-dimensional information, low efficiency, and high cost.
Disclosure of Invention
Based on the above, the invention provides an underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction, which aims to solve the technical problems of three-dimensional information loss, low efficiency and higher cost of the existing underground well chamber measurement means.
The technical scheme of the invention is as follows: a three-dimensional visualization method for an underground pipe well based on RGB-D depth camera reconstruction is disclosed, wherein the method comprises the following steps:
acquiring image data frames of the underground pipe well from different view angles through an RGB-D depth camera, wherein the information of the image data frames comprises a target depth image and a target RGB image;
preprocessing the target depth image to obtain a corrected target depth image;
carrying out fusion processing on the corrected target depth image and the target RGB image under the same visual angle to obtain a fusion image data frame;
acquiring initial pose information between adjacent fusion image data frames and fusion three-dimensional point cloud according to the fusion image data frames;
according to the initial pose information and the fused three-dimensional point cloud, realizing the registration of the point cloud under different viewing angles, and acquiring the reconstructed three-dimensional point cloud of the underground pipe well;
and processing the reconstructed three-dimensional point cloud to obtain a real three-dimensional visual model of the underground pipe well.
Optionally, before image capture with the RGB-D depth camera, the depth measurement error of the RGB-D depth camera is corrected by means of polynomial surface fitting.
Optionally, the method for preprocessing the target depth image includes: and on the basis of a joint bilateral filtering algorithm, the target RGB image is used as a guide map to realize the rapid filtering and denoising of the target depth image.
Optionally, the method for calculating the initial pose information includes: and matching the adjacent fusion image data frames according to an SIFT algorithm to acquire initial pose information.
Optionally, the initial pose information and the fused three-dimensional point cloud are processed by using an ICP algorithm based on neighborhood characteristics to realize the registration of the point cloud under different viewing angles.
Optionally, the method for processing the reconstructed three-dimensional point cloud includes:
performing curved surface reconstruction on the reconstructed three-dimensional point cloud to obtain a curved surface reconstruction result;
and performing texture mapping on the curved surface reconstruction result according to the color image data of the target RGB image.
The invention has the beneficial effects that: the invention performs visual three-dimensional reconstruction of the underground well chamber with an RGB-D depth camera, achieving three-dimensional visualization of the internal environment; on the basis of the measured three-dimensional point cloud model, defects of the underground space can be analyzed quantitatively, providing technical support for the detection and maintenance of underground-space defects.
Drawings
FIG. 1 illustrates the Kinect triangulation principle;
FIG. 2 shows the fitted curve of Kinect ranging error versus distance;
FIG. 3 shows the depth error correction results;
FIG. 4 shows the original matching result;
FIG. 5 shows the refined matching result of the improved RANSAC algorithm;
FIG. 6 shows the pinhole camera model;
FIG. 7 shows the flow of ICP point cloud fine registration;
FIG. 8 shows the ICP point cloud fine registration result;
FIG. 9 shows the point cloud after global registration;
FIG. 10 shows the Poisson surface reconstruction result of the underground well;
FIG. 11 shows the textured model of the underground well chamber.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein, but rather should be construed as broadly as the present invention is capable of modification in various respects, all without departing from the spirit and scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1 to 11, in an embodiment of the present invention, a method for three-dimensional visualization of a subsurface tube well based on RGB-D depth camera reconstruction includes:
s1, acquiring image data frames of the underground pipe well from different perspectives through the RGB-D depth camera, wherein the information of the image data frames comprises a target depth image and a target RGB image;
Conventional laser scanners are difficult to operate in underground well chambers because of the confined space and monotonous texture. To meet the requirements of the underground well-chamber application scenario, this embodiment adopts the Kinect v2.0 depth camera, which is based on the ToF (time-of-flight) ranging principle, as the image acquisition device. The depth camera can acquire depth image data of the measured target under any illumination condition, is well suited to the underground space environment, and is of great significance for three-dimensional reconstruction of the underground space.
When image data frames are collected, the depth camera is placed into the underground well chamber by a mechanical transmission device and rotated to obtain depth and color data in three directions: an upper, a middle and a lower viewing angle. The rotation device is set to 16 captures per layer by default, turning 20 degrees each time, and data are collected on three levels of the well chamber. During acquisition, the measuring distance of the depth camera is kept at no less than 1 m to ensure good data quality, and the acquisition range of the device is set to 0.6 m-7.5 m to ensure that the collected data are valid. Because the field of view of the Kinect v2.0 depth sensor is 70 degrees horizontally and 60 degrees vertically, its coverage in both directions is limited; in actual measurement the Kinect v2.0 depth camera is therefore tilted upward and downward by a certain angle for additional scans, which enlarges the acquisition coverage and ensures sufficient overlap between the image data acquired within one rotation. A quick numerical check of this acquisition geometry is sketched below.
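As a simple, hedged illustration of the overlap argument above (the 70-degree field of view, 20-degree step and 16 captures per layer are taken from the description; the helper functions and their names are purely illustrative and not part of the patent), one can verify that adjacent captures overlap and that one rotation closes the full circle:

```python
def adjacent_overlap(fov_deg: float = 70.0, step_deg: float = 20.0) -> float:
    """Fraction of the horizontal field of view shared by two neighboring captures."""
    return max(fov_deg - step_deg, 0.0) / fov_deg


def covers_full_circle(n_views: int = 16, fov_deg: float = 70.0, step_deg: float = 20.0) -> bool:
    """True if n_views captures, step_deg apart, jointly span the full 360 degrees."""
    # neighboring views must overlap, and the first and last views must close the ring
    return fov_deg >= step_deg and (n_views - 1) * step_deg + fov_deg >= 360.0


print(f"adjacent-view overlap: {adjacent_overlap():.0%}")   # about 71 % of the field of view
print(f"16 views close the ring: {covers_full_circle()}")   # True
```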
To ensure the accuracy of its depth measurements, the depth measurement error of the Kinect depth camera is corrected by polynomial surface fitting before shooting.
Specifically, a depth-measurement error correction method based on polynomial surface fitting is proposed for the ranging-error behavior of the Kinect depth camera in different directions. The triangulation principle of the Kinect depth camera is illustrated in FIG. 1, and from the similarity of triangles one obtains

d = f·b·(Z_o − Z_k)/(Z_o·Z_k)    (1)

wherein Z_o is the reference-plane depth value, f is the focal length of the infrared camera, b is the baseline length, and d is the parallax. The depth Z_k of the measured target then follows from

Z_k = Z_o/(1 + Z_o·d/(f·b))    (2)

If Δd is the parallax error, the depth measurement error ΔZ_k can be expressed as

ΔZ_k = (Z_k²/(f·b))·Δd    (3)

The parallax error is determined by the systematic error, which is fixed, and therefore the parallax error is also constant. Equation (3) shows that the depth measurement error grows with the measurement distance. From this triangulation error model of the RGB-D depth camera, the true depth value F(u, v) of each pixel can be written as a polynomial surface model of the distance Z_k measured by the RGB-D depth camera:

F(u, v) = a_n(u, v)·Z_k^n + … + a_1(u, v)·Z_k + b_0(u, v)    (4)

in which a_1(u, v), …, a_n(u, v) are polynomial coefficients and b_0(u, v) is a constant term.
In the experiment, the ranging results of the Kinect depth camera were compared with laser ranging results, yielding several groups of depth measurement errors and their variation across the ranging field of view. A polynomial surface model of the true depth value versus the measured distance was then constructed from the collected depth error data, and the measured error data were fitted by least squares to obtain a fitting formula of depth error versus measured depth; the fitting result is shown in FIG. 2.
Fitting the depth error results in FIG. 2 with a curve function model yields the following fitting formula:

y = 4×10⁻¹⁰·x³ − 3×10⁻⁶·x² + 0.01·x    (5)

where y is the error correction value and x is the distance value. If the depth value measured by the Kinect v2.0 is d, the corrected depth is

d′ = d − (4×10⁻¹⁰·x³ − 3×10⁻⁶·x² + 0.01·x)    (6)
The depth error is corrected with the polynomial-surface error correction equation, and the corrected error distribution is shown in FIG. 3. The experimental result in FIG. 3 shows that the original depth error rises rapidly as the measured distance increases, whereas after error compensation the error varies much more gradually with distance: within the range 0.5-4.5 m the depth measurement error of the Kinect depth camera is below 2 cm, and within 4.5-7 m it stays within 4.5 cm. The experimental results show that the proposed error correction method corrects the Kinect depth measurement data well and effectively improves its ranging accuracy.
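As a minimal sketch (not the patent's implementation) of how the fitted compensation of equations (5)-(6) could be applied per pixel, assuming the distance x is simply the raw measured depth and that depth values are expressed in millimeters (the patent does not state the unit):

```python
import numpy as np


def depth_error(x: np.ndarray) -> np.ndarray:
    """Fitted ranging error y = 4e-10*x**3 - 3e-6*x**2 + 0.01*x, equation (5)."""
    return 4e-10 * x**3 - 3e-6 * x**2 + 0.01 * x


def correct_depth(depth: np.ndarray) -> np.ndarray:
    """Apply d' = d - y(d), equation (6), to every valid pixel of a depth map."""
    depth = depth.astype(np.float64)
    corrected = depth - depth_error(depth)
    corrected[depth <= 0] = 0.0          # leave invalid (zero) pixels untouched
    return corrected


# usage: correct a simulated 424x512 Kinect v2 depth frame (values in mm, assumed)
raw = np.random.uniform(600.0, 7500.0, size=(424, 512))
fixed = correct_depth(raw)
```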
S2, preprocessing the target depth image to obtain a corrected target depth image;
preferably, on the basis of a joint bilateral filtering algorithm, the target RGB image is used as a guide map to realize rapid filtering and denoising of the target depth image.
Specifically, the joint bilateral filter is an improved algorithm derived from the bilateral filter and can be used for up-sampling low-resolution images, filling in lost image information. In the bilateral filtering algorithm all weights are computed from a single image, whereas the joint bilateral filtering algorithm introduces an information-rich second image to compute the weights and thus obtains better weights. The RGB-D camera acquires a target depth image and an RGB image simultaneously; the RGB color image contains the complete information of the scene and can be used to complement the missing parts of the depth image. Therefore, on the basis of the joint bilateral filtering algorithm, fast filtering and denoising of the depth image is achieved by taking the RGB image as the guide image.
The joint bilateral filtering weight function is expressed as follows:

w(i, j, x, y) = w_g(i, j, x, y)·w_r(i, j, x, y)    (7)

where (i, j) and (x, y) are the coordinates of the two selected pixels; the gray-level (range) weight w_r obtained from the color image and the spatial-domain weight w_g of the depth image are calculated as

w_r(i, j, x, y) = exp(−(g(i, j) − g(x, y))²/(2σ_r²))    (8)

w_g(i, j, x, y) = exp(−((i − x)² + (j − y)²)/(2σ_g²))    (9)

in which g(i, j) and g(x, y) are the gray values at pixels (i, j) and (x, y) after the color image is converted to grayscale, and σ_r and σ_g are the corresponding Gaussian kernel parameters.
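The following is a compact, unoptimized sketch of a joint bilateral filter built from the weights of equations (7)-(9), with the grayscale version of the registered RGB image as the guide; the window radius and the sigma values are illustrative assumptions rather than parameters given in the patent:

```python
import numpy as np


def joint_bilateral_filter(depth, gray, radius=3, sigma_s=3.0, sigma_r=10.0):
    """Filter a depth map using the gray guide image for the range weight."""
    h, w = depth.shape
    out = np.zeros((h, w), dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))        # equation (9)
    pad_d = np.pad(depth.astype(np.float64), radius, mode="edge")
    pad_g = np.pad(gray.astype(np.float64), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            win_d = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w_range = np.exp(-(win_g - float(gray[i, j]))**2 / (2.0 * sigma_r**2))  # eq. (8)
            weights = w_spatial * w_range                                           # eq. (7)
            out[i, j] = np.sum(weights * win_d) / np.sum(weights)
    return out
```

The double Python loop keeps the mapping to equations (7)-(9) explicit; a practical implementation would vectorize it or use an optimized filtering routine.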
S3, carrying out fusion processing on the corrected target depth image and the target RGB image under the same visual angle to obtain a fusion image data frame;
the fusion of the depth image and the RGB image is common knowledge of those skilled in the art and will not be described in detail herein.
S4, acquiring initial pose information between adjacent fusion image data frames and fusion three-dimensional point cloud according to the fusion image data frames;
preferably, the adjacent fusion image data frames are matched according to the SIFT algorithm, and initial pose information is obtained.
The depth camera can rapidly acquire the depth and color information of the scene target, so that color point cloud data can be obtained. However, due to the limited range of the view angle of the depth camera, the large size of the scene target, the obstruction of the obstacle, and the like, only a part of data of the measured target can be acquired by the scanning method from one angle at a time. Therefore, data acquisition needs to be performed multiple times from different angles to acquire global point cloud data of scene targets. Coordinate systems of point cloud data obtained under different viewing angles are inconsistent, and the point cloud data need to be unified to the same coordinate system in a point cloud registration mode.
The Scale-Invariant Feature Transform (SIFT) algorithm detects and describes regional features in an image; its principle is to detect key points in scale spaces built at different scales and to compute their orientations. SIFT mainly detects key points carrying distinctive information and remains effective on partially occluded targets; SIFT features are also information-rich and suitable for fast, accurate matching against large databases. After the SIFT feature points of the images are obtained, the features of two adjacent image frames are matched to obtain the pose transformation between the two frames. The traditional RANSAC algorithm matches randomly selected feature points, ignores the differences between feature points, and easily produces mismatches. The improved RANSAC algorithm used here is based on an iterative idea: the randomly sampled results produced during iteration are first screened, and obviously erroneous matching pairs are removed. The principle is as follows:
1) Let the feature point sets extracted from two adjacent image frames be

P = {p_i | p_i ∈ P, i = 1, 2, …, m}
Q = {q_i | q_i ∈ Q, i = 1, 2, …, m}    (10)

For a feature point p_i in P, find the nearest-neighbor and second-nearest-neighbor feature points in Q. If the ratio of the Euclidean distance to the nearest neighbor over the distance to the second-nearest neighbor is smaller than a set threshold, p_i and its nearest-neighbor feature point are taken as a pair of matching points.
2) A bidirectional matching mechanism is applied: the point in Q corresponding to a feature point in P is found, and the point in P corresponding to that point in Q is found in turn. Only when the two matches correspond one to one is the match considered correct; matches that fail this check are removed, which achieves the screening purpose.
3) With a threshold on the minimum matching distance, the remaining matches whose distance is larger than a certain multiple of the minimum matching distance are eliminated. Through the above steps, matching point pairs of higher quality are obtained; a code sketch of these three screening steps is given after this list.
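A hedged sketch of the three screening steps above, using OpenCV's SIFT detector and a brute-force matcher; the ratio threshold of 0.7 and the distance factor of 3 are illustrative values, not constants specified in the patent:

```python
import cv2


def ratio_test(knn_matches, ratio):
    """Keep matches whose nearest/second-nearest distance ratio is below the threshold (step 1)."""
    good = []
    for pair in knn_matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good


def match_and_screen(img1, img2, ratio=0.7, dist_factor=3.0):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    bf = cv2.BFMatcher(cv2.NORM_L2)

    fwd = ratio_test(bf.knnMatch(des1, des2, k=2), ratio)
    # step 2: bidirectional check, the match must also be the best match
    # when searching from the second image back to the first
    back = {m.queryIdx: m.trainIdx
            for m in ratio_test(bf.knnMatch(des2, des1, k=2), ratio)}
    mutual = [m for m in fwd if back.get(m.trainIdx) == m.queryIdx]
    # step 3: reject pairs farther than dist_factor times the minimum matching distance
    if not mutual:
        return kp1, kp2, []
    d_min = min(m.distance for m in mutual)
    good = [m for m in mutual if m.distance <= dist_factor * max(d_min, 1e-6)]
    return kp1, kp2, good


# usage (grayscale frames of the same scene from adjacent viewpoints):
# img_a = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
# img_b = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
# kp_a, kp_b, matches = match_and_screen(img_a, img_b)
```

The surviving matches can then be combined with the depth data to estimate the initial pose between the two frames.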
In the experiment, the acquired image data were matched with the SIFT feature point algorithm to obtain the initial pose estimate, which provides a good initial value for the subsequent fine ICP point cloud registration and further improves the registration accuracy of the point cloud images. FIG. 4 shows the original matching result of SIFT feature points on underground well-chamber images; it contains many mismatches, which is unfavorable for the initial pose estimation between image frames. To improve the accuracy of the initial pose estimation, the improved RANSAC algorithm is used to process the feature matching result and eliminate the mismatches, as shown in FIG. 5. One group of data was selected for statistics, giving the matching results in Table 1. The experimental results show that the matching rate of feature points processed by the improved RANSAC algorithm improves markedly, the correct matching rate reaches 95.24%, and erroneous matching pairs are effectively removed.
TABLE 1 SIFT feature point matching result statistics
In the invention, the method for obtaining the fused three-dimensional point cloud from the fused image data frames is based on the projection principle of the pinhole camera model, a commonly used camera model shown in FIG. 6. Let P(X_C, Y_C, Z_C) be a point in the camera coordinate system and P′(x, y) its projection on the image plane.

From the similarity relations in FIG. 6 it can be derived that

x = f·X_C/Z_C,  y = f·Y_C/Z_C    (11)

where f is the distance from O to O_1 (the focal length). Expressed with homogeneous coordinates and matrices, expression (11) becomes

Z_C·[x, y, 1]^T = [[f, 0, 0, 0], [0, f, 0, 0], [0, 0, 1, 0]]·[X_C, Y_C, Z_C, 1]^T    (12)

which simplifies to

Z_C·[u, v, 1]^T = M_1·M_2·[X_w, Y_w, Z_w, 1]^T = M·[X_w, Y_w, Z_w, 1]^T, with M_1 = [[a_x, 0, u_0, 0], [0, a_y, v_0, 0], [0, 0, 1, 0]]    (13)

In this formula (X_w, Y_w, Z_w) are the coordinates of a spatial point of the real scene in the world coordinate system and (u, v) are its coordinates in the pixel coordinate system, in pixels; a_x = f/dx and a_y = f/dy; M is the 3×4 projection matrix; M_1 is determined by the intrinsic parameters a_x, a_y, u_0, v_0 of the Kinect depth camera, with (u_0, v_0) the principal point coordinates and a_x and a_y the scale factors along the u-axis and v-axis of the pixel coordinate system; M_2 is determined by the extrinsic parameters of the Kinect depth camera.

As can be seen from equation (13), once the intrinsic and extrinsic parameters of the camera are known, the projection matrix M is determined; when the coordinates of a spatial point P and M are known, the pixel coordinates (u, v) of its projection point p can be obtained from equation (13).
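As an illustration of these projection relations, and of how a fused color point cloud can be obtained from a registered depth/RGB pair, a small sketch follows; the intrinsic values are placeholders, not calibrated Kinect parameters:

```python
import numpy as np

# illustrative intrinsics (pixels); real values come from camera calibration
AX, AY, U0, V0 = 365.0, 365.0, 256.0, 212.0


def backproject(u, v, z):
    """Pixel (u, v) with depth z -> point (X_C, Y_C, Z_C) in the camera frame."""
    return np.array([(u - U0) * z / AX, (v - V0) * z / AY, z])


def project(p_cam):
    """Camera-frame point -> pixel coordinates (u, v), per equation (13)."""
    xc, yc, zc = p_cam
    return np.array([AX * xc / zc + U0, AY * yc / zc + V0])


def depth_to_point_cloud(depth, rgb):
    """Fuse a registered depth/RGB pair into an N x 6 colored point cloud."""
    v, u = np.nonzero(depth > 0)                   # valid depth pixels
    z = depth[v, u].astype(np.float64)
    pts = np.stack([(u - U0) * z / AX, (v - V0) * z / AY, z], axis=1)
    return np.hstack([pts, rgb[v, u].astype(np.float64)])
```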
S5, realizing the registration of point clouds under different viewing angles according to the initial pose information and the fused three-dimensional point cloud, and acquiring the reconstructed three-dimensional point cloud of the underground pipe well;
preferably, the initial pose information and the fused three-dimensional point cloud are processed by adopting an ICP algorithm based on neighborhood characteristics so as to realize the registration of the point cloud under different visual angles.
After the image data are matched with the SIFT algorithm, the initial pose information between image data frames is obtained. On the basis of the feature point matching, an Iterative Closest Point (ICP) algorithm based on neighborhood features is then used to obtain accurate pose estimates between the three-dimensional point clouds and thereby achieve accurate point cloud registration; the algorithm flow is shown in FIG. 7.
First, the neighborhood features are established. The principle is as follows (a code sketch of the feature point selection in step a is given after this lettered list):
a. Selecting feature points: let r be the neighborhood radius of a sample point p_i, let p_ij denote the neighboring points of p_i within this neighborhood, and let h_ij be their curvatures. The neighborhood curvature h̄_i of the current sample point can then be expressed as

h̄_i = (1/n)·Σ_{j=1..n} h_ij    (14)

where n is the number of neighboring points of the sample point. The mean μ_N and variance σ_N² of the point cloud are calculated as

μ_N = (1/N)·Σ_{i=1..N} h̄_i,  σ_N² = (1/N)·Σ_{i=1..N} (h̄_i − μ_N)²    (15)

where N is the number of sample points in the point cloud. Sample points whose neighborhood curvature lies outside the range μ_N ± α·σ_N are taken as the selected feature points, where α is an adjustment factor used to control the number of feature points.
b. Determining corresponding points: let P and Q be two adjacent point clouds. To find the point in Q corresponding to a feature point p_i in P, the three nearest neighboring points of p_i in Q are selected and triangulated.
c. Screening corresponding point pairs: some mismatched pairs exist among the selected corresponding pairs; they can be screened out and removed using the neighborhood features of the points. The neighborhood feature of a sample point is described by the pair (n, h̄), where n is the normal vector at the point and h̄ is the curvature-weighted average of the neighboring points in its neighborhood; corresponding pairs whose neighborhood features are inconsistent are rejected.
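A hedged sketch of the feature point selection of step a: a per-point curvature is estimated, averaged over each point's neighborhood as in equation (14), and points whose neighborhood curvature falls outside μ_N ± α·σ_N of equation (15) are kept. The PCA surface-variation measure used here as the curvature h_ij, the brute-force neighbor search, and the parameter values are assumptions for illustration; the patent does not specify them:

```python
import numpy as np


def knn_indices(points, k):
    """Brute-force k nearest neighbors (excluding the point itself); fine for small demo clouds."""
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, 1:k + 1]


def surface_variation(points, nbrs):
    """Smallest-eigenvalue ratio of each local covariance, used here as the curvature h_ij."""
    curv = np.empty(len(points))
    for i, idx in enumerate(nbrs):
        w = np.linalg.eigvalsh(np.cov(points[idx].T))   # ascending eigenvalues
        curv[i] = w[0] / max(w.sum(), 1e-12)
    return curv


def select_feature_points(points, k=10, alpha=1.0):
    """Return indices of points whose neighborhood curvature is outside mu +/- alpha*sigma."""
    nbrs = knn_indices(points, k)
    h = surface_variation(points, nbrs)
    h_bar = h[nbrs].mean(axis=1)                 # neighborhood curvature, equation (14)
    mu, sigma = h_bar.mean(), h_bar.std()        # equation (15)
    return np.nonzero(np.abs(h_bar - mu) > alpha * sigma)[0]
```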
After the neighborhood features are established, accurate pose estimation between two adjacent point cloud frames is carried out with the ICP algorithm based on these neighborhood features. Suppose P and P′ are already matched point pairs:

P = {p_1, …, p_n},  P′ = {p′_1, …, p′_n}    (16)

The goal is to find a Euclidean transformation R, t such that

min_{R,t} (1/2)·Σ_{i=1..n} ‖p_i − (R·p′_i + t)‖²    (17)
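For illustration, the least-squares problem of equation (17) with known correspondences has the classical closed-form SVD (Kabsch) solution sketched below; it is offered as a standard way this inner step of ICP can be solved, not as the patent's exact implementation:

```python
import numpy as np


def best_rigid_transform(P, Pp):
    """Return R, t minimizing sum ||P_i - (R @ Pp_i + t)||^2 for matched N x 3 arrays."""
    c_p, c_pp = P.mean(axis=0), Pp.mean(axis=0)
    H = (Pp - c_pp).T @ (P - c_p)                                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = c_p - R @ c_pp
    return R, t


# quick self-check with a synthetic rigid motion
rng = np.random.default_rng(0)
Pp = rng.normal(size=(100, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
P = Pp @ R_true.T + np.array([0.5, -0.2, 1.0])
R_est, t_est = best_rigid_transform(P, Pp)
assert np.allclose(R_est, R_true, atol=1e-8)
assert np.allclose(t_est, [0.5, -0.2, 1.0], atol=1e-8)
```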
On top of the initial pose estimate provided by SIFT feature matching, the ICP algorithm combined with the neighborhood features achieves accurate registration of the local inter-frame point clouds and of the global point cloud. The inter-frame registration result is shown in FIG. 8: after fine registration with the ICP algorithm, the two point cloud frames are well stitched and fused, and the local registration details are also well represented. Fine registration of two frames, however, only achieves local registration and does not yet fuse the global point cloud; the multi-view point clouds are therefore further fused with the neighborhood-feature-based fine ICP registration to obtain a globally consistent point cloud reconstruction, as shown in FIG. 9. The fine registration results for two-frame and global point clouds were evaluated experimentally and compared with the classical ICP algorithm; the registration results of the two algorithms are listed in Table 2. The experimental results show that the fine registration of the proposed method outperforms classical ICP on both the local and the global point cloud, with smaller registration errors and markedly reduced registration time.
TABLE 2 Point cloud ICP registration results
And S6, processing the reconstructed three-dimensional point cloud to obtain a real three-dimensional visual model of the underground pipe well.
Preferably, the method for processing the reconstructed three-dimensional point cloud comprises:
s61, performing curved surface reconstruction on the reconstructed three-dimensional point cloud to obtain a curved surface reconstruction result;
After point cloud reconstruction, a sparse or dense point cloud of the underground well chamber is obtained; however, such a result remains a point cloud map containing a large set of points, the 3D points are not explicitly connected, and its visibility is poor. Further surface reconstruction is needed to restore a more realistic physical appearance and enhance the visualization effect. In the invention, surface reconstruction of the reconstructed three-dimensional point cloud of the underground well chamber is carried out with the PCL point cloud processing library under VS2017 in a Windows 10 environment. FIG. 10 shows the result of Poisson surface reconstruction based on the three-dimensionally reconstructed point cloud of the underground well chamber. In the reconstructed surface model, the previously independent 3D points become connected after Poisson surface reconstruction, the basic outline and shape of the target surface are well expressed, and the visibility of the model is clearly improved. An illustrative sketch of this surface reconstruction step is given below.
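The patent performs this step with the PCL library under VS2017; as a hedged, roughly equivalent illustration, the snippet below uses the Open3D library instead (estimate normals on the registered cloud, run Poisson reconstruction, and trim poorly supported vertices). The file names and parameter values are placeholders:

```python
import numpy as np
import open3d as o3d

# load the globally registered point cloud of the well chamber (placeholder file name)
pcd = o3d.io.read_point_cloud("well_registered.ply")

# Poisson reconstruction needs consistently oriented normals
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(15)

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# drop vertices with very low support so sparse regions do not create spurious surface
densities = np.asarray(densities)
mesh.remove_vertices_by_mask(densities < np.quantile(densities, 0.02))
o3d.io.write_triangle_mesh("well_poisson_mesh.ply", mesh)
```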
S62 performs texture mapping on the curved surface reconstruction result according to the color image data of the target RGB image.
The model obtained after surface reconstruction lacks color texture information, differs noticeably from the visual appearance of the actual scene, and does not yet achieve three-dimensional real-scene reconstruction in the true sense; its visualization effect is therefore not optimal. The method consequently combines the color image data acquired by the Kinect depth camera with the reconstructed surface model and obtains a model with color texture through texture mapping. FIG. 11 shows the textured view of the underground well scene model: compared with the earlier point cloud reconstruction result, the three-dimensional reconstruction after real-texture mapping visualizes the local texture details of the underground well much better, and the result is essentially consistent with the actual scene. The original three-dimensional point cloud reconstruction is only a large, unconnected set of points; the surface reconstruction method lifts these points to a surface and yields a three-dimensional reconstruction model, and after real-texture mapping the true appearance of the target scene is restored, achieving genuine three-dimensional visualization. A simplified sketch of the texture mapping idea follows.
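As a much-simplified, hedged illustration of the texture mapping step: each reconstructed vertex is projected into one RGB frame with the relation of equation (13) and given the color of the pixel it lands on (per-vertex color rather than a true texture atlas). The intrinsics and the camera pose (R, t) of the chosen frame are assumed known from the registration stage; all names are illustrative:

```python
import numpy as np


def color_vertices(vertices, rgb, R, t, ax, ay, u0, v0):
    """vertices: N x 3 world points; rgb: H x W x 3 image; returns N x 3 colors."""
    cam = vertices @ R.T + t                      # world frame -> camera frame
    h, w = rgb.shape[:2]
    colors = np.zeros((len(vertices), 3), dtype=np.float64)
    valid = cam[:, 2] > 1e-6                      # points in front of the camera
    u = ax * cam[valid, 0] / cam[valid, 2] + u0   # projection, as in equation (13)
    v = ay * cam[valid, 1] / cam[valid, 2] + v0
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.nonzero(valid)[0][inside]
    colors[idx] = rgb[v[inside].astype(int), u[inside].astype(int)]
    return colors
```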
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the inventive concept, which falls within the scope of the present invention. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (6)

1. A three-dimensional visualization method for an underground pipe well based on RGB-D depth camera reconstruction is disclosed, wherein the method comprises the following steps:
acquiring image data frames of the underground pipe well from different view angles through an RGB-D depth camera, wherein the information of the image data frames comprises a target depth image and a target RGB image;
preprocessing the target depth image to obtain a corrected target depth image;
carrying out fusion processing on the corrected target depth image and the target RGB image under the same visual angle to obtain a fusion image data frame;
acquiring initial pose information between adjacent fusion image data frames and fusion three-dimensional point cloud according to the fusion image data frames;
according to the initial pose information and the fused three-dimensional point cloud, realizing the registration of the point cloud under different viewing angles, and acquiring the reconstructed three-dimensional point cloud of the underground pipe well;
and processing the reconstructed three-dimensional point cloud to obtain a real three-dimensional visual model of the underground pipe well.
2. A method for three-dimensional visualization of subterranean wells according to claim 1, wherein the depth measurement error of the RGB-D depth camera is corrected based on polynomial surface fitting prior to the capturing by the RGB-D depth camera.
3. A method for three-dimensional visualization of a subterranean well bore according to claim 1, wherein the target depth image is preprocessed by: and on the basis of a joint bilateral filtering algorithm, the target RGB image is used as a guide map to realize the rapid filtering and denoising of the target depth image.
4. The underground pipe well three-dimensional visualization method according to claim 1, wherein the calculation method of the initial pose information is as follows: and matching the adjacent fusion image data frames according to an SIFT algorithm to acquire initial pose information.
5. The underground pipe well three-dimensional visualization method according to claim 1, wherein the initial pose information and the fused three-dimensional point cloud are processed by adopting an ICP (Iterative Closest Point) algorithm based on neighborhood characteristics so as to realize the registration of the point cloud under different viewing angles.
6. A method for three-dimensional visualization of a subterranean well bore as claimed in claim 1, wherein the method of processing the reconstructed three-dimensional point cloud comprises:
performing curved surface reconstruction on the reconstructed three-dimensional point cloud to obtain a curved surface reconstruction result;
and performing texture mapping on the curved surface reconstruction result according to the color image data of the target RGB image.
CN202110371345.8A 2021-04-07 2021-04-07 Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction Pending CN113112588A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110371345.8A CN113112588A (en) 2021-04-07 2021-04-07 Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110371345.8A CN113112588A (en) 2021-04-07 2021-04-07 Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction

Publications (1)

Publication Number Publication Date
CN113112588A true CN113112588A (en) 2021-07-13

Family

ID=76714423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110371345.8A Pending CN113112588A (en) 2021-04-07 2021-04-07 Underground pipe well three-dimensional visualization method based on RGB-D depth camera reconstruction

Country Status (1)

Country Link
CN (1) CN113112588A (en)



Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102938142A (en) * 2012-09-20 2013-02-20 武汉大学 Method for filling indoor light detection and ranging (LiDAR) missing data based on Kinect
CN103456038A (en) * 2013-08-19 2013-12-18 华中科技大学 Method for rebuilding three-dimensional scene of downhole environment
US20180211399A1 (en) * 2017-01-26 2018-07-26 Samsung Electronics Co., Ltd. Modeling method and apparatus using three-dimensional (3d) point cloud

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
信寄遥 et al.: "Multi-view three-dimensional reconstruction of mechanical parts based on an RGB-D camera", Computer Technology and Automation, 28 September 2020 (2020-09-28), pages 147 - 152 *
朱迪 et al.: "Research on a reconstruction method for physical geological specimens based on Kinect", Modern Computer (Professional Edition), 5 November 2018 (2018-11-05), pages 29 - 37 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114820954A (en) * 2022-06-29 2022-07-29 武汉中仪物联技术股份有限公司 Three-dimensional reconstruction method and device, electronic equipment and storage medium
CN116168163A (en) * 2023-03-29 2023-05-26 湖北工业大学 Three-dimensional model construction method, device and storage medium
CN116168163B (en) * 2023-03-29 2023-11-17 湖北工业大学 Three-dimensional model construction method, device and storage medium
CN116663408A (en) * 2023-05-30 2023-08-29 昆明理工大学 Establishment method of optimal digging pose of pseudo-ginseng
CN116663408B (en) * 2023-05-30 2023-12-22 昆明理工大学 Establishment method of optimal digging pose of pseudo-ginseng
CN116912805A (en) * 2023-09-07 2023-10-20 山东博昂信息科技有限公司 Well lid abnormity intelligent detection and identification method and system based on unmanned sweeping vehicle
CN116912805B (en) * 2023-09-07 2024-02-02 山东博昂信息科技有限公司 Well lid abnormity intelligent detection and identification method and system based on unmanned sweeping vehicle
CN116894907A (en) * 2023-09-11 2023-10-17 菲特(天津)检测技术有限公司 RGBD camera texture mapping optimization method and system
CN116894907B (en) * 2023-09-11 2023-11-21 菲特(天津)检测技术有限公司 RGBD camera texture mapping optimization method and system
CN117557733A (en) * 2024-01-11 2024-02-13 江西啄木蜂科技有限公司 Natural protection area three-dimensional reconstruction method based on super resolution


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination