CN114119891A - Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map

Info

Publication number
CN114119891A
Authority
CN
China
Prior art keywords
line segment; semi-dense; key frame; cluster
Prior art date
2021-11-26
Legal status
Pending
Application number
CN202111422402.7A
Other languages
Chinese (zh)
Inventor
马国军
马洪朋
冷加俊
段云龙
申佳玮
朱琎
Current Assignee
Jiangsu University of Science and Technology
Original Assignee
Jiangsu University of Science and Technology
Priority date
2021-11-26
Filing date
2021-11-26
Publication date
2022-03-01
Application filed by Jiangsu University of Science and Technology
Priority to CN202111422402.7A
Publication of CN114119891A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G06T 7/00 Image analysis
    • G06T 7/50 Depth or shape recovery
    • G06T 7/543 Depth or shape recovery from line drawings
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G06T 2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a three-dimensional reconstruction method and a reconstruction system for a robot monocular semi-dense map. The method comprises the following steps: 1. taking an image collected by a monocular camera carried by the robot as a key frame, calculating the depth of the key frame to form a semi-dense depth map, and combining the key frame with the semi-dense depth map to obtain semi-dense point cloud data; 2. extracting edge line segments of the key frame, wherein each edge line segment is a pixel sequence formed by a plurality of pixels; 3. fitting each edge line segment in the key frame into a first 3D line segment to form a first 3D line segment set LD; 4. clustering the first 3D line segments of the same key frame and fitting each 3D line segment cluster into a second 3D line segment, so that the second 3D line segments in the key frame represent image details, the semi-dense point cloud data obtained by ORB-SLAM are simplified, and three-dimensional reconstruction of a semi-dense map is achieved.

Description

Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map
Technical Field
The invention relates to the field of simultaneous localization and mapping for visual robots, and in particular to a three-dimensional reconstruction method and a reconstruction system for a robot monocular semi-dense map.
Background
In most application fields, a mobile robot needs to determine its own position in an unknown environment, so it must be capable of both environment mapping and localization; simultaneous localization and mapping (SLAM) technology arose to solve exactly this problem for mobile robots.
Visual SLAM has recently become one of the hot directions in SLAM research. ORB-SLAM (Oriented FAST and Rotated BRIEF SLAM) is one of the most advanced visual SLAM algorithms at present, with stability far exceeding that of other schemes, but it also has shortcomings: the sparse feature-point map it constructs is sufficient only for localization and cannot support robot navigation, obstacle avoidance, interaction and similar functions, whereas a three-dimensional representation of the scene surface is essential for applications such as photogrammetry, robot navigation and augmented reality.
Disclosure of Invention
The purpose of the invention is as follows: the invention provides a three-dimensional reconstruction method and a three-dimensional reconstruction system for a robot monocular semi-dense map, which simplify the semi-dense point cloud data obtained by ORB-SLAM and can generate a semi-dense three-dimensional scene composed of 3D line segments.
The technical scheme is as follows: the three-dimensional reconstruction method of a robot monocular semi-dense map disclosed by the invention comprises the following steps:
S1, taking an image collected by the monocular camera carried by the robot as a key frame, calculating the depth of the key frame to form a semi-dense depth map, and combining the key frame with the semi-dense depth map to obtain semi-dense point cloud data;
S2, extracting edge line segments of the key frame, wherein each edge line segment is a pixel sequence formed by a plurality of pixels;
S3, fitting each edge line segment in the key frame into a first 3D line segment to form a first 3D line segment set LD;
S4, clustering the first 3D line segments of the same key frame, fitting each 3D line segment cluster into a second 3D line segment after clustering, and using the second 3D line segments in the key frame to represent image details, thereby simplifying the semi-dense point cloud data obtained by ORB-SLAM and realizing three-dimensional reconstruction of the semi-dense map.
Preferably, step S2 adopts the Edge Drawing algorithm to extract the key frame edge line segments.
Specifically, fitting the edge line segments in the key frame into first 3D line segments in step S3 comprises the following steps:
S31, setting the m-th edge line segment ls_m in the key frame to consist of n pixel points, with the corresponding pixel sequence ls_m = {p_1, p_2, ..., p_n}, where the coordinates of each pixel point are (x_p, y_p, Z_p), (x_p, y_p) being the coordinates of the pixel point in the image coordinate system and Z_p being the depth of the pixel point in the semi-dense depth map;
initializing two empty pixel sets, pixels and outliers, where pixels stores the pixel points to be fitted and outliers stores the current outlier pixel points;
S32, fitting straight lines from the first B pixels of ls_m:
first taking the first B pixels of ls_m and, based on the coordinates (x_p, y_p), fitting a straight line l_im in the image coordinate system OXY; establishing for ls_m a local coordinate system p_1xz whose origin is the first pixel point p_1 of ls_m, whose x-axis is the ray pointing from the first pixel point p_1 to the last pixel point p_n, and whose positive z-axis is the depth direction at p_1; based on the coordinates (D_p, Z_p), fitting a straight line l_de in the local coordinate system p_1xz as the initial line segment, where D_p is the coordinate of the pixel point on the x-axis of the local coordinate system;
S33, calculating the distances from the remaining pixels of ls_m to the two straight lines l_im and l_de:
calculating the distances d_im and d_de from the (B+k)-th pixel point of the pixel sequence ls_m to l_im and l_de, respectively; if d_im < e_1 and d_de < e_2, adding the pixel point to the pixels set; otherwise the (B+k)-th pixel point is an outlier and is added to the outliers set; k = 1, 2, ..., n-B; e_1 and e_2 are the preset first distance threshold and second distance threshold;
S34, judging whether B consecutive pixels are outliers; if so, fitting new initial straight lines l_im' and l_de' to the pixels in the pixels set, in the image coordinate system OXY and the local coordinate system p_1xz respectively, then determining with l_im' and l_de' the depth value Z_p of each pixel on the plane and the distance D_p from each pixel to the x-axis, and fitting again to obtain the first 3D line segment ld_m of the edge line segment ls_m; adding ld_m to the set LD, clearing outliers and pixels, and jumping to step S31 to fit the next edge line segment ls_{m+1}.
Preferably, the invention uses a total least squares method for straight line fitting.
Specifically, step S4 comprises:
S41, taking the first 3D line segment ld_1 in LD as the initial line segment cluster C_1 and setting the total number H of current line segment clusters to 1; starting from the w-th 3D line segment ld_w in LD, performing clustering according to steps S42 to S44, where w = 2, 3, ..., W and W is the total number of first 3D line segments in the current key frame;
S42, setting h = 1;
S43, calculating the angle measure α and the distance measure d between ld_w and a first 3D line segment of line segment cluster C_h: α is the angle between the two segments and d = min(d1, d2), where d1 and d2 are distances computed from the end points p_i, q_i of ld_w and the end points p_{i+1}, q_{i+1} of the first 3D line segment of line segment cluster C_h (the exact formulas for α, d1 and d2 are given as images in the original publication);
S44, if α < λ_α and d < λ_d, adding ld_w to line segment cluster C_h, incrementing w by one, and jumping to step S42 to cluster the next line segment; otherwise, if h < H, incrementing h by one and jumping to step S43 to judge whether the line segment ld_w can be added to the next line segment cluster; if h = H, creating a new line segment cluster C_{H+1}, taking ld_w as the initial line segment of C_{H+1}, incrementing H by one and w by one, and jumping to step S42 to cluster the next line segment, until w = W and the clustering of all 3D line segments is complete; λ_α and λ_d are the preset clustering angle threshold and the preset clustering distance threshold, respectively;
S45, filtering out line segment clusters containing fewer than 3 line segments;
S46, for each remaining line segment cluster, fitting its 3D line segments into a new 3D line segment as the second 3D line segment representing the cluster.
Step S46 specifically comprises:
for a line segment cluster C_h, letting P_e denote the set containing all the end points of the 3D line segments in the cluster; performing singular value decomposition (SVD) on the matrix formed by all the points of P_e to obtain a direction vector, and constructing the 3D straight line l_3D that is parallel to this vector and passes through the centroid of P_e; projecting all points of P_e onto l_3D and taking the two projected points that are farthest apart as end points to determine a 3D line segment Seg_3D, where Seg_3D is the second 3D line segment representing the line segment cluster C_h.
In another aspect, the present invention further provides a three-dimensional reconstruction system for a robot monocular semi-dense map, comprising:
a semi-dense point cloud data acquisition module, which takes an image acquired by the monocular camera carried by the robot as a key frame, calculates the depth of the key frame to form a semi-dense depth map, and combines the key frame with the semi-dense depth map to obtain semi-dense point cloud data;
a key frame edge line segment extraction module, which extracts the edge line segments of the key frame, each edge line segment being a pixel sequence formed by a plurality of pixels;
a first 3D line segment set construction module, which fits each edge line segment in the key frame into a first 3D line segment to form a first 3D line segment set LD; and
a second 3D line segment acquisition module, which clusters the first 3D line segments of the same key frame, fits each 3D line segment cluster into a second 3D line segment after clustering, and uses the second 3D line segments in the key frame to represent image details, thereby simplifying the semi-dense point cloud data obtained by ORB-SLAM and realizing three-dimensional reconstruction of the semi-dense map.
Beneficial effects: the core idea of the disclosed three-dimensional reconstruction method and system for a robot monocular semi-dense map is to perform edge-pixel-assisted three-dimensional line fitting using the key frames and key frame depth maps provided by ORB-SLAM in real time, to cluster the fitted three-dimensional lines, to fit each cluster again, and to generate a semi-dense three-dimensional scene model composed of three-dimensional line segments, thereby simplifying the semi-dense point cloud data obtained by ORB-SLAM.
Drawings
FIG. 1 is a flow chart of a three-dimensional reconstruction method of a monocular semi-dense map of a robot disclosed by the invention;
FIG. 2 is a schematic view of an image coordinate system and a local coordinate system;
FIG. 3 is a schematic diagram of angle measure and distance measure calculation;
FIG. 4 is a schematic composition diagram of the robot monocular semi-dense map three-dimensional reconstruction system disclosed by the invention.
Detailed Description
The invention is further elucidated with reference to the drawings and the detailed description.
The three-dimensional reconstruction method of a robot monocular semi-dense map, as shown in FIG. 1, includes:
S1, taking the image collected by the monocular camera carried by the robot as a key frame, calculating the depth of the key frame to form a semi-dense depth map, and combining the key frame with the semi-dense depth map to obtain semi-dense point cloud data.
In this embodiment, the semi-dense depth map is formed using the method of the following document: Probabilistic Semi-Dense Mapping from Highly Accurate Feature-Based Monocular SLAM.
S2, extracting the edge line segments of the key frame, wherein each edge line segment is a pixel sequence formed by a plurality of pixels.
In this embodiment, the Edge Drawing (ED) algorithm is used to extract the key frame edge line segments; the detailed steps are given in the following document: Edge Drawing: A combined real-time edge and segment detector.
S3, fitting each edge line segment in the key frame into a first 3D line segment to form the first 3D line segment set LD, which comprises the following specific steps:
S31, setting the m-th edge line segment ls_m in the key frame to consist of n pixel points, with the corresponding pixel sequence ls_m = {p_1, p_2, ..., p_n}, where the coordinates of each pixel point are (x_p, y_p, Z_p), (x_p, y_p) being the coordinates of the pixel point in the image coordinate system and Z_p being the depth of the pixel point in the semi-dense depth map;
initializing two empty pixel sets, pixels and outliers, where pixels stores the pixel points to be fitted and outliers stores the current outlier pixel points;
S32, fitting straight lines from the first B pixels of ls_m:
first taking the first B pixels of ls_m; B can be chosen according to the width and height of the image, and in this embodiment B = int[0.02 × min(width, height)], where width and height are the width and height of the key frame image and int is the rounding function (for example, for a 1280 × 720 key frame, B = int(0.02 × 720) = 14); based on the coordinates (x_p, y_p), fitting a straight line l_im in the image coordinate system OXY; establishing for ls_m a local coordinate system p_1xz whose origin is the first pixel point p_1 of ls_m, whose x-axis is the ray pointing from the first pixel point p_1 to the last pixel point p_n, and whose positive z-axis is the depth direction at p_1; based on the coordinates (D_p, Z_p), fitting a straight line l_de in the local coordinate system p_1xz as the initial line segment, where D_p is the coordinate of the pixel point on the x-axis of the local coordinate system. The coordinate systems are shown in FIG. 2: O_wX_wY_wZ is the camera coordinate system, whose X_w and Y_w axes are parallel to the X and Y axes of the image coordinate system OXY and whose Z axis represents depth; the plane π passes through O_w and the ray from p_1 to p_n, and intersects the image plane in the straight line through p_1 and p_n.
S33, calculating the distances from the remaining pixels of ls_m to the two straight lines l_im and l_de:
calculating the distances d_im and d_de from the (B+k)-th pixel point of the pixel sequence ls_m to l_im and l_de, respectively; if d_im < e_1 and d_de < e_2, adding the pixel point to the pixels set; otherwise the (B+k)-th pixel point is an outlier and is added to the outliers set; k = 1, 2, ..., n-B; e_1 and e_2 are the preset first distance threshold and second distance threshold, which in this embodiment take the values e_1 = 0.002 × min(width, height) and e_2 = 0.003 × min(width, height) (for a 1280 × 720 key frame, e_1 = 1.44 and e_2 = 2.16).
S34, judging whether B consecutive pixels are outliers; if so, fitting new initial straight lines l_im' and l_de' to the pixels in the pixels set, in the image coordinate system OXY and the local coordinate system p_1xz respectively, then determining with l_im' and l_de' the depth value Z_p of each pixel on the plane and the distance D_p from each pixel to the x-axis, and fitting again to obtain the first 3D line segment ld_m of the edge line segment ls_m; adding ld_m to the set LD, clearing outliers and pixels, and jumping to step S31 to fit the next edge line segment ls_{m+1}.
Through the above steps S31-S34, the set LD is obtained.
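To make steps S31 to S34 concrete, the following Python sketch fits a single edge pixel chain to one first 3D line segment under the two-threshold scheme described above. It is a simplified illustration under assumptions rather than the patent's reference implementation: it makes one forward pass and fits one segment from all inlier pixels of the chain instead of stopping at the first run of B consecutive outliers, the back-projection of the end points into the camera frame with intrinsics fx, fy, cx, cy is an added convention, and the helper names fit_line_2d, point_line_distance, project_onto_line and fit_edge_chain_to_3d_segment are ours. The 2D fits are total least squares lines obtained from an SVD, consistent with the fitting method used by the invention.

```python
import numpy as np

def fit_line_2d(xs, ys):
    """Total-least-squares line a*x + b*y + c = 0 through the points (a, b normalized)."""
    x0, y0 = xs.mean(), ys.mean()
    u = np.stack([xs - x0, ys - y0], axis=1)
    _, _, vt = np.linalg.svd(u, full_matrices=False)
    a, b = vt[-1]                                   # normal of the best-fit line
    return a, b, -(a * x0 + b * y0)

def point_line_distance(a, b, c, x, y):
    return abs(a * x + b * y + c) / np.hypot(a, b)

def project_onto_line(a, b, c, x, y):
    t = (a * x + b * y + c) / (a * a + b * b)
    return x - a * t, y - b * t

def fit_edge_chain_to_3d_segment(chain_xy, chain_z, fx, fy, cx, cy, B, e1, e2):
    """Single-pass sketch of steps S31-S34 for one edge pixel chain ls_m.

    chain_xy: (n, 2) pixel coordinates (x_p, y_p); chain_z: (n,) depths Z_p.
    Returns the two 3D end points of the fitted first segment, or None.
    """
    p1, pn = chain_xy[0], chain_xy[-1]
    axis = (pn - p1) / np.linalg.norm(pn - p1)      # x-axis of the local frame p1xz
    D = (chain_xy - p1) @ axis                      # D_p: coordinate along that axis

    # S32: initial lines from the first B pixels (image plane and local D-Z plane).
    lim = fit_line_2d(chain_xy[:B, 0], chain_xy[:B, 1])
    lde = fit_line_2d(D[:B], chain_z[:B])

    # S33: keep pixels close to both lines (the "pixels" set); the rest are outliers.
    keep = list(range(B))
    for k in range(B, len(chain_z)):
        d_im = point_line_distance(*lim, chain_xy[k, 0], chain_xy[k, 1])
        d_de = point_line_distance(*lde, D[k], chain_z[k])
        if d_im < e1 and d_de < e2:
            keep.append(k)
    if len(keep) < 2 * B:                           # heuristic guard: too few inliers
        return None

    # S34: refit both lines on the kept pixels, then build the two 3D end points.
    keep = np.array(keep)
    lim2 = fit_line_2d(chain_xy[keep, 0], chain_xy[keep, 1])
    lde2 = fit_line_2d(D[keep], chain_z[keep])

    ends = []
    for k in (keep[0], keep[-1]):
        x_p, y_p = project_onto_line(*lim2, chain_xy[k, 0], chain_xy[k, 1])
        a, b, c = lde2
        z = -(a * D[k] + c) / b if abs(b) > 1e-9 else chain_z[k]   # depth from the D-Z line
        ends.append(np.array([(x_p - cx) * z / fx, (y_p - cy) * z / fy, z]))
    return ends[0], ends[1]
```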
S4, clustering the first 3D line segments of the same key frame, fitting each 3D line segment cluster into a second 3D line segment after clustering, and using the second 3D line segments in the key frame to represent image details, thereby simplifying the semi-dense point cloud data obtained by ORB-SLAM and realizing three-dimensional reconstruction of the semi-dense map; the specific steps are as follows:
S41, taking the first 3D line segment ld_1 in LD as the initial line segment cluster C_1 and setting the total number H of current line segment clusters to 1; starting from the w-th 3D line segment ld_w in LD, performing clustering according to steps S42 to S44, where w = 2, 3, ..., W and W is the total number of first 3D line segments in the current key frame, i.e. the number of elements in the set LD;
S42, setting h = 1;
S43, calculating the angle measure α and the distance measure d between ld_w and a first 3D line segment of line segment cluster C_h: α is the angle between the two segments and d = min(d1, d2), where d1 and d2 are distances computed from the end points p_i, q_i of ld_w and the end points p_{i+1}, q_{i+1} of the first 3D line segment of line segment cluster C_h (the exact formulas for α, d1 and d2 are given as images in the original publication); FIG. 3(a) and FIG. 3(b) show the calculation of the angle measure and the distance measure, respectively;
S44, if α < λ_α and d < λ_d, adding ld_w to line segment cluster C_h, incrementing w by one, and jumping to step S42 to cluster the next line segment; otherwise, if h < H, incrementing h by one and jumping to step S43 to judge whether the line segment ld_w can be added to the next line segment cluster; if h = H, creating a new line segment cluster C_{H+1}, taking ld_w as the initial line segment of C_{H+1}, incrementing H by one and w by one, and jumping to step S42 to cluster the next line segment, until w = W and the clustering of all 3D line segments is complete.
λ_α and λ_d are the preset clustering angle threshold and the preset clustering distance threshold, respectively; in this embodiment, λ_α = 10 and λ_d = 0.02.
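The greedy clustering of steps S41 to S44 can be sketched in Python as follows. Because the exact angle and distance formulas are published only as images, the two measures below are plausible stand-ins rather than the patent's own definitions: the angle between the two segment directions, and the smaller of the distances from the two end points of ld_w to the infinite line through the compared cluster segment. Comparing each new segment only against the first segment of each cluster, and reading λ_α = 10 as degrees and λ_d = 0.02 as a length in map units, are likewise assumptions of this sketch.

```python
import numpy as np

def angle_measure(seg_a, seg_b):
    """Angle in degrees between the directions of two 3D segments given as (p, q) pairs."""
    va, vb = seg_a[1] - seg_a[0], seg_b[1] - seg_b[0]
    c = abs(va @ vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def distance_measure(seg_a, seg_b):
    """d = min(d1, d2): distances from the end points of seg_a to the line through seg_b."""
    p, q = seg_b
    v = (q - p) / np.linalg.norm(q - p)
    def point_to_line(x):
        return np.linalg.norm((x - p) - ((x - p) @ v) * v)
    return min(point_to_line(seg_a[0]), point_to_line(seg_a[1]))

def cluster_segments(segments, lam_alpha=10.0, lam_d=0.02):
    """Steps S41-S44: greedily assign each segment to the first compatible cluster."""
    if not segments:
        return []
    clusters = [[segments[0]]]                   # S41: ld_1 starts cluster C_1
    for seg in segments[1:]:                     # S42-S44: scan the existing clusters
        for cluster in clusters:
            if (angle_measure(seg, cluster[0]) < lam_alpha
                    and distance_measure(seg, cluster[0]) < lam_d):
                cluster.append(seg)              # compatible: join this cluster
                break
        else:
            clusters.append([seg])               # no compatible cluster: open a new one
    return clusters                              # step S45 below then drops clusters with < 3 segments
```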
S45, filtering line segment clusters with the number of line segments smaller than 3;
and S46, for each remaining line segment cluster, fitting the 3D line segments into a new 3D line segment as a second 3D line segment representing the cluster.
Step S46 specifically comprises:
For a line segment cluster C_h, P_e denotes the set containing all the end points of the 3D line segments in the cluster. Singular value decomposition (SVD) is performed on the matrix formed by all the points of P_e to obtain a direction vector, and the 3D straight line l_3D that is parallel to this vector and passes through the centroid of P_e is constructed. All points of P_e are projected onto l_3D, and the two projected points that are farthest apart are taken as end points to determine a 3D line segment Seg_3D; Seg_3D is the second 3D line segment representing the line segment cluster C_h.
In this embodiment, a total least square method is used to perform straight line fitting.
Through the processing of the above steps, the semi-dense point cloud data obtained by ORB-SLAM are simplified, the map can be represented by the second 3D line segments of the key frames so that scene details are preserved, and three-dimensional reconstruction of the semi-dense map is realized.
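Step S46 above and the total least squares fitting can both be expressed with a single SVD: the dominant right singular vector of the mean-centred end-point matrix gives the direction of the total-least-squares line through the points. The short Python sketch below follows that reading; the function name fit_cluster_segment is ours, and the returned pair of extreme projections corresponds to the end points of Seg_3D described above.

```python
import numpy as np

def fit_cluster_segment(cluster):
    """Step S46: fit one representative second 3D segment to a cluster of 3D segments.

    cluster: list of (p, q) pairs of 3D end points (numpy arrays of shape (3,)).
    Returns the two end points of the representative segment Seg_3D.
    """
    pe = np.array([pt for seg in cluster for pt in seg])   # P_e: all end points of the cluster
    centroid = pe.mean(axis=0)

    _, _, vt = np.linalg.svd(pe - centroid, full_matrices=False)
    direction = vt[0]                  # dominant direction = total-least-squares line l_3D

    t = (pe - centroid) @ direction    # scalar projection of every end point onto l_3D
    return centroid + t.min() * direction, centroid + t.max() * direction
```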
As shown in FIG. 4, the system for implementing the above robot monocular semi-dense map three-dimensional reconstruction method includes:
a semi-dense point cloud data acquisition module 1, which takes an image acquired by the monocular camera carried by the robot as a key frame, calculates the depth of the key frame to form a semi-dense depth map, and combines the key frame with the semi-dense depth map to obtain semi-dense point cloud data;
a key frame edge line segment extraction module 2, which extracts the edge line segments of the key frame, each edge line segment being a pixel sequence formed by a plurality of pixels;
a first 3D line segment set construction module 3, which fits each edge line segment in the key frame into a first 3D line segment to form the first 3D line segment set LD; and
a second 3D line segment acquisition module 4, which clusters the first 3D line segments of the same key frame, fits each 3D line segment cluster into a second 3D line segment after clustering, and uses the second 3D line segments in the key frame to represent image details, thereby simplifying the semi-dense point cloud data obtained by ORB-SLAM and realizing three-dimensional reconstruction of the semi-dense map.

Claims (7)

1. A three-dimensional reconstruction method for a robot monocular semi-dense map, characterized by comprising the following steps:
S1, taking an image collected by a monocular camera carried by the robot as a key frame, calculating the depth of the key frame to form a semi-dense depth map, and combining the key frame with the semi-dense depth map to obtain semi-dense point cloud data;
S2, extracting edge line segments of the key frame, wherein each edge line segment is a pixel sequence formed by a plurality of pixels;
S3, fitting each edge line segment in the key frame into a first 3D line segment to form a first 3D line segment set LD;
S4, clustering the first 3D line segments of the same key frame, fitting each 3D line segment cluster into a second 3D line segment after clustering, and using the second 3D line segments in the key frame to represent image details, thereby simplifying the semi-dense point cloud data obtained by ORB-SLAM and realizing three-dimensional reconstruction of the semi-dense map.
2. The three-dimensional reconstruction method for the robot monocular semi-dense map as claimed in claim 1, wherein step S2 employs the Edge Drawing algorithm to extract the key frame edge line segments.
3. The three-dimensional reconstruction method for the robot monocular semi-dense map according to claim 1, wherein fitting the edge line segments in the key frame into first 3D line segments in step S3 comprises the following steps:
S31, setting the m-th edge line segment ls_m in the key frame to consist of n pixel points, with the corresponding pixel sequence ls_m = {p_1, p_2, ..., p_n}, where the coordinates of each pixel point are (x_p, y_p, Z_p), (x_p, y_p) being the coordinates of the pixel point in the image coordinate system and Z_p being the depth of the pixel point in the semi-dense depth map;
initializing two empty pixel sets, pixels and outliers, where pixels stores the pixel points to be fitted and outliers stores the current outlier pixel points;
S32, fitting straight lines from the first B pixels of ls_m:
first taking the first B pixels of ls_m and, based on the coordinates (x_p, y_p), fitting a straight line l_im in the image coordinate system OXY; establishing for ls_m a local coordinate system p_1xz whose origin is the first pixel point p_1 of ls_m, whose x-axis is the ray pointing from the first pixel point p_1 to the last pixel point p_n, and whose positive z-axis is the depth direction at p_1; based on the coordinates (D_p, Z_p), fitting a straight line l_de in the local coordinate system p_1xz as the initial line segment, where D_p is the coordinate of the pixel point on the x-axis of the local coordinate system;
S33, calculating the distances from the remaining pixels of ls_m to the two straight lines l_im and l_de:
calculating the distances d_im and d_de from the (B+k)-th pixel point of the pixel sequence ls_m to l_im and l_de, respectively; if d_im < e_1 and d_de < e_2, adding the pixel point to the pixels set; otherwise the (B+k)-th pixel point is an outlier and is added to the outliers set; k = 1, 2, ..., n-B;
S34, judging whether B consecutive pixels are outliers; if so, fitting new initial straight lines l_im' and l_de' to the pixels in the pixels set, in the image coordinate system OXY and the local coordinate system p_1xz respectively, then determining with l_im' and l_de' the depth value Z_p of each pixel on the plane and the distance D_p from each pixel to the x-axis, and fitting again to obtain the first 3D line segment ld_m of the edge line segment ls_m; adding ld_m to the set LD, clearing outliers and pixels, and jumping to step S31 to fit the next edge line segment ls_{m+1}.
4. The three-dimensional reconstruction method for the robot monocular semi-dense map according to claim 3, wherein the straight-line fitting is performed using a total least squares method.
5. The three-dimensional reconstruction method for the robot monocular semi-dense map according to claim 1, wherein step S4 specifically comprises:
S41, taking the first 3D line segment ld_1 in LD as the initial line segment cluster C_1 and setting the total number H of current line segment clusters to 1; starting from the w-th 3D line segment ld_w in LD, performing clustering according to steps S42 to S44, where w = 2, 3, ..., W and W is the total number of first 3D line segments in the current key frame;
S42, setting h = 1;
S43, calculating the angle measure α and the distance measure d between ld_w and a first 3D line segment of line segment cluster C_h: α is the angle between the two segments and d = min(d1, d2), where d1 and d2 are distances computed from the end points p_i, q_i of ld_w and the end points p_{i+1}, q_{i+1} of the first 3D line segment of line segment cluster C_h (the exact formulas for α, d1 and d2 are given as images in the original publication);
S44, if α < λ_α and d < λ_d, adding ld_w to line segment cluster C_h, incrementing w by one, and jumping to step S42 to cluster the next line segment; otherwise, if h < H, incrementing h by one and jumping to step S43 to judge whether the line segment ld_w can be added to the next line segment cluster; if h = H, creating a new line segment cluster C_{H+1}, taking ld_w as the initial line segment of C_{H+1}, incrementing H by one and w by one, and jumping to step S42 to cluster the next line segment, until w = W and the clustering of all 3D line segments is complete; λ_α and λ_d are the preset clustering angle threshold and the preset clustering distance threshold, respectively;
S45, filtering out line segment clusters containing fewer than 3 line segments;
S46, for each remaining line segment cluster, fitting its 3D line segments into a new 3D line segment as the second 3D line segment representing the cluster.
6. The three-dimensional reconstruction method for the robot monocular semi-dense map according to claim 1, wherein step S46 specifically comprises:
for a line segment cluster C_h, letting P_e denote the set containing all the end points of the 3D line segments in the cluster; performing singular value decomposition (SVD) on the matrix formed by all the points of P_e to obtain a direction vector, and constructing the 3D straight line l_3D that is parallel to this vector and passes through the centroid of P_e; projecting all points of P_e onto l_3D and taking the two projected points that are farthest apart as end points to determine a 3D line segment Seg_3D, where Seg_3D is the second 3D line segment representing the line segment cluster C_h.
7. A three-dimensional reconstruction system for a robot monocular semi-dense map, comprising:
a semi-dense point cloud data acquisition module, which takes an image acquired by a monocular camera carried by the robot as a key frame, calculates the depth of the key frame to form a semi-dense depth map, and combines the key frame with the semi-dense depth map to obtain semi-dense point cloud data;
a key frame edge line segment extraction module, which extracts the edge line segments of the key frame, each edge line segment being a pixel sequence formed by a plurality of pixels;
a first 3D line segment set construction module, which fits each edge line segment in the key frame into a first 3D line segment to form a first 3D line segment set LD; and
a second 3D line segment acquisition module, which clusters the first 3D line segments of the same key frame, fits each 3D line segment cluster into a second 3D line segment after clustering, and uses the second 3D line segments in the key frame to represent image details, thereby simplifying the semi-dense point cloud data obtained by ORB-SLAM and realizing three-dimensional reconstruction of the semi-dense map.
CN202111422402.7A (priority date 2021-11-26, filing date 2021-11-26): Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map. Status: Pending. Published as CN114119891A.

Priority Applications (1)

    • CN202111422402.7A (priority date 2021-11-26, filing date 2021-11-26): Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map

Applications Claiming Priority (1)

    • CN202111422402.7A (priority date 2021-11-26, filing date 2021-11-26): Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map

Publications (1)

    • CN114119891A, published 2022-03-01

Family

ID=80370198

Family Applications (1)

    • CN202111422402.7A (priority date 2021-11-26, filing date 2021-11-26, status Pending): Three-dimensional reconstruction method and reconstruction system for robot monocular semi-dense map

Country Status (1)

    • CN: CN114119891A

Cited By (2)

* Cited by examiner, † Cited by third party

    • CN116205923A * (priority date 2023-05-06, published 2023-06-02): Nondestructive testing method for internal defects of automobile hub based on X-RAY
    • CN116205923B * (priority date 2023-05-06, published 2023-07-14): Nondestructive testing method for internal defects of automobile hub based on X-RAY


Legal Events

    • PB01: Publication
    • SE01: Entry into force of request for substantive examination