CN111275764A - Depth camera visual odometry method based on line segment shadows - Google Patents

Depth camera visual odometry method based on line segment shadows

Info

Publication number
CN111275764A
CN111275764A (application CN202010089290.7A)
Authority
CN
China
Prior art keywords
plane
line
Plücker
matrix
constraint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010089290.7A
Other languages
Chinese (zh)
Other versions
CN111275764B (en)
Inventor
苑晶 (Yuan Jing)
周光召 (Zhou Guangzhao)
孙沁璇 (Sun Qinxuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nankai University
Original Assignee
Nankai University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nankai University filed Critical Nankai University
Priority to CN202010089290.7A priority Critical patent/CN111275764B/en
Publication of CN111275764A publication Critical patent/CN111275764A/en
Application granted granted Critical
Publication of CN111275764B publication Critical patent/CN111275764B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 11/00 Systems for determining distance or velocity not using reflection or reradiation
    • G01S 11/12 Systems for determining distance or velocity not using reflection or reradiation, using electromagnetic waves other than radio waves
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/02 Systems using the reflection of electromagnetic waves other than radio waves
    • G01S 17/06 Systems determining position data of a target
    • G01S 17/08 Systems determining position data of a target, for measuring distance only
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/48 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00, of systems according to group G01S17/00
    • G01S 7/4802 Details of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the field of autonomous positioning and navigation of mobile robots, and provides an RGB-D visual odometry method based on line shadows. The method constructs a minimized reprojection error equation from the geometric relationship between the plane-constrained occluding line and the occluded point line, solves the pose through nonlinear optimization to carry out pose estimation, and finally improves the pose estimation accuracy through an occluded-line matching constraint. The invention is mainly applied to occasions of autonomous positioning and navigation of mobile robots.

Description

Depth camera visual odometry method based on line segment shadows
Technical Field
The invention belongs to the field of autonomous positioning and navigation of mobile robots, and particularly relates to an RGB-D depth-camera visual odometry method.
Background
A visual odometer incrementally estimates the motion trajectory of a person or an object from a sequence of images captured by a rigidly mounted visual sensor. Compared with traditional inertial navigation and wheel odometers, a visual odometer avoids the measurement errors caused by inertial drift and wheel slip; visual sensors also have the advantages of low power consumption, low price and rich acquired information, so visual odometry has received wide attention and application in the fields of mobile robot positioning and navigation.
At present, visual odometry methods mainly comprise the feature point method and the direct method. The feature point method extracts feature points from the image, matches them, constructs a minimized reprojection error, and estimates the inter-frame pose with nonlinear optimization. It is the traditional visual odometry method and has many successful applications, but it also has problems: feature extraction and matching are time-consuming and prone to mismatches, the resulting feature points are sparse and do not allow map reuse, and its accuracy degrades greatly under motion blur, low illumination, repeated texture or lack of texture. In view of these problems, researchers proposed the direct method and the semi-direct method, which directly align the pixel brightness values of two frames; the direct method estimates the inter-frame pose by minimizing the photometric error. Compared with the feature point method, the direct method no longer extracts and matches features. Based on the assumption that the brightness of corresponding pixels in the two frames is unchanged, it uses the pixel values and the camera model to construct the photometric error and minimizes it to estimate the pose parameters. The direct method in a typical visual odometer is semi-dense; that is, only pixels with sufficient gradient information enter the photometric error, which keeps the pose estimate reasonably accurate while preserving real-time performance. When the camera does not move violently, the direct method obtains robust and accurate pose estimates, and because it uses the whole image more fully, it is more robust to motion blur, repeated texture and lack of texture. Its main disadvantage is that the two frames must conform to the brightness-constancy assumption: the degree of brightness difference between the images determines the accuracy of the direct method, which still works under small brightness differences but yields wrong pose estimates under large ones.
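For illustration only (this sketch is not part of the original patent text), the photometric error that the direct method minimizes can be written as follows; the warp function, which would project a pixel through its depth and a candidate pose, is left abstract and hypothetical:

    import numpy as np

    def photometric_error(img_ref, img_cur, pts_ref, warp):
        # Direct method: under brightness constancy, compare the intensity of a
        # reference-frame pixel with the intensity at its warped location in the
        # current frame; the pose is the minimizer of the summed squared error.
        err = 0.0
        for (x, y) in pts_ref:              # semi-dense: high-gradient pixels only
            xc, yc = warp(x, y)             # hypothetical projection via depth + pose
            err += (float(img_ref[y, x]) - float(img_cur[yc, xc])) ** 2
        return err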
The visual sensor used for visual odometry is generally a monocular camera, a binocular (stereo) camera or an RGB-D camera, and with any of the three the odometer can be implemented with either the feature point method or the direct method. Purely monocular visual odometry is complex: three-dimensional map points must be reconstructed while the pose is estimated, the estimated pose and 3-D points have no absolute scale, scale drift occurs easily, and good initialization is required. A binocular camera obtains the depth of image pixels through stereo matching, which entails a heavy image-processing workload, and it is not suitable for dark environments or scenes with weak texture. An RGB-D camera acquires a color image and a depth image of the scene simultaneously, the depth being obtained in hardware by infrared structured light or time-of-flight; because the depth measurement is easily disturbed by sunlight and its range is limited, RGB-D cameras are mostly used indoors. Depth-based visual odometry can estimate a motion trajectory with absolute scale, generally avoids scale drift, and gives a more accurate pose estimation result.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an RGB-D visual odometry method based on line shadows, which constructs a minimized reprojection error equation from the geometric relationship between the plane-constrained occluding line and the occluded point line, solves the pose through nonlinear optimization to carry out pose estimation, and finally improves the pose estimation accuracy through an occluded-line matching constraint.
The method comprises the following specific steps:
S1, acquiring color and depth image information of the environment through an RGB-D depth camera sensor, and extracting the plane and straight-line structures in the image using the color and depth information, where the straight lines comprise occluding lines and occluded points;
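A minimal sketch of step S1, added for illustration only: it assumes a pinhole camera with hypothetical intrinsics (fx, fy, cx, cy), uses OpenCV's Canny edge detector and probabilistic Hough transform for 2-D line candidates, and a simple RANSAC loop for plane fitting; the patent does not prescribe these particular extraction algorithms:

    import numpy as np
    import cv2

    def backproject(depth, fx, fy, cx, cy):
        # Back-project a metric depth image into an HxWx3 point cloud.
        h, w = depth.shape
        us, vs = np.meshgrid(np.arange(w), np.arange(h))
        return np.dstack([(us - cx) * depth / fx,
                          (vs - cy) * depth / fy,
                          depth])

    def detect_lines(gray):
        # 2-D line-segment candidates (occluding edges) from the color image.
        edges = cv2.Canny(gray, 50, 150)
        segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=5)
        return [] if segs is None else segs.reshape(-1, 4)

    def ransac_plane(points, iters=200, tol=0.01):
        # Fit a plane pi = [n^T, d]^T (||n|| = 1) to an Nx3 cloud by RANSAC.
        rng = np.random.default_rng(0)
        best, best_inliers = None, 0
        for _ in range(iters):
            p = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(p[1] - p[0], p[2] - p[0])
            if np.linalg.norm(n) < 1e-9:
                continue
            n /= np.linalg.norm(n)
            d = -n @ p[0]                    # sign convention: n . x + d = 0
            inliers = np.sum(np.abs(points @ n + d) < tol)
            if inliers > best_inliers:
                best, best_inliers = np.hstack([n, d]), inliers
        return best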
S2, defining the Plücker coordinates of a three-dimensional line L as follows:

L = [u^T, v^T]^T

where the vector u ∈ R^3 is the normal of the plane π_L formed by the origin O and the line L, and the vector v ∈ R^3 is the direction vector of the line L; L satisfies the Plücker constraint:

u^T v = 0
The Plücker matrix of L is defined as:

L_{\times} = \begin{bmatrix} [u]_{\times} & v \\ -v^{T} & 0 \end{bmatrix}    (1)

where [a]_{\times} denotes the skew-symmetric cross-product matrix of a vector a.
The plane π_L formed by the origin and the occluding line is defined as:

π_L = L^{*} \bar{O}    (2)

where \bar{O} = [0, 0, 0, 1]^T is the homogeneous coordinate of the origin and L^{*} is the dual Plücker matrix of L, computed as:

L^{*} = \begin{bmatrix} [v]_{\times} & u \\ -u^{T} & 0 \end{bmatrix}    (3)
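For concreteness, a numpy transcription of the S2 definitions above (the Plücker coordinates, the Plücker matrix of formula (1), the plane of formula (2) and the dual matrix of formula (3)) is given below; it is an illustrative sketch, not patent text, and the two 3-D points are hypothetical:

    import numpy as np

    def skew(a):
        # [a]_x, the 3x3 skew-symmetric cross-product matrix of a.
        return np.array([[0.0, -a[2], a[1]],
                         [a[2], 0.0, -a[0]],
                         [-a[1], a[0], 0.0]])

    def plucker_from_points(p1, p2):
        # L = [u^T, v^T]^T: v is the direction and u = p1 x p2 the moment,
        # i.e. the normal of the plane pi_L through the origin and the line.
        v = p2 - p1
        u = np.cross(p1, p2)
        assert abs(u @ v) < 1e-9              # Plucker constraint u^T v = 0
        return u, v

    def plucker_matrix(u, v):
        # Formula (1): the 4x4 Plucker matrix of L.
        L = np.zeros((4, 4))
        L[:3, :3] = skew(u); L[:3, 3] = v; L[3, :3] = -v
        return L

    def dual_plucker_matrix(u, v):
        # Formula (3): the dual Plucker matrix L* of L.
        Ls = np.zeros((4, 4))
        Ls[:3, :3] = skew(v); Ls[:3, 3] = u; Ls[3, :3] = -u
        return Ls

    u, v = plucker_from_points(np.array([1.0, 0.0, 2.0]), np.array([1.0, 1.0, 2.0]))
    O_h = np.array([0.0, 0.0, 0.0, 1.0])       # homogeneous origin
    pi_L = dual_plucker_matrix(u, v) @ O_h     # formula (2): plane through O and L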
S3, defining the Plücker coordinates of the line L_π formed by the occluded points on the constraint plane π = [n^T, d]^T as follows:

L_π = [-d u^T, (u × n)^T]^T    (4)

where n is the normal vector of the plane and d is the distance from the plane to the origin; the dual Plücker matrix of L_π is:

L_π^{*} = \begin{bmatrix} [u \times n]_{\times} & -d\,u \\ d\,u^{T} & 0 \end{bmatrix}    (5)
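Formulas (4) and (5), as reconstructed above, transcribe directly into code; the sketch below assumes u is the moment (plane-normal) vector of the occluding line and π = [n^T, d]^T the constraint plane:

    import numpy as np

    def skew(a):
        # [a]_x, the 3x3 cross-product matrix of a.
        return np.array([[0.0, -a[2], a[1]],
                         [a[2], 0.0, -a[0]],
                         [-a[1], a[0], 0.0]])

    def occluded_line(u, n, d):
        # Formula (4): Plucker coordinates of the occluded-point line
        # L_pi = [-d u^T, (u x n)^T]^T on the constraint plane pi = [n^T, d]^T.
        return -d * u, np.cross(u, n)

    def occluded_line_dual_matrix(u, n, d):
        # Formula (5): dual Plucker matrix of L_pi, assembled from its
        # moment (-d u) and direction (u x n) in the layout of formula (3).
        m, w = occluded_line(u, n, d)
        Ls = np.zeros((4, 4))
        Ls[:3, :3] = skew(w)
        Ls[:3, 3] = m
        Ls[3, :3] = -m
        return Ls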
S4, estimating the pose using the constraint plane π and the occluding line L, where the superscripts c and r denote the current frame and the reference frame respectively. The rigid-body transformation T_cr(π) of the plane is:

T_cr(π^r) = \begin{bmatrix} R_{cr} n^{r} \\ d^{r} - t_{cr}^{T} R_{cr} n^{r} \end{bmatrix}    (6)

and the rigid-body transformation T_cr(L) of the occluding line is:

T_cr(L^r) = \begin{bmatrix} R_{cr} u^{r} + [t_{cr}]_{\times} R_{cr} v^{r} \\ R_{cr} v^{r} \end{bmatrix}    (7)
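For illustration, the reconstructed formulas (6) and (7) in code, assuming R_cr and t_cr map reference-frame coordinates into the current frame:

    import numpy as np

    def transform_plane(R, t, pi):
        # Formula (6): T_cr(pi) for a plane pi = [n^T, d]^T.
        n, d = pi[:3], pi[3]
        n_c = R @ n
        return np.hstack([n_c, d - t @ n_c])

    def transform_line(R, t, u, v):
        # Formula (7): T_cr(L) for a line L = [u^T, v^T]^T in Plucker coordinates.
        v_c = R @ v
        u_c = R @ u + np.cross(t, v_c)
        return u_c, v_c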
To solve the pose, a reprojection error function is defined [formula (8), image in source]; it consists of a rotation term E_1(R_cr) [formula (9), image in source] and a translation term E_2(R_cr, t_cr) [formula (10), image in source]. The rotation R_cr is obtained by minimizing formula (9), and the translation t_cr is then obtained by minimizing formula (10) with R_cr fixed.
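Because formulas (8)-(10) survive only as images, the following sketch merely illustrates the decoupled structure the text describes (rotation first, then translation). The residuals used here (plane-normal and line-direction alignment for R_cr; plane-offset and line-moment alignment for t_cr, via formulas (6) and (7)) are assumptions for illustration and are not taken from the patent:

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def solve_rotation(pairs_n, pairs_v):
        # Assumed stand-in for formula (9): align matched plane normals
        # (n^r, n^c) and occluding-line directions (v^r, v^c).
        def res(rvec):
            R = Rotation.from_rotvec(rvec).as_matrix()
            r = [R @ nr - nc for nr, nc in pairs_n]
            r += [R @ vr - vc for vr, vc in pairs_v]
            return np.concatenate(r)
        return Rotation.from_rotvec(least_squares(res, np.zeros(3)).x).as_matrix()

    def solve_translation(R, pairs_plane, pairs_line):
        # Assumed stand-in for formula (10): with R_cr fixed, match the plane
        # offsets of formula (6) and the line moments of formula (7).
        def res(t):
            r = [np.atleast_1d((dr - t @ (R @ nr)) - dc)
                 for (nr, dr), (nc, dc) in pairs_plane]
            r += [(R @ ur + np.cross(t, R @ vr)) - uc
                  for (ur, vr), (uc, _) in pairs_line]
            return np.concatenate(r)
        return least_squares(res, np.zeros(3)).x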
S5, further optimizing the pose through shielded line matching, wherein the coordinates of the reference frame relative to the original point of the current frame are
Figure BDA0002383176350000033
Plane under reference frame coordinate system
Figure BDA0002383176350000034
By a shielding line LcAnd the origin of the reference frame
Figure BDA0002383176350000035
The calculation method is as follows:
Figure BDA0002383176350000036
The shielded line is defined by a constraint plane pi and a plane
Figure BDA0002383176350000037
The intersection results in a dual Pl ü cker matrix as follows:
Figure BDA0002383176350000038
wherein
Figure BDA0002383176350000039
Figure BDA00023831763500000310
The observations of the occlusion line, occluded line and constraint plane of the reference frame are represented as: l isr
Figure BDA00023831763500000311
πrBy matching the occluded lines of the current frame with the reference frame
Figure BDA00023831763500000312
Refining R found abovecr,tcrValue if Rcr,tcrCorrectly converge and satisfy pic=Tcrr),Lc=Tcr(Lr) Then, then
Figure BDA00023831763500000313
And
Figure BDA00023831763500000314
is matched, a new objective function is defined:
Figure BDA00023831763500000315
the final rotation and translation matrix is found by minimizing equation (15).
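A sketch of the geometric core of S5, for illustration only: formula (11) is the plane through a line and a point, and formula (12) the plane-plane intersection identity L* = π_a π_b^T − π_b π_a^T; reading the moment and direction back off the dual matrix follows the layout of formula (3), which is an assumption here because formulas (13) and (14) are not reproduced:

    import numpy as np

    def plane_through_line_and_point(L_dual, x_h):
        # Formula (11): the plane spanned by a line (given by its dual
        # Plucker matrix L*) and a homogeneous point, e.g. pi_L^r = (L^c)* o^r.
        return L_dual @ x_h

    def line_from_plane_intersection(pi_a, pi_b):
        # Formula (12): dual Plucker matrix of the intersection of two planes,
        # L* = pi_a pi_b^T - pi_b pi_a^T; the moment u and the direction v are
        # then read off the layout of formula (3) (an assumed read-off).
        Ls = np.outer(pi_a, pi_b) - np.outer(pi_b, pi_a)
        u = Ls[:3, 3]
        v = np.array([Ls[2, 1], Ls[0, 2], Ls[1, 0]])
        return u, v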
The characteristics and beneficial effects of the invention are as follows:
1) The invention provides a novel RGB-D visual odometry method which solves the pose through the geometric relationship between the constraint plane and the occluding line in the image; different from traditional odometry methods based on matching plane and line-segment features, the method is more efficient.
2) The projection relationship of the occluded lines is fused to further optimize the pose and improve the accuracy of the algorithm; the planar constraint relationship avoids wrong line-segment feature matching and improves the robustness of the algorithm.
Description of the drawings:
FIG. 1 is an illustration of the RGB-D visual odometer based on line segment shadows.
Sub-figure A: spatial line shadow model; sub-figure B: occluding-line motion estimation model; sub-figure C: occluded-line matching model.
Detailed Description
The technical scheme adopted by the invention is as follows: an RGB-D visual odometry method based on line shadows, comprising the following steps.
s1, acquiring color and depth image information of an environment through an RGB-D sensor, and extracting a plane and a straight line structure in an image by using the color and depth information, wherein the straight line comprises a shielding position and a shielded point.
S2, defining the Plücker coordinates of a three-dimensional line L as follows:

L = [u^T, v^T]^T

where the vector u ∈ R^3 is the normal of the plane π_L (formed by the origin O and the line L), and the vector v ∈ R^3 is the direction vector of the line L; L satisfies the Plücker constraint:

u^T v = 0
The Plücker matrix of L is defined as:

L_{\times} = \begin{bmatrix} [u]_{\times} & v \\ -v^{T} & 0 \end{bmatrix}    (1)
The plane π_L formed by the origin and the occluding line is defined as:

π_L = L^{*} \bar{O}    (2)

where \bar{O} = [0, 0, 0, 1]^T is the homogeneous coordinate of the origin and L^{*} is the dual Plücker matrix of L, computed as:

L^{*} = \begin{bmatrix} [v]_{\times} & u \\ -u^{T} & 0 \end{bmatrix}    (3)
S3, defining the Plücker coordinates of the line L_π formed by the occluded points on the constraint plane π = [n^T, d]^T as follows:

L_π = [-d u^T, (u × n)^T]^T    (4)

where n is the normal vector of the plane and d is the distance from the plane to the origin; the dual Plücker matrix of L_π is:

L_π^{*} = \begin{bmatrix} [u \times n]_{\times} & -d\,u \\ d\,u^{T} & 0 \end{bmatrix}    (5)
S4, carrying out pose estimation using the constraint plane π and the occluding line L. The superscripts c and r denote the current frame and the reference frame respectively; the rigid-body transformation T_cr(π) of the plane is:

T_cr(π^r) = \begin{bmatrix} R_{cr} n^{r} \\ d^{r} - t_{cr}^{T} R_{cr} n^{r} \end{bmatrix}    (6)

and the rigid-body transformation T_cr(L) of the occluding line is:

T_cr(L^r) = \begin{bmatrix} R_{cr} u^{r} + [t_{cr}]_{\times} R_{cr} v^{r} \\ R_{cr} v^{r} \end{bmatrix}    (7)
To solve the pose, a reprojection error function is defined [formula (8), image in source]; it consists of a rotation term E_1(R_cr) [formula (9), image in source] and a translation term E_2(R_cr, t_cr) [formula (10), image in source]. The rotation R_cr is obtained by minimizing formula (9), and the translation t_cr is then obtained by minimizing formula (10) with R_cr fixed.
S5, further optimizing the pose through occluded-line matching. The origin of the reference frame relative to the current frame is denoted ō^r [formula image in source]. The plane π_L^r under the reference-frame coordinate system is spanned by the occluding line L^c and the reference-frame origin ō^r, and is computed as:

π_L^r = (L^c)^{*} ō^r    (11)

The occluded line L_π is obtained by intersecting the constraint plane π with the plane π_L^r; its dual Plücker matrix is:

(L_π^c)^{*} = π^c (π_L^r)^T − π_L^r (π^c)^T    (12)

where the auxiliary quantities of formulas (13) and (14) survive only as formula images in the source. The observations of the occluding line, the occluded line and the constraint plane in the reference frame are denoted L^r, L_π^r and π^r. By matching the occluded line of the current frame against that of the reference frame, the values R_cr, t_cr found above are refined: if R_cr, t_cr converge correctly and satisfy π^c = T_cr(π^r) and L^c = T_cr(L^r), then L_π^c and T_cr(L_π^r) are matched, and a new objective function F(R_cr, t_cr) is defined [formula (15), image in source]. The final rotation and translation matrices are found by minimizing formula (15).
The present invention will be described in further detail with reference to the accompanying drawings and specific examples.
First, existing algorithms are used to extract the plane and line-segment structures in the image and to fit their parameters, and the representations of spatial line segments and planes are defined in Plücker coordinates. The occluding line L in sub-figure A of FIG. 1 is represented as:

L = [u^T, v^T]^T

where the vector u ∈ R^3 is the normal of the plane π_L (formed by the origin O and the line L) and the vector v ∈ R^3 is the direction vector of the line L; L satisfies the Plücker constraint:

u^T v = 0
The Plücker matrix of L is defined as:

L_{\times} = \begin{bmatrix} [u]_{\times} & v \\ -v^{T} & 0 \end{bmatrix}
The plane π_L is defined as:

π_L = L^{*} \bar{O}

where \bar{O} = [0, 0, 0, 1]^T is the homogeneous coordinate of the origin and L^{*} is the dual Plücker matrix of L, computed as:

L^{*} = \begin{bmatrix} [v]_{\times} & u \\ -u^{T} & 0 \end{bmatrix}
Then the Plücker coordinates of the line L_π formed by the occluded points on the constraint plane π = [n^T, d]^T are defined as:

L_π = [-d u^T, (u × n)^T]^T

where n is the normal vector of the plane and d is the distance from the plane to the origin; the dual Plücker matrix of L_π is:

L_π^{*} = \begin{bmatrix} [u \times n]_{\times} & -d\,u \\ d\,u^{T} & 0 \end{bmatrix}
Second, the pose is estimated using the constraint plane π, the occluding line L and the plane π_L, as illustrated in sub-figure B. The representation of each variable in the current coordinate system is defined as follows: ō^r and ō^c denote the origins of the reference frame and the current frame, L^c denotes the occluding line, π^c denotes the constraint plane, and π_L^c and π_L^r denote the planes formed by the corresponding origin and the occluding line; the superscripts c and r denote the current frame and the reference frame respectively. The rigid-body transformation T_cr(π) of the plane π is:

T_cr(π^r) = \begin{bmatrix} R_{cr} n^{r} \\ d^{r} - t_{cr}^{T} R_{cr} n^{r} \end{bmatrix}

and the rigid-body transformation T_cr(L) of the occluding line L is:

T_cr(L^r) = \begin{bmatrix} R_{cr} u^{r} + [t_{cr}]_{\times} R_{cr} v^{r} \\ R_{cr} v^{r} \end{bmatrix}
An objective function representing the reprojection error of the constraint plane and the occluding line between the two frames is then defined [formula image in source]; it consists of a rotation term E_1(R_cr) and a translation term E_2(R_cr, t_cr) [formula images in source]. Minimizing E_1(R_cr) yields the rotation R_cr; minimizing E_2(R_cr, t_cr) then yields the translation t_cr.
And finally, improving the pose estimation precision through shielded line matching constraint. The origin coordinates of the reference frame in the subgraph C relative to the current frame are
Figure BDA0002383176350000063
Analytic plane under reference frame coordinate system
Figure BDA0002383176350000064
By a shielding line LcAnd the origin of the reference frame
Figure BDA0002383176350000065
The calculation method is as follows:
Figure BDA0002383176350000066
shielded line
Figure BDA0002383176350000067
By constraining planes pi and plane
Figure BDA0002383176350000068
The intersection results in a dual Pl ü cker matrix as follows:
Figure BDA0002383176350000069
wherein
Figure BDA00023831763500000610
Figure BDA00023831763500000611
The observations of the occlusion line, occluded line and constraint plane of the reference frame are represented as: l isr
Figure BDA00023831763500000612
πrBy matching the current frame with the referenceOccluded line of frame
Figure BDA00023831763500000613
Ridging Rcr,tcrThe value of (c). If R iscr,tcrCorrectly converge and satisfy pic=Tcrr),Lc=Tcr(Lr) Then, then
Figure BDA00023831763500000614
And
Figure BDA00023831763500000615
if the matching is correct, the orange part in the sub-image C is represented by a pose TcrThe transformed line segment or plane obtains a new reprojection error function as follows:
Figure BDA00023831763500000616
by minimizing F (R)cr,tcr) And solving a final rotation and translation matrix.
The above description is only a preferred embodiment of the present invention and is not intended to limit the invention; any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present invention are intended to be included within its scope of protection.

Claims (2)

1. A depth camera visual odometry method based on line segment shadows, characterized in that the plane and line-segment structures in an image are extracted and their parameters fitted; the representations of spatial line segments and planes are defined in Plücker coordinates; pose estimation is carried out using the constraint plane π, the occluding line L and the plane π_L; and finally the pose estimation accuracy is improved through an occluded-line matching constraint.
2. The depth camera visual odometry method based on line segment shadows as claimed in claim 1, characterized by comprising the following steps:
S1, acquiring color and depth image information of the environment through an RGB-D depth camera sensor, and extracting the plane and straight-line structures in the image using the color and depth information, where the straight lines comprise occluding lines and occluded points;
S2, defining the Plücker coordinates of a three-dimensional line L as follows:

L = [u^T, v^T]^T

where the vector u ∈ R^3 is the normal of the plane π_L formed by the origin O and the line L, and the vector v ∈ R^3 is the direction vector of the line L; L satisfies the Plücker constraint:

u^T v = 0
The Plücker matrix of L is defined as:

L_{\times} = \begin{bmatrix} [u]_{\times} & v \\ -v^{T} & 0 \end{bmatrix}    (1)
The plane π_L formed by the origin and the occluding line is defined as:

π_L = L^{*} \bar{O}    (2)

where \bar{O} = [0, 0, 0, 1]^T is the homogeneous coordinate of the origin and L^{*} is the dual Plücker matrix of L, computed as:

L^{*} = \begin{bmatrix} [v]_{\times} & u \\ -u^{T} & 0 \end{bmatrix}    (3)
S3, defining the Plücker coordinates of the line L_π formed by the occluded points on the constraint plane π = [n^T, d]^T as follows:

L_π = [-d u^T, (u × n)^T]^T    (4)

where n is the normal vector of the plane and d is the distance from the plane to the origin; the dual Plücker matrix of L_π is:

L_π^{*} = \begin{bmatrix} [u \times n]_{\times} & -d\,u \\ d\,u^{T} & 0 \end{bmatrix}    (5)
S4, estimating the pose using the constraint plane π and the occluding line L, where the superscripts c and r denote the current frame and the reference frame respectively. The rigid-body transformation T_cr(π) of the plane is:

T_cr(π^r) = \begin{bmatrix} R_{cr} n^{r} \\ d^{r} - t_{cr}^{T} R_{cr} n^{r} \end{bmatrix}    (6)

and the rigid-body transformation T_cr(L) of the occluding line is:

T_cr(L^r) = \begin{bmatrix} R_{cr} u^{r} + [t_{cr}]_{\times} R_{cr} v^{r} \\ R_{cr} v^{r} \end{bmatrix}    (7)
To solve the pose, a reprojection error function is defined [formula (8), image in source]; it consists of a rotation term E_1(R_cr) [formula (9), image in source] and a translation term E_2(R_cr, t_cr) [formula (10), image in source]. The rotation R_cr is obtained by minimizing formula (9), and the translation t_cr is then obtained by minimizing formula (10) with R_cr fixed.
S5, further optimizing the pose through shielded line matching, wherein the coordinates of the reference frame relative to the original point of the current frame are
Figure FDA0002383176340000021
Plane under reference frame coordinate system
Figure FDA0002383176340000022
By a shielding line LcAnd the source of the reference frameDot
Figure FDA0002383176340000023
The calculation method is as follows:
Figure FDA0002383176340000024
the shielded line is defined by a constraint plane pi and a plane
Figure FDA0002383176340000025
The intersection results in a dual Pl ü cker matrix as follows:
Figure FDA0002383176340000026
wherein
Figure FDA0002383176340000027
Figure FDA0002383176340000028
The observations of the occlusion line, occluded line and constraint plane of the reference frame are represented as: l isr
Figure FDA0002383176340000029
πrBy matching the occluded lines of the current frame with the reference frame
Figure FDA00023831763400000210
Refining R found abovecr,tcrValue if Rcr,tcrCorrectly converge and satisfy pic=Tcrr),Lc=Tcr(Lr) Then, then
Figure FDA00023831763400000211
And
Figure FDA00023831763400000212
is matched, a new objective function is defined:
Figure FDA00023831763400000213
the final rotation and translation matrix is found by minimizing equation (15).
CN202010089290.7A 2020-02-12 2020-02-12 Depth camera visual odometry method based on line segment shadows Active CN111275764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010089290.7A CN111275764B (en) 2020-02-12 2020-02-12 Depth camera visual odometry method based on line segment shadows

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010089290.7A CN111275764B (en) 2020-02-12 2020-02-12 Depth camera visual odometry method based on line segment shadows

Publications (2)

Publication Number Publication Date
CN111275764A 2020-06-12
CN111275764B CN111275764B (en) 2023-05-16

Family

ID=71002047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010089290.7A Active CN111275764B (en) 2020-02-12 2020-02-12 Depth camera visual odometry method based on line segment shadows

Country Status (1)

Country Link
CN (1) CN111275764B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113790719A (en) * 2021-08-13 2021-12-14 北京自动化控制设备研究所 Unmanned aerial vehicle inertia/vision landing navigation method based on line characteristics
CN117197229A (en) * 2023-09-22 2023-12-08 北京科技大学顺德创新学院 Multi-stage estimation monocular vision odometer method based on brightness alignment
WO2024018605A1 (en) * 2022-07-21 2024-01-25 株式会社ソニー・インタラクティブエンタテインメント Image information processing device, image information processing method, and program

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170277197A1 (en) * 2016-03-22 2017-09-28 Sharp Laboratories Of America, Inc. Autonomous Navigation using Visual Odometry
CN109029417A (en) * 2018-05-21 2018-12-18 南京航空航天大学 Unmanned plane SLAM method based on mixing visual odometry and multiple dimensioned map
US20190003836A1 (en) * 2016-03-11 2019-01-03 Kaarta,Inc. Laser scanner with real-time, online ego-motion estimation
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109648558A (en) * 2018-12-26 2019-04-19 清华大学 Robot non-plane motion localization method and its motion locating system
CN110060277A (en) * 2019-04-30 2019-07-26 哈尔滨理工大学 A kind of vision SLAM method of multiple features fusion
CN110782494A (en) * 2019-10-16 2020-02-11 北京工业大学 Visual SLAM method based on point-line fusion

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190003836A1 (en) * 2016-03-11 2019-01-03 Kaarta,Inc. Laser scanner with real-time, online ego-motion estimation
US20170277197A1 (en) * 2016-03-22 2017-09-28 Sharp Laboratories Of America, Inc. Autonomous Navigation using Visual Odometry
CN109029417A (en) * 2018-05-21 2018-12-18 南京航空航天大学 Unmanned plane SLAM method based on mixing visual odometry and multiple dimensioned map
CN109166149A (en) * 2018-08-13 2019-01-08 武汉大学 A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU
CN109648558A (en) * 2018-12-26 2019-04-19 清华大学 Robot non-plane motion localization method and its motion locating system
CN110060277A (en) * 2019-04-30 2019-07-26 哈尔滨理工大学 A kind of vision SLAM method of multiple features fusion
CN110782494A (en) * 2019-10-16 2020-02-11 北京工业大学 Visual SLAM method based on point-line fusion

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A. PEREZ-YUS ET AL: "Extrinsic Calibration of Multiple RGB-D Cameras From Line Observations", 《ROBOTICS AND AUTOMATION LETTERS》 *
G. ZHANG ET AL: "Building a 3-D Line-Based Map Using Stereo SLAM", 《TRANSACTIONS ON ROBOTICS》 *
PEDRO F ET AL.: "Probabilistic RGB-D odometry based on points, lines and planes under depth uncertainty", 《ROBOTICS AND AUTONOMOUS SYSTEMS》 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113790719A (en) * 2021-08-13 2021-12-14 北京自动化控制设备研究所 Unmanned aerial vehicle inertia/vision landing navigation method based on line characteristics
CN113790719B (en) * 2021-08-13 2023-09-12 北京自动化控制设备研究所 Unmanned aerial vehicle inertial/visual landing navigation method based on line characteristics
WO2024018605A1 (en) * 2022-07-21 2024-01-25 株式会社ソニー・インタラクティブエンタテインメント Image information processing device, image information processing method, and program
CN117197229A (en) * 2023-09-22 2023-12-08 北京科技大学顺德创新学院 Multi-stage estimation monocular vision odometer method based on brightness alignment
CN117197229B (en) * 2023-09-22 2024-04-19 北京科技大学顺德创新学院 Multi-stage estimation monocular vision odometer method based on brightness alignment

Also Published As

Publication number Publication date
CN111275764B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN108717712B (en) Visual inertial navigation SLAM method based on ground plane hypothesis
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN112902953B (en) Autonomous pose measurement method based on SLAM technology
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN108682026B (en) Binocular vision stereo matching method based on multi-matching element fusion
CN111275764B (en) Depth camera visual mileage measurement method based on line segment shadows
Liu et al. Direct visual odometry for a fisheye-stereo camera
CN109974707A (en) A kind of indoor mobile robot vision navigation method based on improvement cloud matching algorithm
CN108460779A (en) A kind of mobile robot image vision localization method under dynamic environment
CN109523589B (en) Design method of more robust visual odometer
CN109974743B (en) Visual odometer based on GMS feature matching and sliding window pose graph optimization
CN108629835A (en) Based on EO-1 hyperion, true coloured picture and the indoor method for reconstructing and system for putting cloud complementation
CN105740856A (en) Method for reading readings of pointer instrument based on machine vision
CN102521586B (en) High-resolution three-dimensional face scanning method for camera phone
CN109887029A (en) A kind of monocular vision mileage measurement method based on color of image feature
CN109087325A (en) A kind of direct method point cloud three-dimensional reconstruction and scale based on monocular vision determines method
CN107527366B (en) Camera tracking method for depth camera
CN113393522A (en) 6D pose estimation method based on monocular RGB camera regression depth information
CN110766024A (en) Visual odometer feature point extraction method based on deep learning and visual odometer
CN116222543B (en) Multi-sensor fusion map construction method and system for robot environment perception
Heng et al. Semi-direct visual odometry for a fisheye-stereo camera
CN111882602A (en) Visual odometer implementation method based on ORB feature points and GMS matching filter
CN114494150A (en) Design method of monocular vision odometer based on semi-direct method
CN111951339A (en) Image processing method for performing parallax calculation by using heterogeneous binocular cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant