CN111275764A - Depth camera visual mileage measurement method based on line segment shadow - Google Patents
Depth camera visual odometry method based on line segment shadows
- Publication number
- CN111275764A (application number CN202010089290.7A)
- Authority
- CN
- China
- Prior art keywords
- plane
- line
- Plücker
- matrix
- constraint
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S11/00—Systems for determining distance or velocity not using reflection or reradiation
- G01S11/12—Systems for determining distance or velocity not using reflection or reradiation using electromagnetic waves other than radio waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/4802—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Manipulator (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the field of autonomous positioning and navigation of mobile robots, and provides an RGB-D visual odometry method based on straight-line shadows, which constructs a minimized reprojection error equation from the geometric relationship between the occlusion line constrained by a plane and the occluded point-line, solves the pose through nonlinear optimization to carry out pose estimation, and finally improves the accuracy of pose estimation through the occluded-line matching constraint. The invention is mainly applied to autonomous positioning and navigation of mobile robots.
Description
Technical Field
The invention belongs to the field of autonomous positioning and navigation of mobile robots, and particularly relates to a visual odometry method for an RGB-D depth camera.
Background
A visual odometer incrementally estimates, from a sequence of input images, the motion trajectory of the person or object to which a visual sensor is fixed. Compared with traditional inertial navigation and wheel odometers, the visual odometer avoids the measurement errors caused by inertial-navigation drift and tire slip; moreover, visual sensors offer low power consumption, low cost and rich acquired information, so visual odometry has received wide attention and application in mobile-robot positioning and navigation.
At present, visual odometry methods mainly comprise the feature-point method and the direct method. The feature-point method extracts feature points from the images, matches them, constructs a minimized reprojection error, and estimates the inter-frame pose with nonlinear optimization. It is the traditional visual odometry method and has many successful applications, but it also has problems: feature extraction and matching are time-consuming and prone to mismatches; the resulting feature points are sparse, so the map cannot be reused; and accuracy degrades greatly under motion blur, low illumination, repeated texture or lack of texture. In view of these problems, researchers proposed the direct method and the semi-direct method, which directly align the pixel intensity values of two frames: the direct method estimates the inter-frame pose by minimizing a photometric error. Unlike the feature-point method, it requires no feature extraction or matching; instead, under the assumption that corresponding pixels of the two frames have constant brightness, the camera model uses the pixel values directly to construct the photometric error, whose minimization yields the pose parameters. Direct methods in general visual odometry are semi-dense, i.e. only pixels with sufficient gradient information enter the photometric error, which keeps pose estimation relatively accurate while preserving the real-time performance of the direct method.
The direct method obtains robust and accurate pose estimates when the camera does not move violently, and, because it exploits the whole image information more fully, it is more robust to motion blur, repeated texture and lack of texture. Its main disadvantage is that the two frames to be aligned must satisfy the brightness-constancy assumption: the degree of brightness difference between the images determines the accuracy of the result. The direct method still works under small brightness differences, but yields wrong pose estimates under large ones.
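To make the preceding description concrete, the core photometric-error computation of the direct method can be sketched as follows. This is an illustrative sketch, not the method of the present invention; the function name, the nearest-neighbour intensity lookup and the absence of a robust loss are simplifying assumptions:

```python
import numpy as np

def photometric_error(I_ref, I_cur, pixels, depths, K, R, t):
    """Sum of squared intensity differences between reference pixels and their
    reprojections in the current frame, under the brightness-constancy
    assumption (nearest-neighbour lookup; no interpolation or robust loss)."""
    K_inv = np.linalg.inv(K)
    err = 0.0
    for (px, py), z in zip(pixels, depths):
        P = z * (K_inv @ np.array([px, py, 1.0]))  # back-project with depth z
        q = K @ (R @ P + t)                        # transform and project
        qx, qy = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
        if 0 <= qy < I_cur.shape[0] and 0 <= qx < I_cur.shape[1]:
            err += (float(I_cur[qy, qx]) - float(I_ref[py, px])) ** 2
    return err
```

In a semi-dense direct method only pixels with sufficient gradient enter this sum, and the pose (R, t) is found by minimizing the error with nonlinear optimization.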
The visual sensor used for visual odometry is generally a monocular camera, a binocular (stereo) camera or an RGB-D camera, and with any of the three the odometer can be implemented by either the feature-point method or the direct method. A purely monocular visual odometer is complex: three-dimensional map points must be reconstructed while the pose is estimated, the estimated pose and three-dimensional points carry no absolute scale, scale drift occurs easily, and good initialization is required. A binocular camera can obtain the depth of pixels in the scene through stereo matching, but the image-processing workload is large and it is unsuitable for dark environments or scenes with weak texture. An RGB-D camera acquires a color image and a depth image of the scene simultaneously; the depth image is obtained by a hardware structure based on infrared structured light or the time-of-flight method, is easily disturbed by sunlight and is therefore mostly used indoors, and the depth-measurement range is limited. A visual odometer based on directly available depth can estimate a motion trajectory with absolute scale, generally avoids scale drift, and yields more accurate pose estimates.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides an RGB-D visual odometry method based on straight-line shadows, which constructs a minimized reprojection error equation from the geometric relationship between the occlusion line constrained by a plane and the occluded point-line, solves the pose through nonlinear optimization to carry out pose estimation, and finally improves the accuracy of pose estimation through the occluded-line matching constraint.
The method comprises the following specific steps:
S1, acquiring color and depth image information of the environment through an RGB-D depth camera sensor, and extracting the plane and straight-line structures in the image using the color and depth information, wherein the straight lines comprise occlusion lines and occluded points;
S2, defining the Plücker coordinate expression of a three-dimensional space straight line L as follows:

L = [u^T, v^T]^T

where the vector u ∈ R^3 is perpendicular to the interpretation plane π_L formed by the origin O and the straight line L, the vector v ∈ R^3 is the direction vector of the straight line L, and L satisfies the Plücker constraint:

u^T v = 0

The Plücker matrix of L is defined as the block matrix (rows separated by semicolons, [·]_x denoting the skew-symmetric cross-product matrix):

L^ = [ [u]_x  v ; -v^T  0 ]   (1)

The plane π_L formed by the origin and the occlusion line is defined as:

π_L = L* O_h   (2)

where O_h = [0, 0, 0, 1]^T is the homogeneous coordinate of the origin and L* is the dual Plücker matrix of L, computed as:

L* = [ [v]_x  u ; -u^T  0 ]   (3)
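The representation of S2 can be sketched numerically as follows; this is an illustrative sketch and the function names are not from the patent:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]_x, so that skew(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def pluecker_line(a, b):
    """Plücker coordinates L = (u, v) of the line through points a and b:
    v is the direction and u = a x b is perpendicular to the plane spanned
    by the origin O and the line, with u . v = 0 (the Plücker constraint)."""
    return np.cross(a, b), b - a

def dual_pluecker_matrix(u, v):
    """Dual Plücker matrix L* = [[v]_x, u; -u^T, 0]; for any homogeneous
    point X, L* @ X is the plane containing the line and X."""
    M = np.zeros((4, 4))
    M[:3, :3] = skew(v)
    M[:3, 3] = u
    M[3, :3] = -u
    return M
```

For the homogeneous origin O_h = (0, 0, 0, 1), `dual_pluecker_matrix(u, v) @ O_h` yields (u, 0), i.e. the plane π_L through the origin of equation (2).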
S3, defining the Plücker coordinate expression of the straight line L_π formed by the occluded points on the constraint plane π = [n^T, d]^T as follows:

L_π = [-d·u^T, (u×n)^T]^T   (4)

where n is the normal vector of the plane and d is the distance from the plane to the origin; the dual Plücker matrix of L_π is:

L_π* = [ [u×n]_x  -d·u ; d·u^T  0 ]   (5)
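Equation (4) can be checked numerically. The sketch below assumes the plane convention n·X + d = 0 (so |d| is the distance from the plane to the origin); the helper names are illustrative:

```python
import numpy as np

def occluded_line(u, n, d):
    """Plücker coordinates of L_pi = (-d*u, u x n): the occluded line where
    the constraint plane pi = (n, d) (points X with n . X + d = 0) meets
    the plane pi_L = (u, 0) through the origin and the occlusion line."""
    return -d * np.asarray(u), np.cross(u, n)

def closest_point(m, v):
    """Closest point to the origin on the line with moment m and direction v."""
    return np.cross(v, m) / (v @ v)
```

For a point p on the intersection, the moment is p × (u×n) = u(p·n) - n(p·u) = -d·u, which is exactly the first block of equation (4).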
S4, using the plane π and the occlusion line L to estimate the pose, where the superscripts c and r denote the current frame and the reference frame respectively; the rigid-body transformation T_cr(π) of the plane is:

n^c = R_cr n^r,  d^c = d^r - (R_cr n^r)^T t_cr   (6)

The rigid-body transformation T_cr(L) of the occlusion line is:

u^c = R_cr u^r + [t_cr]_x R_cr v^r,  v^c = R_cr v^r   (7)

To solve the pose, a reprojection error function (8) is defined, comprising a rotation term E_1(R_cr) (9) and a displacement term E_2(R_cr, t_cr) (10); the rotation variable R_cr is determined by minimizing equation (9), and the displacement variable t_cr is obtained by minimizing expression (10);
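A numerical sketch of the rigid-body transformations (6) and (7), assuming the point convention X^c = R_cr X^r + t_cr and the plane convention n·X + d = 0; the function names are illustrative:

```python
import numpy as np

def transform_plane(R, t, n, d):
    """T_cr(pi): plane (n, d) with n . X + d = 0, mapped by X^c = R X^r + t."""
    n_c = R @ n
    return n_c, d - n_c @ t

def transform_line(R, t, u, v):
    """T_cr(L): Plücker line (moment u, direction v) under the same motion."""
    v_c = R @ v
    return R @ u + np.cross(t, v_c), v_c
```

The residuals between T_cr-transformed reference features and the current-frame observations are the quantities that reprojection errors such as E_1 and E_2 penalize.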
S5, further optimizing the pose through occluded-line matching. The homogeneous coordinate of the origin of the reference frame relative to the current frame is denoted O_h^r; the plane π~ spanned by the occlusion line L^c and the reference-frame origin O_h^r is calculated as:

π~ = L^{c*} O_h^r   (11)

The occluded line L~ is obtained by intersecting the constraint plane π with the plane π~, which yields the dual Plücker matrix:

L~* = π~ π^T - π π~^T   (12)

from which the Plücker coordinates of L~ are read off (equations (13)-(14)). The observations of the occlusion line, the occluded line and the constraint plane in the reference frame are denoted L^r, L_π^r and π^r. By matching the occluded line of the current frame with that of the reference frame, the values of R_cr and t_cr found above are refined: if R_cr and t_cr converge correctly and satisfy π^c = T_cr(π^r) and L^c = T_cr(L^r), then L~ and L_π^c are correctly matched, and a new objective function F(R_cr, t_cr) is defined as equation (15).
the final rotation and translation matrix is found by minimizing equation (15).
The invention has the characteristics and beneficial effects that:
1) The invention provides a novel RGB-D visual odometry method that solves the pose from the geometric relationship constructed between the constraint plane and the occlusion line in the image; unlike traditional odometry methods based on matching plane and line-segment features, this method is more efficient.
2) The projection relationship of the occluded lines is fused to further optimize the pose, improving the accuracy of the algorithm; the planar constraint relationship avoids wrong line-segment feature matches, improving the robustness of the algorithm.
Description of the drawings:
FIG. 1 illustrates the RGB-D visual odometer based on line-segment shadows. Sub-graph A: spatial straight-line shadow model; sub-graph B: occlusion-line motion-estimation model; sub-graph C: occluded-line matching model.
Detailed Description
The technical scheme adopted by the invention is as follows: an RGB-D visual odometry method based on straight line shadow comprises the following steps:
S1, acquiring color and depth image information of the environment through an RGB-D sensor, and extracting the plane and straight-line structures in the image using the color and depth information, wherein the straight lines comprise occlusion lines and occluded points.
S2, defining the Plücker coordinate expression of a three-dimensional space straight line L as follows:

L = [u^T, v^T]^T

where the vector u ∈ R^3 is perpendicular to the interpretation plane π_L (formed by the origin O and the straight line L), the vector v ∈ R^3 is the direction vector of the straight line L, and L satisfies the Plücker constraint:

u^T v = 0

The Plücker matrix of L is defined as the block matrix (rows separated by semicolons, [·]_x denoting the skew-symmetric cross-product matrix):

L^ = [ [u]_x  v ; -v^T  0 ]   (1)

The plane π_L formed by the origin and the occlusion line is defined as:

π_L = L* O_h   (2)

where O_h = [0, 0, 0, 1]^T is the homogeneous coordinate of the origin and L* is the dual Plücker matrix of L, computed as:

L* = [ [v]_x  u ; -u^T  0 ]   (3)
S3, defining the Plücker coordinate expression of the straight line L_π formed by the occluded points on the constraint plane π = [n^T, d]^T as follows:

L_π = [-d·u^T, (u×n)^T]^T   (4)

where n is the normal vector of the plane and d is the distance from the plane to the origin; the dual Plücker matrix of L_π is:

L_π* = [ [u×n]_x  -d·u ; d·u^T  0 ]   (5)
S4, carrying out pose estimation using the plane π and the occlusion line L. The superscripts c and r denote the current frame and the reference frame respectively; the rigid-body transformation T_cr(π) of the plane is:

n^c = R_cr n^r,  d^c = d^r - (R_cr n^r)^T t_cr   (6)

The rigid-body transformation T_cr(L) of the occlusion line is:

u^c = R_cr u^r + [t_cr]_x R_cr v^r,  v^c = R_cr v^r   (7)

To solve the pose, a reprojection error function (8) is defined, comprising a rotation term E_1(R_cr) (9) and a displacement term E_2(R_cr, t_cr) (10); the rotation variable R_cr is determined by minimizing equation (9), and the displacement variable t_cr is obtained by minimizing expression (10).
S5, further optimizing the pose through occluded-line matching. The homogeneous coordinate of the origin of the reference frame relative to the current frame is denoted O_h^r; the plane π~ spanned by the occlusion line L^c and the reference-frame origin O_h^r is calculated as:

π~ = L^{c*} O_h^r   (11)

The occluded line L~ is obtained by intersecting the constraint plane π with the plane π~, which yields the dual Plücker matrix:

L~* = π~ π^T - π π~^T   (12)

from which the Plücker coordinates of L~ are read off (equations (13)-(14)). The observations of the occlusion line, the occluded line and the constraint plane in the reference frame are denoted L^r, L_π^r and π^r. By matching the occluded line of the current frame with that of the reference frame, the values of R_cr and t_cr found above are refined: if R_cr and t_cr converge correctly and satisfy π^c = T_cr(π^r) and L^c = T_cr(L^r), then L~ and L_π^c are correctly matched, and a new objective function F(R_cr, t_cr) is defined as equation (15).
the final rotation and translation matrix is found by minimizing equation (15).
The present invention will be described in further detail with reference to the accompanying drawings and specific examples.
First, an existing algorithm is used to extract the plane and line-segment structures in the image and fit their parameters, and the representations of spatial line segments and planes are defined in the Plücker coordinate system. The occlusion line L in sub-graph A of FIG. 1 is represented as follows:

L = [u^T, v^T]^T

where the vector u ∈ R^3 is perpendicular to the plane π_L (formed by the origin O and the straight line L), the vector v ∈ R^3 is the direction vector of the straight line L, and L satisfies the Plücker constraint:

u^T v = 0

The Plücker matrix of L is defined as the block matrix (rows separated by semicolons, [·]_x denoting the skew-symmetric cross-product matrix):

L^ = [ [u]_x  v ; -v^T  0 ]

The plane π_L is defined as:

π_L = L* O_h

where O_h = [0, 0, 0, 1]^T is the homogeneous coordinate of the origin and L* is the dual Plücker matrix of L, computed as:

L* = [ [v]_x  u ; -u^T  0 ]

Then the straight line L_π formed by the occluded points on the constraint plane π = [n^T, d]^T is expressed in Plücker coordinates as follows:

L_π = [-d·u^T, (u×n)^T]^T

where n is the normal vector of the plane and d is the distance from the plane to the origin; the dual Plücker matrix of L_π is:

L_π* = [ [u×n]_x  -d·u ; d·u^T  0 ]
Second, pose estimation is performed using the plane π, the occlusion line L and the plane π_L, as in sub-graph B. The representation of each variable in the current coordinate system is defined: O_h^r and O_h^c denote the origins of the reference frame and the current frame respectively, L^c denotes the occlusion line, π^c denotes the constraint plane, and π_L^r and π_L^c denote the planes formed by the origin and the occlusion line; the superscripts c and r denote the current frame and the reference frame respectively. The rigid transformation T_cr(π) of the plane π is:

n^c = R_cr n^r,  d^c = d^r - (R_cr n^r)^T t_cr

The rigid-body transformation T_cr(L) of the occlusion line L is:

u^c = R_cr u^r + [t_cr]_x R_cr v^r,  v^c = R_cr v^r

An objective function representing the reprojection error of the constraint plane and the occlusion line between the two frames is then defined, with a rotation term E_1(R_cr) and a displacement term E_2(R_cr, t_cr): minimizing E_1(R_cr) yields the rotation variable R_cr, and minimizing E_2(R_cr, t_cr) yields the displacement variable t_cr.
Finally, the accuracy of pose estimation is improved through the occluded-line matching constraint. In sub-graph C, the homogeneous coordinate of the origin of the reference frame relative to the current frame is O_h^r; the interpretation plane π~ in the reference-frame coordinate system, spanned by the occlusion line L^c and the reference-frame origin O_h^r, is calculated as:

π~ = L^{c*} O_h^r

The occluded line L~ is obtained by intersecting the constraint plane π with the plane π~, which yields the dual Plücker matrix:

L~* = π~ π^T - π π~^T

from which the Plücker coordinates of L~ are read off. The observations of the occlusion line, the occluded line and the constraint plane in the reference frame are denoted L^r, L_π^r and π^r. By matching the occluded line of the current frame with that of the reference frame, the values of R_cr and t_cr are refined: if R_cr and t_cr converge correctly and satisfy π^c = T_cr(π^r) and L^c = T_cr(L^r), then L~ and L_π^c are correctly matched. The orange part in sub-graph C represents the line segments and planes transformed by the pose T_cr; a new reprojection error function F(R_cr, t_cr) is obtained, and the final rotation and translation matrices are solved by minimizing F(R_cr, t_cr).
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (2)
1. A depth camera visual odometry method based on line segment shadows, characterized in that the plane and line-segment structures in an image are extracted and their parameters fitted; the representations of spatial line segments and planes are defined in the Plücker coordinate system; pose estimation is carried out using the plane π, the occlusion line L and the plane π_L; and finally the accuracy of pose estimation is improved through the occluded-line matching constraint.
2. The depth camera visual odometry method based on line segment shadows as claimed in claim 1, characterized by comprising the following steps:
S1, acquiring color and depth image information of the environment through an RGB-D depth camera sensor, and extracting the plane and straight-line structures in the image using the color and depth information, wherein the straight lines comprise occlusion lines and occluded points;
S2, defining the Plücker coordinate expression of a three-dimensional space straight line L as follows:

L = [u^T, v^T]^T

where the vector u ∈ R^3 is perpendicular to the interpretation plane π_L formed by the origin O and the straight line L, the vector v ∈ R^3 is the direction vector of the straight line L, and L satisfies the Plücker constraint:

u^T v = 0

The Plücker matrix of L is defined as the block matrix (rows separated by semicolons, [·]_x denoting the skew-symmetric cross-product matrix):

L^ = [ [u]_x  v ; -v^T  0 ]   (1)

The plane π_L formed by the origin and the occlusion line is defined as:

π_L = L* O_h   (2)

where O_h = [0, 0, 0, 1]^T is the homogeneous coordinate of the origin and L* is the dual Plücker matrix of L, computed as:

L* = [ [v]_x  u ; -u^T  0 ]   (3)
S3, defining the Plücker coordinate expression of the straight line L_π formed by the occluded points on the constraint plane π = [n^T, d]^T as follows:

L_π = [-d·u^T, (u×n)^T]^T   (4)

where n is the normal vector of the plane and d is the distance from the plane to the origin; the dual Plücker matrix of L_π is:

L_π* = [ [u×n]_x  -d·u ; d·u^T  0 ]   (5)
S4, using the plane π and the occlusion line L to estimate the pose, where the superscripts c and r denote the current frame and the reference frame respectively; the rigid-body transformation T_cr(π) of the plane is:

n^c = R_cr n^r,  d^c = d^r - (R_cr n^r)^T t_cr   (6)

The rigid-body transformation T_cr(L) of the occlusion line is:

u^c = R_cr u^r + [t_cr]_x R_cr v^r,  v^c = R_cr v^r   (7)

To solve the pose, a reprojection error function (8) is defined, comprising a rotation term E_1(R_cr) (9) and a displacement term E_2(R_cr, t_cr) (10); the rotation variable R_cr is determined by minimizing equation (9), and the displacement variable t_cr is obtained by minimizing expression (10);
S5, further optimizing the pose through occluded-line matching. The homogeneous coordinate of the origin of the reference frame relative to the current frame is denoted O_h^r; the plane π~ spanned by the occlusion line L^c and the reference-frame origin O_h^r is calculated as:

π~ = L^{c*} O_h^r   (11)

The occluded line L~ is obtained by intersecting the constraint plane π with the plane π~, which yields the dual Plücker matrix:

L~* = π~ π^T - π π~^T   (12)

from which the Plücker coordinates of L~ are read off (equations (13)-(14)). The observations of the occlusion line, the occluded line and the constraint plane in the reference frame are denoted L^r, L_π^r and π^r. By matching the occluded line of the current frame with that of the reference frame, the values of R_cr and t_cr found above are refined: if R_cr and t_cr converge correctly and satisfy π^c = T_cr(π^r) and L^c = T_cr(L^r), then L~ and L_π^c are correctly matched, and a new objective function F(R_cr, t_cr) is defined as equation (15).
the final rotation and translation matrix is found by minimizing equation (15).
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010089290.7A CN111275764B (en) | 2020-02-12 | 2020-02-12 | Depth camera visual mileage measurement method based on line segment shadows |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010089290.7A CN111275764B (en) | 2020-02-12 | 2020-02-12 | Depth camera visual mileage measurement method based on line segment shadows |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111275764A true CN111275764A (en) | 2020-06-12 |
CN111275764B CN111275764B (en) | 2023-05-16 |
Family
ID=71002047
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010089290.7A Active CN111275764B (en) | 2020-02-12 | 2020-02-12 | Depth camera visual mileage measurement method based on line segment shadows |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111275764B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113790719A (en) * | 2021-08-13 | 2021-12-14 | 北京自动化控制设备研究所 | Unmanned aerial vehicle inertia/vision landing navigation method based on line characteristics |
CN117197229A (en) * | 2023-09-22 | 2023-12-08 | 北京科技大学顺德创新学院 | Multi-stage estimation monocular vision odometer method based on brightness alignment |
WO2024018605A1 (en) * | 2022-07-21 | 2024-01-25 | 株式会社ソニー・インタラクティブエンタテインメント | Image information processing device, image information processing method, and program |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170277197A1 (en) * | 2016-03-22 | 2017-09-28 | Sharp Laboratories Of America, Inc. | Autonomous Navigation using Visual Odometry |
CN109029417A (en) * | 2018-05-21 | 2018-12-18 | 南京航空航天大学 | Unmanned plane SLAM method based on mixing visual odometry and multiple dimensioned map |
US20190003836A1 (en) * | 2016-03-11 | 2019-01-03 | Kaarta,Inc. | Laser scanner with real-time, online ego-motion estimation |
CN109166149A (en) * | 2018-08-13 | 2019-01-08 | 武汉大学 | A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU |
CN109648558A (en) * | 2018-12-26 | 2019-04-19 | 清华大学 | Robot non-plane motion localization method and its motion locating system |
CN110060277A (en) * | 2019-04-30 | 2019-07-26 | 哈尔滨理工大学 | A kind of vision SLAM method of multiple features fusion |
CN110782494A (en) * | 2019-10-16 | 2020-02-11 | 北京工业大学 | Visual SLAM method based on point-line fusion |
-
2020
- 2020-02-12 CN CN202010089290.7A patent/CN111275764B/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190003836A1 (en) * | 2016-03-11 | 2019-01-03 | Kaarta,Inc. | Laser scanner with real-time, online ego-motion estimation |
US20170277197A1 (en) * | 2016-03-22 | 2017-09-28 | Sharp Laboratories Of America, Inc. | Autonomous Navigation using Visual Odometry |
CN109029417A (en) * | 2018-05-21 | 2018-12-18 | 南京航空航天大学 | Unmanned plane SLAM method based on mixing visual odometry and multiple dimensioned map |
CN109166149A (en) * | 2018-08-13 | 2019-01-08 | 武汉大学 | A kind of positioning and three-dimensional wire-frame method for reconstructing and system of fusion binocular camera and IMU |
CN109648558A (en) * | 2018-12-26 | 2019-04-19 | 清华大学 | Robot non-plane motion localization method and its motion locating system |
CN110060277A (en) * | 2019-04-30 | 2019-07-26 | 哈尔滨理工大学 | A kind of vision SLAM method of multiple features fusion |
CN110782494A (en) * | 2019-10-16 | 2020-02-11 | 北京工业大学 | Visual SLAM method based on point-line fusion |
Non-Patent Citations (3)
Title |
---|
A. PEREZ-YUS ET AL: "Extrinsic Calibration of Multiple RGB-D Cameras From Line Observations", 《ROBOTICS AND AUTOMATION LETTERS》 * |
G. ZHANG ET AL: "Building a 3-D Line-Based Map Using Stereo SLAM", 《TRANSACTIONS ON ROBOTICS》 * |
PEDRO F ET AL.: "Probabilistic RGB-D odometry based on points, lines and planes under depth uncertainty", 《ROBOTICS AND AUTONOMOUS SYSTEMS》 * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113790719A (en) * | 2021-08-13 | 2021-12-14 | 北京自动化控制设备研究所 | Unmanned aerial vehicle inertia/vision landing navigation method based on line characteristics |
CN113790719B (en) * | 2021-08-13 | 2023-09-12 | 北京自动化控制设备研究所 | Unmanned aerial vehicle inertial/visual landing navigation method based on line characteristics |
WO2024018605A1 (en) * | 2022-07-21 | 2024-01-25 | 株式会社ソニー・インタラクティブエンタテインメント | Image information processing device, image information processing method, and program |
CN117197229A (en) * | 2023-09-22 | 2023-12-08 | 北京科技大学顺德创新学院 | Multi-stage estimation monocular vision odometer method based on brightness alignment |
CN117197229B (en) * | 2023-09-22 | 2024-04-19 | 北京科技大学顺德创新学院 | Multi-stage estimation monocular vision odometer method based on brightness alignment |
Also Published As
Publication number | Publication date |
---|---|
CN111275764B (en) | 2023-05-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108717712B (en) | Visual inertial navigation SLAM method based on ground plane hypothesis | |
CN110569704B (en) | Multi-strategy self-adaptive lane line detection method based on stereoscopic vision | |
CN110223348B (en) | Robot scene self-adaptive pose estimation method based on RGB-D camera | |
CN112902953B (en) | Autonomous pose measurement method based on SLAM technology | |
CN103106688B (en) | Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering | |
CN108682026B (en) | Binocular vision stereo matching method based on multi-matching element fusion | |
CN111275764B (en) | Depth camera visual mileage measurement method based on line segment shadows | |
Liu et al. | Direct visual odometry for a fisheye-stereo camera | |
CN109974707A (en) | A kind of indoor mobile robot vision navigation method based on improvement cloud matching algorithm | |
CN108460779A (en) | A kind of mobile robot image vision localization method under dynamic environment | |
CN109523589B (en) | Design method of more robust visual odometer | |
CN109974743B (en) | Visual odometer based on GMS feature matching and sliding window pose graph optimization | |
CN108629835A (en) | Based on EO-1 hyperion, true coloured picture and the indoor method for reconstructing and system for putting cloud complementation | |
CN105740856A (en) | Method for reading readings of pointer instrument based on machine vision | |
CN102521586B (en) | High-resolution three-dimensional face scanning method for camera phone | |
CN109887029A (en) | A kind of monocular vision mileage measurement method based on color of image feature | |
CN109087325A (en) | A kind of direct method point cloud three-dimensional reconstruction and scale based on monocular vision determines method | |
CN107527366B (en) | Camera tracking method for depth camera | |
CN113393522A (en) | 6D pose estimation method based on monocular RGB camera regression depth information | |
CN110766024A (en) | Visual odometer feature point extraction method based on deep learning and visual odometer | |
CN116222543B (en) | Multi-sensor fusion map construction method and system for robot environment perception | |
Heng et al. | Semi-direct visual odometry for a fisheye-stereo camera | |
CN111882602A (en) | Visual odometer implementation method based on ORB feature points and GMS matching filter | |
CN114494150A (en) | Design method of monocular vision odometer based on semi-direct method | |
CN111951339A (en) | Image processing method for performing parallax calculation by using heterogeneous binocular cameras |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |