CN113793417A - Monocular SLAM method capable of creating large-scale map - Google Patents


Info

Publication number
CN113793417A
CN113793417A (application CN202111119850.XA)
Authority
CN
China
Prior art keywords
image
initial
pose
map
depth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111119850.XA
Other languages
Chinese (zh)
Inventor
段苏洋
苗芷萱
姜天奇
宁馨
吴雨薇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University filed Critical Northeast Forestry University
Priority to CN202111119850.XA
Publication of CN113793417A
Legal status: Pending


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images

Abstract

The invention discloses a monocular SLAM method capable of creating a large-scale map. The method acquires upper-end image information of the space to be mapped through an image acquisition device, acquires non-upper-end image information through a 3D laser radar, performs data processing on the environment images, constructs an initial environment map, and identifies the pose of the image acquisition device. The 3D laser radar provides wide-angle rotational scanning and fixed-point detection, avoiding long-range blind areas during detection and ensuring comprehensive, high-resolution detection data; this optimizes the positioning accuracy of the monocular SLAM and creates richer map information. Two initial images are captured by two image acquisition devices, and an initial SLAM map is constructed from the mutually matched feature points in the initial images. After successful initialization, images captured by the image acquisition devices are used for monocular SLAM mapping, which improves the success rate of mapping and reduces information loss in the map.

Description

Monocular SLAM method capable of creating large-scale map
Technical Field
The invention relates to the field of map creation, in particular to a monocular SLAM method capable of creating a large-scale map.
Background
In recent years, as computer technology, digital image processing technology, and image processing hardware have advanced, computer vision has attracted considerable attention in the field of robotics. SLAM is short for simultaneous localization and mapping, a concept first proposed by Smith, Self and Cheeseman in 1988. It describes a scenario in which a robot starts from an unknown location in an unknown environment and then explores that environment: the robot repeatedly observes the environment during motion, estimates its own pose from the environmental features sensed by its sensors, and incrementally builds a map based on that pose. Real-time monocular SLAM has become an increasingly popular research topic. One of the main advantages of monocular SLAM is also one of its biggest challenges: the inherent scale ambiguity. On one hand, the scale cannot be observed directly and drifts over time, becoming a major source of error. On the other hand, this ambiguity allows seamless switching between environments of different scales, such as indoor desktop scenes and large-scale outdoor scenes. Sensors that recover scale directly, such as depth or stereo cameras, provide reliable measurements but limit flexibility.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the defects of the prior art and provide a monocular SLAM method capable of creating a large-scale map.
In order to solve the technical problems, the invention provides the following technical scheme:
the invention relates to a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of the space to be mapped through an image acquisition device to obtain a primary environment image;
b. acquiring non-upper-end image information of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment map, and identifying the pose of the image acquisition device;
d. transforming the pose of the image acquisition device using the fixed pose transformation relation between the image acquisition device and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. performing monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, combined with the primary environment image obtained by the image acquisition device, to obtain a final environment map.
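Step d is a fixed-extrinsic pose chain: the lidar pose follows from the camera pose and the calibrated camera-to-lidar transform. A minimal sketch (the function name `lidar_pose_from_camera` and all numeric values are illustrative assumptions, not from the patent):

```python
import numpy as np

def lidar_pose_from_camera(T_world_cam: np.ndarray,
                           T_cam_lidar: np.ndarray) -> np.ndarray:
    """Chain the camera's world pose with the fixed camera-to-lidar
    extrinsic (step d) to obtain the lidar's world pose."""
    # Both arguments are 4x4 homogeneous transforms.
    return T_world_cam @ T_cam_lidar

# Illustrative values: camera at the world origin, lidar mounted
# 10 cm above the camera (camera y-axis points down).
T_world_cam = np.eye(4)
T_cam_lidar = np.eye(4)
T_cam_lidar[1, 3] = -0.1
T_world_lidar = lidar_pose_from_camera(T_world_cam, T_cam_lidar)
```

Step e would then apply `T_world_lidar` (or its inverse) to the lidar points before fusing them into the map.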
As a preferred technical scheme of the invention, the image acquisition device comprises a TOF depth camera, a binocular stereo camera, a structured light depth camera and an obstacle avoidance camera.
As a preferred technical solution of the present invention, the primary environment image is a visible light image, and the secondary environment image is a depth image and a visible light image.
As a preferred technical solution of the present invention, the construction of the pose of the initial environment map and the image capturing device mainly includes the following steps:
g. acquiring two initial images respectively captured by any two image acquisition devices, wherein the viewing ranges of the two image acquisition devices at least partially overlap;
h. determining the three-dimensional space positions of the mutually matched feature points according to the pre-calibrated internal and external parameters of the two image acquisition devices and the parallax of the mutually matched feature points in the two initial images, so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
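Step h amounts to two-view triangulation from calibrated intrinsics, extrinsics, and the parallax of a matched feature pair. A minimal sketch using the standard linear (DLT) method — the intrinsics, the 0.5 m baseline, and the test point below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one pair of matched feature points.
    P1, P2: 3x4 projection matrices K[R|t]; u1, u2: pixel coords (x, y)."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # homogeneous -> Euclidean 3D point

# Illustrative setup: shared intrinsics, second camera offset 0.5 m in x.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.], [0.]])])
X_true = np.array([1.0, 0.5, 4.0])
x1 = P1 @ np.append(X_true, 1.0)   # project into each view
x2 = P2 @ np.append(X_true, 1.0)
X_rec = triangulate(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2])
```

With exact matches the recovered point equals the true one; with noisy matches the SVD yields the least-squares solution.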
As a preferred technical solution of the present invention, the acquiring of two initial images respectively captured by any two image acquisition devices comprises: acquiring an image A and an image B respectively captured by the two image acquisition devices, wherein the focal lengths of the two image acquisition devices are different; processing the image A and the image B according to the image effect obtained by shooting with the same focal length; and taking the processed image A and image B as the two initial images.
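One simple way to make two images with different focal lengths comparable, as this step requires, is to rescale pixel coordinates about the principal point by the ratio of a common target focal length to each camera's own. This sketch is an assumption about how such processing could look (pinhole model, distortion ignored); `normalize_focal` and all numeric values are illustrative:

```python
import numpy as np

def normalize_focal(points_px, K, f_target):
    """Rescale pixel coordinates so the points appear as if imaged with
    focal length f_target (pinhole model, lens distortion ignored)."""
    fx, fy = K[0, 0], K[1, 1]      # per-axis focal lengths
    cx, cy = K[0, 2], K[1, 2]      # principal point
    pts = np.asarray(points_px, dtype=float)
    out = np.empty_like(pts)
    out[:, 0] = (pts[:, 0] - cx) * (f_target / fx) + cx
    out[:, 1] = (pts[:, 1] - cy) * (f_target / fy) + cy
    return out

# Illustrative intrinsics: f = 400 px, principal point (320, 240);
# normalizing to f_target = 800 doubles offsets from the principal point.
K = np.array([[400., 0., 320.], [0., 400., 240.], [0., 0., 1.]])
pts = normalize_focal([[360., 280.]], K, 800.)
```

After this normalization, matched points from image A and image B live in a common pixel scale, so their parallax can be used directly in step h.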
As a preferred embodiment of the present invention, the monocular SLAM is constructed by projecting the ray through a point p in the reference frame into the current frame; the resulting line is also called the epipolar line, and the process of locating p along it is called epipolar search. After the epipolar search, the depth of p can be determined by triangulation, and the depth estimate is updated using a filter.
As a preferred technical scheme of the invention, the epipolar search specifically comprises constructing two three-dimensional points p1 and p2 on the depth extension line of the seed p(x, y). The two three-dimensional points originate from the same pixel but are assigned different depths; in depth filters they are typically set to p1 = (x, y, z - nσ) and p2 = (x, y, z + nσ), where z is the initial depth of the seed, σ is the standard deviation of the depth, and n can be adjusted to different values such as 1, 2, 3, etc., typically 3.
Project p1 and p2 into the current frame using the frame pose to obtain the projection points u1 and u2; connecting u1 and u2 yields the epipolar line.
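The projection of p1 and p2 described above can be sketched as follows. Here the seed is taken as a 3D point in the reference camera frame; `epipolar_segment`, the intrinsics, and the poses are illustrative assumptions, not values from the patent:

```python
import numpy as np

def epipolar_segment(p_ref, sigma, n, T_cur_ref, K):
    """Project the seed's depth interval [z - n*sigma, z + n*sigma] into
    the current frame; return the endpoints u1, u2 of the epipolar segment.
    p_ref: the seed as a 3D point (x, y, z) in the reference camera frame."""
    x, y, z = p_ref
    endpoints = []
    for d in (z - n * sigma, z + n * sigma):
        p = np.array([x * d / z, y * d / z, d, 1.0])  # same ray, depth d
        q = T_cur_ref @ p                             # into current frame
        u = K @ q[:3]
        endpoints.append(u[:2] / u[2])                # perspective division
    return endpoints

# Illustrative seed at (0.5, 0.5, 2.0) with sigma = 0.2, n = 3.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
u1, u2 = epipolar_segment((0.5, 0.5, 2.0), 0.2, 3, np.eye(4), K)
```

With the identity pose both endpoints coincide at the seed's own pixel (both depths lie on the same viewing ray), which is a useful sanity check; any relative motion between the frames separates u1 and u2 into a proper search segment.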
As a preferred technical solution of the present invention, the filter is a depth filter, a statistical filter, or a voxel filter.
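The depth update mentioned above is commonly realized as a Gaussian depth filter. A sketch of one fusion step under a pure-Gaussian assumption (`fuse_depth` and the numbers are illustrative; practical depth filters often use a richer Gaussian-uniform mixture model that also tracks an inlier ratio):

```python
def fuse_depth(mu, var, mu_obs, var_obs):
    """One Gaussian depth-filter update: fuse the current depth estimate
    N(mu, var) with a newly triangulated observation N(mu_obs, var_obs)."""
    var_new = var * var_obs / (var + var_obs)            # variance shrinks
    mu_new = (var_obs * mu + var * mu_obs) / (var + var_obs)
    return mu_new, var_new

# Illustrative: prior depth 2.0 m (variance 0.04) fused with a newly
# triangulated depth of 2.2 m at equal variance.
mu, var = fuse_depth(2.0, 0.04, 2.2, 0.04)
```

Repeating this update over successive frames shrinks the variance until the seed's depth converges and its map point can be inserted into the map.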
Compared with the prior art, the invention has the following beneficial effects:
1: according to the invention, wide-angle rotation scanning and fixed-point detection are realized through the 3d laser radar, a blind area with a large distance in the detection process is avoided, meanwhile, detailed detection information of an object can be obtained, the information of a reticular barrier and the side environment of the robot is detected, and the comprehensiveness and high resolution of the detection data are ensured.
2: the invention optimizes the positioning precision of monocular SLAM and creates richer map information. The current pose information and the map are calculated through the acquired image based on the SLAM, a depth sensor which is horizontally placed is newly added to acquire the depth information, real-time operation can be realized, more accurate pose can be provided, richer map information is created, and the advantages of the SLAM in positioning navigation and automatic obstacle avoidance are brought into play.
3: according to the invention, two initial images are obtained by respectively shooting through two image acquisition devices, an initial SLAM map is constructed by using the mutually matched characteristic points in the initial images, and after the initialization is successful, the images are shot by using the image acquisition devices to carry out image construction of the monocular SLAM, so that the success rate of image construction is improved, and the information loss in the map is reduced.
Detailed Description
The following description of the preferred embodiments of the present invention is provided for the purpose of illustration and description, and is in no way intended to limit the invention.
Example 1
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of the space to be mapped through a TOF depth camera, a structured light depth camera, and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring non-upper-end image information of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment map, and identifying the poses of the TOF depth camera, structured light depth camera, and obstacle avoidance camera;
d. transforming the poses of the TOF depth camera, structured light depth camera, and obstacle avoidance camera using the fixed pose transformation relation between these cameras and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. performing monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, combined with the primary environment image obtained by the TOF depth camera, structured light depth camera, and obstacle avoidance camera, to obtain a final environment map.
Specifically, the establishment of the initial environment image and the poses of the TOF depth camera, the structured light depth camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the TOF depth camera, structured light depth camera, or obstacle avoidance camera, which comprises: acquiring an image A and an image B respectively captured by the two selected cameras, wherein the focal lengths of the two cameras are different; processing the image A and the image B according to the image effect obtained by shooting with the same focal length, and taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. determining the three-dimensional space positions of the mutually matched feature points according to the pre-calibrated internal and external parameters of the two cameras and the parallax of the mutually matched feature points in the two initial images, so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
Example 2
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of the space to be mapped through a TOF depth camera and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring non-upper-end image information of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment map, and identifying the poses of the TOF depth camera and obstacle avoidance camera;
d. transforming the poses of the TOF depth camera and obstacle avoidance camera using the fixed pose transformation relation between these cameras and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. performing monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, combined with the primary environment image obtained by the TOF depth camera and obstacle avoidance camera, to obtain a final environment map.
Specifically, the construction of the initial environment image and the poses of the TOF depth camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the TOF depth camera and obstacle avoidance camera, which comprises: acquiring an image A and an image B respectively captured by the two selected cameras, wherein the focal lengths of the two cameras are different; processing the image A and the image B according to the image effect obtained by shooting with the same focal length, and taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. determining the three-dimensional space positions of the mutually matched feature points according to the pre-calibrated internal and external parameters of the two cameras and the parallax of the mutually matched feature points in the two initial images, so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
Example 3
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of the space to be mapped through a binocular stereo camera, a structured light depth camera, and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring non-upper-end image information of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment map, and identifying the poses of the binocular stereo camera, structured light depth camera, and obstacle avoidance camera;
d. transforming the poses of the binocular stereo camera, structured light depth camera, and obstacle avoidance camera using the fixed pose transformation relation between these cameras and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. performing monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, combined with the primary environment image obtained by the binocular stereo camera, structured light depth camera, and obstacle avoidance camera, to obtain a final environment map.
Specifically, the establishment of the initial environment image and the poses of the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the binocular stereo camera, structured light depth camera, and/or obstacle avoidance camera, which comprises: acquiring an image A and an image B respectively captured by the two selected cameras, wherein the focal lengths of the two cameras are different; processing the image A and the image B according to the image effect obtained by shooting with the same focal length, and taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. determining the three-dimensional space positions of the mutually matched feature points according to the pre-calibrated internal and external parameters of the two cameras and the parallax of the mutually matched feature points in the two initial images, so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
Example 4
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of the space to be mapped through a TOF depth camera, a binocular stereo camera, and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring non-upper-end image information of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment map, and identifying the poses of the TOF depth camera, binocular stereo camera, and obstacle avoidance camera;
d. transforming the poses of the TOF depth camera, binocular stereo camera, and obstacle avoidance camera using the fixed pose transformation relation between these cameras and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. performing monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, combined with the primary environment image obtained by the TOF depth camera, binocular stereo camera, and obstacle avoidance camera, to obtain a final environment map.
Specifically, the establishment of the initial environment image and the poses of the TOF depth camera, the binocular stereo camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the TOF depth camera, binocular stereo camera, or obstacle avoidance camera, which comprises: acquiring an image A and an image B respectively captured by the two selected cameras, wherein the focal lengths of the two cameras are different; processing the image A and the image B according to the image effect obtained by shooting with the same focal length, and taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. determining the three-dimensional space positions of the mutually matched feature points according to the pre-calibrated internal and external parameters of the two cameras and the parallax of the mutually matched feature points in the two initial images, so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
Example 5
The invention provides a monocular SLAM method capable of creating a large-scale map, which specifically comprises the following steps:
a. acquiring upper-end image information of the space to be mapped through a TOF depth camera, a binocular stereo camera, a structured light depth camera, and an obstacle avoidance camera to obtain a primary environment image;
b. acquiring non-upper-end image information of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment map, and identifying the poses of the TOF depth camera, binocular stereo camera, structured light depth camera, and obstacle avoidance camera;
d. transforming the poses of the TOF depth camera, binocular stereo camera, structured light depth camera, and obstacle avoidance camera using the fixed pose transformation relation between these cameras and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. performing monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, combined with the primary environment image obtained by the TOF depth camera, binocular stereo camera, structured light depth camera, and obstacle avoidance camera, to obtain a final environment map.
Specifically, the establishment of the initial environment image and the poses of the TOF depth camera, the binocular stereo camera, the structured light depth camera and the obstacle avoidance camera mainly comprises the following steps:
g. acquiring two initial images respectively captured by any two of the TOF depth camera, binocular stereo camera, structured light depth camera, or obstacle avoidance camera, which comprises: acquiring an image A and an image B respectively captured by the two selected cameras, wherein the focal lengths of the two cameras are different; processing the image A and the image B according to the image effect obtained by shooting with the same focal length, and taking the processed image A and image B as the two initial images, wherein the viewing ranges of the two cameras at least partially overlap;
h. determining the three-dimensional space positions of the mutually matched feature points according to the pre-calibrated internal and external parameters of the two cameras and the parallax of the mutually matched feature points in the two initial images, so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
The invention realizes wide-angle rotational scanning and fixed-point detection through the 3D laser radar, avoiding long-range blind areas during detection while acquiring detailed detection information of objects, detecting mesh-like obstacles and the robot's lateral environment, and ensuring comprehensive, high-resolution detection data. It optimizes the positioning accuracy of monocular SLAM and creates richer map information: the current pose information and the map are calculated from the acquired images based on SLAM, and a horizontally mounted depth sensor is added to acquire depth information, which enables real-time operation, provides a more accurate pose, creates richer map information, and exploits the advantages of SLAM in positioning, navigation, and automatic obstacle avoidance. Two initial images are obtained by two image acquisition devices, an initial SLAM map is constructed from the mutually matched feature points in the initial images, and after successful initialization, images captured by the image acquisition devices are used for monocular SLAM mapping, improving the success rate of mapping and reducing information loss in the map.
Finally, it should be noted that: although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that changes may be made in the embodiments and/or equivalents thereof without departing from the spirit and scope of the invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (8)

1. A monocular SLAM method capable of creating a large-scale map is characterized by specifically comprising the following steps:
a. acquiring upper-end image information of the space to be mapped through an image acquisition device to obtain a primary environment image;
b. acquiring non-upper-end image information of the space to be mapped through a 3D laser radar to obtain a secondary environment image;
c. performing data processing on the primary environment image, constructing an initial environment map, and identifying the pose of the image acquisition device;
d. transforming the pose of the image acquisition device using the fixed pose transformation relation between the image acquisition device and the 3D laser radar to obtain the pose of the 3D laser radar;
e. transforming the secondary environment image using the pose of the 3D laser radar to obtain a pose-matched secondary environment image;
f. performing monocular SLAM mapping on the pose-matched secondary environment image and the initial SLAM map, combined with the primary environment image obtained by the image acquisition device, to obtain a final environment map.
2. The monocular SLAM method of claim 1, wherein the image capture devices comprise TOF depth cameras, binocular stereo cameras, structured light depth cameras, and obstacle avoidance cameras.
3. The monocular SLAM method of claim 1, wherein the primary environmental image is a visible light image and the secondary environmental image is a depth image and a visible light image.
4. The monocular SLAM method of claim 1, wherein the construction of the initial environment map and the pose of the image capture device comprises the following steps:
g. acquiring two initial images respectively captured by any two image acquisition devices, wherein the viewing ranges of the two image acquisition devices at least partially overlap;
h. determining the three-dimensional space positions of the mutually matched feature points according to the pre-calibrated internal and external parameters of the two image acquisition devices and the parallax of the mutually matched feature points in the two initial images, so as to obtain the map points corresponding to the mutually matched feature points, and constructing an initial SLAM map to complete initialization.
5. The monocular SLAM method of claim 4, wherein acquiring the two initial images captured by any two image acquisition devices comprises: acquiring an image A and an image B captured respectively by two image acquisition devices with different focal lengths; processing image A and image B so that they match the effect of images captured at the same focal length; and taking the processed image A and image B as the two initial images.
6. The monocular SLAM method of claim 1, wherein the monocular SLAM mapping projects a point p of a reference frame onto a ray in the current frame; this line is called the epipolar line, and the process of locating p along it is called epipolar search. After the epipolar search, the depth of p is determined by triangulation, and the depth estimate is updated with a filter.
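Claim 6 does not fix the filter's update rule; a common choice in monocular depth estimation is a Gaussian depth filter, whose update multiplies the current estimate with the newly triangulated observation. A minimal sketch of that fusion step (an assumption, not necessarily the filter the claim intends):

```python
def fuse_depth(mu: float, var: float, mu_obs: float, var_obs: float):
    """Fuse the current depth estimate N(mu, var) with a newly
    triangulated observation N(mu_obs, var_obs); the product of two
    Gaussians is again Gaussian, with a strictly smaller variance."""
    mu_new = (var_obs * mu + var * mu_obs) / (var + var_obs)
    var_new = (var * var_obs) / (var + var_obs)
    return mu_new, var_new

# Two equally uncertain observations average, and the variance halves.
mu, var = fuse_depth(1.0, 0.04, 1.2, 0.04)
```

The shrinking variance is what lets claim 7 narrow the epipolar search window around the seed over successive frames.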
7. The monocular SLAM method of claim 6, wherein the epipolar search is performed by constructing two three-dimensional points p1 and p2 on the depth extension line of the seed p(x, y); the two points originate from the same pixel but are assigned different depths. In the depth filter they are typically set to p1(x, y, z − nσ) and p2(x, y, z + nσ), where z is the initial depth of the seed, σ is the standard deviation of the depth, and n can take different values such as 1, 2, 3, ..., with 3 being typical;
projecting p1 and p2 into the current frame using its pose gives the projection points u1 and u2, and connecting u1 and u2 yields the epipolar line.
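The endpoint construction can be sketched by lifting the seed pixel to the two depth hypotheses z ∓ nσ and projecting both into the current frame. The claim writes the hypotheses directly as (x, y, z ∓ nσ); the sketch below makes the back-projection through the intrinsics explicit (K, T_cur_ref, the function name, and all example values are assumptions):

```python
import numpy as np

def epipolar_segment(K: np.ndarray, T_cur_ref: np.ndarray,
                     x: float, y: float, z: float, sigma: float, n: int = 3):
    """Bound the epipolar search: lift the seed pixel (x, y) to the two
    hypothesised depths z - n*sigma and z + n*sigma along its viewing ray,
    transform both 3D points into the current frame with its pose, and
    project them with the intrinsics K. The segment u1-u2 is the search
    region on the epipolar line."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    ray = np.array([(x - cx) / fx, (y - cy) / fy, 1.0])  # unit-depth ray in the reference frame
    endpoints = []
    for d in (z - n * sigma, z + n * sigma):
        p_ref = np.append(ray * d, 1.0)   # homogeneous 3D point at hypothesised depth
        p_cur = T_cur_ref @ p_ref         # into the current frame
        u = fx * p_cur[0] / p_cur[2] + cx
        v = fy * p_cur[1] / p_cur[2] + cy
        endpoints.append((u, v))
    return endpoints  # projections u1, u2 of the near and far hypotheses

# Illustrative call: seed at (420, 240), depth 2.0 m, sigma 0.1 m, camera
# translated 0.1 m along x between the reference and current frames.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
T_cur_ref = np.eye(4)
T_cur_ref[0, 3] = 0.1
u1, u2 = epipolar_segment(K, T_cur_ref, 420.0, 240.0, 2.0, 0.1)
```

As the filter variance σ shrinks, the two hypotheses converge and the searched segment collapses toward a single pixel.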
8. The monocular SLAM method of claim 6, wherein the filter is a depth filter, a statistical filter, or a voxel filter.
CN202111119850.XA 2021-09-24 2021-09-24 Monocular SLAM method capable of creating large-scale map Pending CN113793417A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111119850.XA CN113793417A (en) 2021-09-24 2021-09-24 Monocular SLAM method capable of creating large-scale map

Publications (1)

Publication Number Publication Date
CN113793417A true CN113793417A (en) 2021-12-14

Family

ID=78879172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111119850.XA Pending CN113793417A (en) 2021-09-24 2021-09-24 Monocular SLAM method capable of creating large-scale map

Country Status (1)

Country Link
CN (1) CN113793417A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105809687A (en) * 2016-03-08 2016-07-27 清华大学 Monocular vision ranging method based on edge point information in image
CN109887087A (en) * 2019-02-22 2019-06-14 广州小鹏汽车科技有限公司 A kind of SLAM of vehicle builds drawing method and system
CN110163963A (en) * 2019-04-12 2019-08-23 南京华捷艾米软件科技有限公司 A kind of building based on SLAM and builds drawing method at map device
CN112785702A (en) * 2020-12-31 2021-05-11 华南理工大学 SLAM method based on tight coupling of 2D laser radar and binocular camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination