CN111583331A - Method and apparatus for simultaneous localization and mapping - Google Patents

Method and apparatus for simultaneous localization and mapping

Info

Publication number
CN111583331A
CN111583331A (application number CN202010396437.7A)
Authority
CN
China
Prior art keywords
key frame
pose
image
depth information
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010396437.7A
Other languages
Chinese (zh)
Other versions
CN111583331B (en)
Inventor
谷晓琳
杨敏
张燚
曾峥
刘科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Sunwise Space Technology Ltd
Original Assignee
Beijing Sunwise Space Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Sunwise Space Technology Ltd filed Critical Beijing Sunwise Space Technology Ltd
Priority to CN202010396437.7A priority Critical patent/CN111583331B/en
Publication of CN111583331A publication Critical patent/CN111583331A/en
Application granted granted Critical
Publication of CN111583331B publication Critical patent/CN111583331B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30181 Earth observation
    • G06T2207/30184 Infrastructure
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of robots and intelligent equipment, and discloses a method for simultaneous localization and mapping. The method comprises the following steps: acquiring an image to be positioned and depth information corresponding to the image to be positioned; acquiring an estimated pose of the device to be positioned according to the image to be positioned and the corresponding depth information; acquiring a first map point according to the image to be positioned; acquiring a first feature point corresponding to the first map point on the image to be positioned; and optimizing the estimated pose according to the reprojection error and the depth error of the first map point and the first feature point to obtain a first optimized pose. The method makes the obtained pose more accurate, which improves the positioning accuracy of the device to be positioned and yields a more accurate navigation map when a navigation map is constructed. The application also discloses an apparatus for simultaneous localization and mapping.

Description

Method and apparatus for simultaneous localization and mapping
Technical Field
The present application relates to the field of robotics and intelligent equipment technology, for example, to a method and apparatus for simultaneous localization and mapping.
Background
At present, SLAM (Simultaneous Localization And Mapping) technology is used to generate an environment map and to localize a robot autonomously; it provides a solid foundation for subsequent path planning, autonomous exploration and navigation by the robot, and is an important component of a robot. Vision-based SLAM technology can provide rich scene information and enable positioning. Visual SLAM refers to SLAM technology with a camera as the main sensor.
In the process of implementing the embodiments of the present disclosure, it is found that at least the following problems exist in the related art:
existing visual SLAM technology has difficulty acquiring accurate pose information of the camera, so the resulting positioning is not accurate enough.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, nor is it intended to identify key/critical elements or to delineate the scope of such embodiments, but rather serves as a prelude to the more detailed description that is presented later.
The embodiment of the disclosure provides a method and a device for simultaneous positioning and map building, so that a more accurate pose can be obtained in the process of simultaneous positioning and map building.
In some embodiments, the method comprises:
acquiring an image to be positioned and depth information corresponding to the image to be positioned;
acquiring an estimated pose of the equipment to be positioned according to the image to be positioned and the depth information corresponding to the image to be positioned;
acquiring a first map point according to the image to be positioned;
acquiring a first feature point corresponding to the first map point on the image to be positioned;
and optimizing the estimated pose according to the reprojection error and the depth error of the first map point and the first feature point to obtain a first optimized pose.
In some embodiments, the apparatus comprises a processor and a memory storing program instructions, the processor being configured to perform the above-described method for simultaneous localization and mapping when executing the program instructions.
The method and the device for simultaneous positioning and map building provided by the embodiment of the disclosure can achieve the following technical effects: the estimated pose of the equipment to be positioned is directly obtained through the image to be positioned, then the first map point is obtained from the image to be positioned, and the estimated pose of the equipment to be positioned is optimized according to the reprojection error and the depth error of the first map point and the corresponding first feature point, so that the optimized pose is obtained. The acquired pose can be more accurate, so that the positioning accuracy of the equipment to be positioned is improved, and a more accurate navigation map can be obtained when the navigation map is constructed.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings, which are not limiting, and in which elements having the same reference numerals are shown as like elements, wherein:
FIG. 1 is a schematic diagram of a method for simultaneous localization and mapping provided by an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an apparatus for simultaneous localization and mapping according to an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and elements of the disclosed embodiments can be understood in detail, a more particular description of the disclosed embodiments, briefly summarized above, may be had by reference to the embodiments, some of which are illustrated in the appended drawings. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
The terms "first," "second," and the like in the description and in the claims, and the above-described drawings of embodiments of the present disclosure, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances such that embodiments of the present disclosure described herein may be made. Furthermore, the terms "comprising" and "having," as well as any variations thereof, are intended to cover non-exclusive inclusions.
The term "plurality" means two or more unless otherwise specified.
In the embodiment of the present disclosure, the character "/" indicates that the preceding and following objects are in an or relationship. For example, A/B represents: a or B.
The term "and/or" is an associative relationship that describes objects, meaning that three relationships may exist. For example, a and/or B, represents: a or B, or A and B.
As shown in fig. 1, an embodiment of the present disclosure provides a method for simultaneous localization and mapping, including:
step S101, acquiring an image to be positioned and depth information corresponding to the image to be positioned;
step S102, obtaining an estimated pose of the equipment to be positioned according to the image to be positioned and the depth information corresponding to the image to be positioned;
step S103, acquiring a first map point according to an image to be positioned;
step S104, acquiring a first feature point corresponding to the first map point on the image to be positioned;
step S105, optimizing the estimated pose according to the reprojection error and the depth error of the first map point and the first feature point to obtain a first optimized pose of the device to be positioned.
By adopting the method for simultaneous positioning and map building provided by the embodiment of the disclosure, the estimated pose of the device to be positioned is directly obtained through the image to be positioned, then the first map point is obtained from the image to be positioned, and the estimated pose of the device to be positioned is optimized according to the reprojection error and the depth error of the first map point and the corresponding first feature point, so as to obtain the optimized pose. The obtained pose can be more accurate, and therefore the positioning accuracy of the equipment to be positioned is improved.
In some embodiments, the image to be positioned and its corresponding depth map are acquired by a depth camera of the device to be positioned. In some embodiments, the acquired depth map is sent to a preset depth network model for optimization, and depth information, i.e., a depth value, of the image to be positioned is obtained. Optionally, the image to be located is a color image.
Optionally, the image to be located and the depth map are preprocessed. Optionally, the preprocessing includes distortion correction processing, alignment processing of the color image and the depth image, filtering processing of the depth image, and grayscale image conversion processing of the color image. Therefore, the second feature point of the image to be positioned can be extracted and the estimation pose can be obtained conveniently.
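As an illustration only, the preprocessing steps described above could be organized as in the following sketch; the helper name preprocess_frame, the parameter layout, and the choice of a median filter for the depth map are assumptions rather than requirements of the present application (OpenCV is assumed for undistortion and grayscale conversion, and the color and depth images are assumed to be already aligned):

```python
import cv2
import numpy as np

def preprocess_frame(color, depth, K, dist_coeffs):
    """Prepare a color/depth pair for tracking (hypothetical helper).

    color: HxWx3 uint8 image; depth: HxW float32 depth map already aligned
    to the color camera; K: 3x3 intrinsics; dist_coeffs: distortion vector.
    """
    # Distortion correction of the color image.
    color_rect = cv2.undistort(color, K, dist_coeffs)

    # Depth filtering (simple median smoothing; the application only
    # requires that the depth map be filtered, not this specific filter).
    depth_filt = cv2.medianBlur(depth.astype(np.float32), 5)

    # Grayscale conversion for feature extraction and photometric tracking.
    gray = cv2.cvtColor(color_rect, cv2.COLOR_BGR2GRAY)
    return gray, depth_filt
```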
Optionally, obtaining an estimated pose of the device to be positioned according to the image to be positioned and the depth information corresponding to the image to be positioned, includes:
acquiring depth information corresponding to the reference key frame and the pose of the equipment to be positioned in the reference key frame according to the image to be positioned and the depth information corresponding to the image to be positioned;
and obtaining the estimated pose of the equipment to be positioned according to the pose of the equipment to be positioned in the reference key frame and the depth information corresponding to the reference key frame.
Optionally, obtaining depth information corresponding to the reference key frame and a pose of the device to be positioned in the reference key frame according to the image to be positioned and the depth information corresponding to the image to be positioned, including:
taking the first image to be positioned as a reference key frame, and taking the depth information of the first image to be positioned as the depth information of the reference key frame;
and the pose of the equipment to be positioned in the reference key frame is a preset pose.
In some embodiments, a first image to be located acquired after the device to be located is started is selected as a reference key frame.
Optionally, obtaining depth information corresponding to the reference key frame and a pose of the device to be positioned in the reference key frame according to the image to be positioned and the depth information corresponding to the image to be positioned, including:
selecting an image to be positioned meeting a first set condition as a local key frame, and taking the depth information of the image to be positioned meeting the first set condition as the depth information of the local key frame;
sequentially putting the local key frames into a sliding window;
taking pixel points meeting a second set condition in the sliding window as first effective key points;
comparing the pixel points in each local key frame with a third set condition, and determining a reference key frame and depth information corresponding to the reference key frame according to a comparison result;
acquiring photometric errors of the first effective key points on all local key frames according to the depth information of the reference key frame;
and obtaining the pose of the equipment to be positioned in the reference key frame according to the photometric errors of the first effective key point on all the local key frames.
Optionally, the first set condition is any one of the following conditions, and an image to be positioned satisfying any one of them is selected as a local key frame:
the number of local key frames in the sliding window is smaller than a third set threshold Th1; or,
the distance between the pose of the image to be positioned and the pose of the nearest local key frame is greater than a fourth set threshold Th2; or,
the sum of the photometric errors of the second effective key points on the image to be positioned is larger than a set multiple of the photometric error of the existing reference key frame.
Therefore, a proper local key frame can be selected and proper key points can be recovered, so that the localization and the constructed map model are more accurate; a sketch of this selection logic is given below.
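A minimal sketch, assuming the three quantities have already been computed for the current image to be positioned; the function name and the default threshold values are illustrative only:

```python
def is_local_keyframe(num_window_keyframes, pose_dist_to_nearest,
                      frame_photo_error, ref_photo_error,
                      Th1=7, Th2=0.1, err_multiple=2.0):
    """Return True if the image to be positioned should become a local
    key frame, per the three alternative conditions above.
    Threshold defaults are illustrative, not values from the application."""
    if num_window_keyframes < Th1:           # sliding window not yet full
        return True
    if pose_dist_to_nearest > Th2:           # moved far from the nearest local key frame
        return True
    if frame_photo_error > err_multiple * ref_photo_error:  # photometric error grew too much
        return True
    return False
```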
Optionally, the second effective key point on the image to be positioned is a pixel point on the image to be positioned, where the pixel gradient is greater than the first set threshold.
Optionally, the second setting condition is: the pixel gradient is greater than a first set threshold. And taking the pixel points with the pixel gradient larger than a first set threshold value in the sliding window as first effective key points.
Optionally, the third set condition is: the pixel point is not a projected pixel point and its pixel gradient is greater than the first set threshold. Comparing the pixel points in each local key frame with the third set condition and determining the reference key frame according to the comparison result comprises:
for each local key frame, projecting the first effective key points of the previous local key frame onto it to obtain projected pixel points; comparing the pixel points in the local key frame with the third set condition, and taking the pixel points on the projected local key frame that are not projected pixel points and whose pixel gradient is greater than the first set threshold as reference key points; and taking the local key frame containing the reference key points as the reference key frame. Optionally, the depth information corresponding to the local key frame containing the reference key points is taken as the depth information corresponding to the reference key frame.
Optionally, in the sliding window, the local keyframes with the number of the first valid keypoints being less than the second set threshold are removed from the sliding window, and the first valid keypoints on the local keyframes are removed.
Optionally, the depth information of the reference key point relative to the local key frame is obtained according to the depth map of the local key frame, and the attribute information of the reference key point is updated.
Optionally, obtaining the pose of the device to be positioned in the reference key frame and the depth information of the first effective key point in the reference key frame according to the photometric errors of the first effective key point on all local key frames;
by calculation of
Figure BDA0002487756730000051
Acquiring pose T of equipment to be positioned in reference key framejAnd the depth of the first significant keypoint k in the reference keyframeDegree information dj
Wherein, TjPose of the device to be positioned in the reference keyframe, djDepth information of the first effective key point k in the reference key frame is obtained, and F is a reference key frame set in the sliding window; knIs a set of first significant keypoints, Ok(k) All local key frame sets of a first effective key point k can be observed in a sliding window, wherein m and j are positive integers;
Figure BDA0002487756730000061
the photometric error for the first valid keypoint k on the mth local keyframe where point k can be observed.
The photometric error of the first effective key point $k$ on the $m$-th local key frame in which $k$ can be observed is obtained by computing
$$E_{km} = \omega_k \left\| \left( I_m[k_m] - b_m \right) - \frac{t_m e^{a_m}}{t_j e^{a_j}} \left( I_j[k'] - b_j \right) \right\|,$$
wherein $\omega_k$ is the weight of the first effective key point $k$, $I_m[k_m]$ is the gray value of the first effective key point $k$ on the $m$-th local key frame in which $k$ can be observed, $I_j[k']$ is the gray value of the first effective key point $k$ in the reference key frame $I_j$, $t_m$ is the exposure time of the $m$-th local key frame in which $k$ can be observed, $a_m$ and $b_m$ are the first and second photometric correction parameters of the $m$-th local key frame in which $k$ can be observed, $t_j$ is the exposure time of the reference key frame, $a_j$ and $b_j$ are the first and second photometric correction parameters of the reference key frame, $e$ is the natural constant, and $k_m$ is the coordinate of the first effective key point $k$ projected into the $m$-th local key frame in which $k$ can be observed.
The coordinate $k_m$ of the first effective key point $k$ in the $m$-th local key frame in which $k$ can be observed is obtained by computing
$$k_m = \Pi_c\left( T_m T_j^{-1} \Pi_c^{-1}(k', d_j) \right),$$
wherein $T_m$ is the pose of the $m$-th local key frame in which $k$ can be observed, $\Pi_c$ is the internal reference matrix of the depth camera, $k'$ is the coordinate of the first effective key point $k$ in the reference key frame, and $d_j$ is the depth information of the first effective key point $k$ in the reference key frame.
Therefore, the local key frames are optimized through the sliding window, and the pose of the device to be positioned in the reference key frame and the depth information of the first effective key points are obtained from the photometric errors of the first effective key points in the sliding window on all local key frames, so that the obtained pose of the device to be positioned in the reference key frame is more accurate.
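The per-point photometric term used in this sliding-window optimization can be sketched as follows; this is a schematic reconstruction consistent with the variables defined above (exposure times t, affine photometric parameters a and b, point weight ω), with nearest-neighbour sampling standing in for the sub-pixel interpolation that would be used in practice:

```python
import numpy as np

def photometric_error(I_m, k_m, I_j, k_ref, t_m, a_m, b_m, t_j, a_j, b_j, w_k):
    """Photometric error of one effective key point between the reference
    key frame I_j (pixel k_ref) and the m-th local key frame I_m (pixel k_m),
    using exposure times and affine photometric correction parameters."""
    def sample(img, uv):
        # Nearest-neighbour sampling for brevity; bilinear interpolation
        # would normally be used for sub-pixel coordinates.
        u, v = int(round(uv[0])), int(round(uv[1]))
        return float(img[v, u])

    gain = (t_m * np.exp(a_m)) / (t_j * np.exp(a_j))
    residual = (sample(I_m, k_m) - b_m) - gain * (sample(I_j, k_ref) - b_j)
    return w_k * abs(residual)
```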
Optionally, obtaining an estimated pose of the device to be positioned according to the pose of the device to be positioned in the reference key frame and the depth information corresponding to the reference key frame includes:
acquiring the luminosity error of an image to be positioned according to the depth information corresponding to the reference key frame;
acquiring the relative pose of the equipment to be positioned in the image to be positioned relative to the equipment to be positioned in the reference key frame according to the luminosity error of the image to be positioned;
and obtaining the estimated pose of the equipment to be positioned according to the relative pose and the pose of the equipment to be positioned in the reference key frame. Therefore, for an environment with sparse texture, an accurate initial estimation pose can be obtained, and the situation of positioning failure is avoided.
Optionally, the estimated pose $T_i$ of the device to be positioned is obtained by computing $T_i = T_{ji} T_j$;
wherein $T_i$ is the estimated pose of the device to be positioned, $T_j$ is the pose of the device to be positioned in the reference key frame, and $T_{ji}$ is the relative pose of the device to be positioned in the image to be positioned with respect to the device to be positioned in the reference key frame.
Optionally, the relative pose $T_{ji}$ is obtained by minimizing the photometric error of the image to be positioned, for example by the Gauss-Newton method; the photometric error of the image to be positioned is the sum of the photometric errors of the key points on the image to be positioned;
optionally, the photometric error of the image to be positioned is obtained by computing
$$E = \sum_{s \in S_n} E_s,$$
wherein $S_n$ is the set of second effective key points on the image to be positioned, $E_s$ is the photometric error of the second effective key point $s$ on the image to be positioned, and $s$ is a positive integer.
Optionally, the photometric error of the second effective key point $s$ is obtained by computing
$$E_s = \omega_s \left\| \left( I_i[P_s] - b_i \right) - \frac{t_i e^{a_i}}{t_j e^{a_j}} \left( I_j[P_{s'}] - b_j \right) \right\|,$$
wherein $\omega_s$ is the weight of the second effective key point $s$, $t_i$ is the exposure time of the image to be positioned, $a_i$ and $b_i$ are the first and second photometric correction parameters of the image to be positioned, $t_j$ is the exposure time of the reference key frame, $a_j$ and $b_j$ are the first and second photometric correction parameters of the reference key frame, $i$ and $j$ are positive integers, $e$ is the natural constant, $I_j[P_{s'}]$ is the pixel gray value of the second effective key point $s$ in the reference key frame, $P_{s'}$ is the coordinate of the second effective key point $s$ in the reference key frame, $I_i[P_s]$ is the pixel gray value of the second effective key point $s$ in the image to be positioned, and $P_s$ is the coordinate of the second effective key point $s$ projected into the image to be positioned.
Optionally, the coordinate of the second effective key point $s$ in the image to be positioned is obtained by computing
$$P_s = \Pi_c\left( T_{ji}\, \Pi_c^{-1}(P_{s'}, d_j) \right),$$
wherein $\Pi_c$ is the internal reference matrix of the depth camera of the device to be positioned, $T_{ji}$ is the relative pose, $d_j$ is the depth information of the second effective key point $s$ in the reference key frame, and $P_{s'}$ is the coordinate of the second effective key point $s$ in the reference key frame. Optionally, the relative pose is calculated according to a uniform motion model.
Therefore, the relative pose is obtained from the photometric error of the image to be positioned, and the estimated pose of the device to be positioned in the image to be positioned is obtained directly from the relative pose and the pose of the reference key frame, which is more robust in sparse-texture scenes and provides a more accurate estimated pose for the subsequent pose optimization.
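As a small illustration of this composition step, with poses represented as 4x4 homogeneous matrices (a representation assumed here for illustration; the application does not prescribe one):

```python
import numpy as np

def compose_estimated_pose(T_ji, T_j):
    """T_i = T_ji @ T_j: estimated pose of the device in the image to be
    positioned, from the frame-to-reference relative pose T_ji and the
    reference key frame pose T_j (both 4x4 homogeneous matrices)."""
    return T_ji @ T_j

# Usage: with T_j the reference key frame pose and T_ji obtained by
# minimizing the photometric error (e.g. by Gauss-Newton), T_i seeds the
# feature-based pose optimization that follows.
T_j = np.eye(4)
T_ji = np.eye(4)
T_i = compose_estimated_pose(T_ji, T_j)
```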
Optionally, optimizing the estimated pose according to a reprojection error and a depth error of the first map point and the first feature point, to obtain a first optimized pose of the device to be positioned, including:
obtaining a reprojection error and a depth error of the first map point and the first feature point according to the estimated pose;
and minimizing the reprojection error and the depth error to obtain the first optimized pose. Therefore, by minimizing the reprojection error and the depth error of the first map point and the first feature point, the accuracy of the obtained pose of the device to be positioned can be improved, in particular in environments with illumination changes, and the robustness of the algorithm is enhanced.
Optionally, acquiring a first map point according to the image to be positioned includes: extracting a second feature point of the image to be positioned through a feature extraction thread; and restoring the first map point corresponding to the second feature point according to the estimated pose of the image to be positioned and the depth information of the image to be positioned.
Optionally, the first feature point corresponding to the first map point on the image to be positioned is obtained through a matching algorithm. Optionally, the first map points are projected onto the image to be positioned according to the estimated pose $T_i$ to obtain the projection points of the first map points on the image to be positioned, and the second feature point closest to each projection point on the image to be positioned is taken as the first feature point corresponding to that first map point.
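A simple sketch of this projection-based association is given below; the function name and the use of a plain nearest-neighbour search (without a descriptor check or a limited search radius) are illustrative assumptions:

```python
import numpy as np

def match_map_points(map_points_w, features_uv, T_i, K):
    """Associate first map points with first feature points by projecting
    each 3D map point with the estimated pose T_i (4x4) and intrinsics K
    (3x3), then taking the nearest extracted feature point.
    map_points_w: iterable of 3D world points; features_uv: Nx2 array."""
    matches = []
    for idx, x_w in enumerate(map_points_w):
        x_c = (T_i @ np.append(x_w, 1.0))[:3]          # point in camera frame
        if x_c[2] <= 0:                                # behind the camera
            continue
        uv = (K @ (x_c / x_c[2]))[:2]                  # projected pixel coordinate
        d2 = np.sum((features_uv - uv) ** 2, axis=1)   # squared distances to all features
        j = int(np.argmin(d2))
        matches.append((idx, j))
    return matches
```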
Optionally, a reprojection error between the first map point p of the image to be located and the first feature point is a distance error between a position of a projection point of the first map point p on the image to be located and a position of the first feature point:
optionally, the reprojection error $e_r$ of the first map point $p$ and the first feature point of the image to be positioned is obtained by computing
$$e_r = \left\| u_p - \Pi_c(T_i x_w) \right\|_{\Sigma_p}^2,$$
wherein $u_p$ is the pixel coordinate of the first feature point on the image to be positioned $I_i$, $x_w$ is the three-dimensional world coordinate of the first map point $p$, $\Pi_c$ is the internal reference matrix of the depth camera of the device to be positioned, $T_i$ is the estimated pose of the device to be positioned, and $\Sigma_p$ is the feature variance of the first map point $p$.
In some embodiments, the acquired depth map is sent to a depth network model trained in advance for optimization, and depth information is obtained.
Optionally, the depth error between the first map point $p$ and the first feature point is obtained by computing
$$e_d = \left\| d_p - \hat{d}_p \right\|^2,$$
wherein $d_p$ is the depth information of the first feature point on the image to be positioned $I_i$, and $\hat{d}_p$ is the depth information of the projection point of the first map point $p$ on the image to be positioned.
Optionally, the depth information of the projection point of the first map point $p$ on the image to be positioned is obtained by computing
$$\hat{d}_p = \left( T_i x_w \right)_z,$$
wherein $T_i$ is the estimated pose of the device to be positioned, $x_w$ is the three-dimensional world coordinate of the first map point $p$, and $(\cdot)_z$ denotes taking the third element of the vector, i.e., the z-axis coordinate.
Optionally, the reprojection errors and the depth errors of all first map points of the image to be positioned and the corresponding first feature points are minimized simultaneously through a bundle adjustment (BA) algorithm, so as to obtain the first optimized pose $T_i'$ of the device to be positioned.
Optionally, the first optimized pose $T_i'$ of the device to be positioned is obtained by computing
$$T_i' = \arg\min_{T_i} \sum_{p \in P} w_p\, e_p^{T} e_p,$$
wherein $e_p = (e_r, e_d)^T$, $P$ is the set of first map points on the image to be positioned, $p$ is a positive integer, $T$ denotes the transpose of a matrix, and $w_p$ is the error weight of the first map point $p$.
Therefore, the first map points of the image to be positioned are obtained from the extracted second feature points of the image to be positioned, and the estimated pose of the device to be positioned is optimized according to the reprojection error and the depth error of the first map points and the first feature points to obtain the first optimized pose of the device to be positioned. This improves robustness in scenes with illumination changes and large inter-frame motion, and further improves the accuracy of the obtained pose of the device to be positioned, so that positioning can be performed more accurately.
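The combined objective minimized in this step can be sketched as a cost function over the matched points; the residual weighting below omits the feature variance $\Sigma_p$ for brevity and is an illustrative assumption rather than the exact form used:

```python
import numpy as np

def pose_cost(T_i, map_points_w, feat_uv, feat_depth, K, weights):
    """Total cost over all matched first map points: weighted sum of the
    reprojection error and the depth error for the current pose T_i (4x4).
    A solver (e.g. Gauss-Newton within bundle adjustment) would minimize this."""
    total = 0.0
    for x_w, u_p, d_p, w_p in zip(map_points_w, feat_uv, feat_depth, weights):
        x_c = (T_i @ np.append(x_w, 1.0))[:3]
        uv = (K @ (x_c / x_c[2]))[:2]
        e_r = np.sum((u_p - uv) ** 2)   # reprojection error (Sigma_p weighting omitted)
        e_d = (d_p - x_c[2]) ** 2       # depth error: observed vs projected depth
        total += w_p * (e_r + e_d)
    return total
```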
According to the method for simultaneous localization and mapping provided by the embodiments of the present disclosure, the estimated pose of the camera is obtained directly from the reference key frame, and the estimated pose is then optimized by a feature-point-based method, so that positioning can still be performed accurately in scenes with sparse texture and changing illumination, and the robustness of the algorithm is enhanced.
Optionally, the method for simultaneous localization and mapping further comprises:
constructing a navigation map according to the pose of the equipment to be positioned in the first image to be positioned and the depth information of the first image to be positioned;
and updating the navigation map.
Optionally, the three-dimensional point cloud information of the scene is recovered according to the pose of the device to be positioned in the first image to be positioned and the depth information of the first image to be positioned; three-dimensional points on the ground are filtered out by ground plane segmentation; three-dimensional points under which the device to be positioned can pass are filtered out according to the actual height of the device to be positioned, and three-dimensional points on some moving objects are filtered out through the statistical probability over a continuous preset number of images; the remaining three-dimensional points are projected onto the ground plane to obtain the grid occupancy of the navigation map; based on the grid occupancy probability of the navigation map, grids whose occupancy is greater than a fifth set threshold Thocc are taken as obstacle grids, grids whose occupancy is smaller than a sixth set threshold Ths are taken as obstacle-free grids, and grids whose occupancy is -1 are unknown areas, thereby obtaining the navigation map.
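A simplified sketch of this grid construction is given below; the counting-based occupancy, the cell classification values, and all default thresholds are illustrative assumptions (the application describes probabilistic grid occupancy with thresholds Thocc and Ths):

```python
import numpy as np

def build_occupancy_grid(points_w, device_height, cell_size, grid_shape,
                         ground_eps=0.05, Th_occ=0.65, Th_s=0.35):
    """Project filtered 3D points onto the ground plane and classify cells
    as obstacle (1), free (0), or unknown (-1)."""
    counts = np.zeros(grid_shape, dtype=np.int32)
    for x, y, z in points_w:
        if z < ground_eps:        # ground points (plane segmentation stand-in)
            continue
        if z > device_height:     # points the device can pass under
            continue
        i, j = int(x / cell_size), int(y / cell_size)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            counts[i, j] += 1

    # Crude occupancy score; cells with no points stay at -1 (unknown).
    occupancy = np.where(counts > 0, np.minimum(counts / 10.0, 1.0), -1.0)
    grid = np.full(grid_shape, -1, dtype=np.int8)
    grid[occupancy > Th_occ] = 1                      # obstacle
    grid[(occupancy >= 0) & (occupancy < Th_s)] = 0   # obstacle-free
    return grid
```

Cells whose score falls between the two thresholds are left as unknown in this sketch; a probabilistic implementation would instead accumulate log-odds per cell.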
Optionally, the pose of the first image to be positioned is a preset pose.
Optionally, the depth information of the first image to be located is obtained from its corresponding depth map. Optionally, the depth map of the first image to be positioned is placed into a preset depth network model for optimization, so as to obtain depth information of the first image to be positioned.
Optionally, the updating the navigation map includes:
selecting an image to be positioned which meets a fourth set condition as a global key frame;
acquiring the pose of the equipment to be positioned in the global key frame;
and updating the navigation map according to the pose of the equipment to be positioned in the global key frame and the depth information of the global key frame.
Optionally, the selected global key frames are sequentially sent to the local feature map for optimization.
Optionally, the fourth set condition includes: comparing the number of first map points of the image to be positioned with the number of first map points in the reference key frame satisfies a fifth set condition, for example, the number of first map points of the image to be positioned is less than 75% of the number of first map points in the reference key frame and is more than n;
optionally, the fourth setting condition further includes any one of the following conditions, that is, an image to be positioned satisfying any one of the following conditions is selected as the global key frame:
the number of frames by which the image to be positioned is separated from the reference key frame is greater than a seventh set threshold Thmax; or,
the number of first map points of the image to be positioned is less than an eighth set threshold Thmin, and the number of second feature points on the image to be positioned that have depth information but do not correspond to a first map point is greater than a ninth set threshold Thdepth; or,
in the case that the local map optimization thread can receive a new global key frame, comparing the number of first map points of the image to be positioned with the number of first map points of the reference key frame satisfies a sixth set condition, for example: the number of first map points of the image to be positioned is less than 25% of the number of first map points of the reference key frame.
Therefore, a proper global key frame can be selected and proper first map points can be recovered, so that the localization and the constructed map model are more accurate.
Optionally, establishing a common-view relationship graph for the global key frame comprises: in the case that a first map point corresponding to a first feature point exists in the global key frame, adding the global key frame into the observation key frame set of that first map point and updating the attribute information of the first map point; acquiring, according to the first map points of the global key frame, the observation key frames that share first map points with the global key frame; taking all observation key frames that share first map points with the global key frame as the common-view key frames of the global key frame; and establishing the common-view relationship graph of the global key frame according to the common-view key frames.
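The common-view relationship can be sketched as a graph whose edges connect global key frames that observe common first map points, weighted by the number of shared points; the data layout below is an assumption for illustration:

```python
from collections import defaultdict

def build_covisibility_graph(keyframe_map_points):
    """Build a common-view relationship graph.
    keyframe_map_points: dict {keyframe_id: set(map_point_id)}.
    Returns graph[a][b] = number of map points shared by key frames a and b."""
    # Observation key frame set per map point.
    observers = defaultdict(set)
    for kf, points in keyframe_map_points.items():
        for p in points:
            observers[p].add(kf)

    # Edges weighted by the number of shared first map points.
    graph = defaultdict(lambda: defaultdict(int))
    for obs in observers.values():
        obs = sorted(obs)
        for i in range(len(obs)):
            for j in range(i + 1, len(obs)):
                graph[obs[i]][obs[j]] += 1
                graph[obs[j]][obs[i]] += 1
    return graph
```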
Optionally, according to the pose of the global key frame and the depth information of the global key frame, restoring a corresponding first map point for a second feature point which does not correspond to the first map point, and adding the restored first map point as the second map point into the local feature map.
Optionally, for a second feature point that is not within the visual range of the depth camera of the device to be positioned, the corresponding first map point is recovered by triangulation, and the recovered first map point is added into the local feature map as a second map point. Optionally, first map points satisfying the epipolar geometric constraint are acquired from the second feature points of the global key frame and its common-view key frames; the three-dimensional coordinates of the first map points satisfying the epipolar geometric constraint are recovered by triangulation; among the first map points satisfying the epipolar geometric constraint, those satisfying a seventh set condition are selected as recovered first map points; optionally, the seventh set condition is that the projection error in the global key frame and its common-view key frames is less than a tenth set threshold.
In some embodiments, unstable second map points are deleted from the local feature map, so that the remaining stable second map points better represent the currently observed environmental features; a sketch of this check follows the conditions below. Optionally, a second map point satisfying the following conditions is an unstable second map point:
the observation probability of the second map point is smaller than an eleventh set threshold, for example 0.25, or the number of global key frames observing the second map point is smaller than a twelfth set threshold, for example 3; and,
the difference between the depth of the second map point and the depths of its surrounding 8 neighborhood points is unstable, that is, the variance of the difference over a continuous preset number of images is greater than a thirteenth set threshold Thvar.
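A minimal sketch of this stability check, assuming the observation probability, the number of observing global key frames, and the per-image depth differences to the 8-neighbourhood are already available; the default thresholds are illustrative only:

```python
import numpy as np

def is_unstable_map_point(obs_probability, num_observing_keyframes,
                          depth_diff_history, Th_prob=0.25, Th_kf=3, Th_var=0.01):
    """Flag a second map point as unstable per the two joint conditions above.
    depth_diff_history: per-image differences between the point's depth and
    its 8-neighbourhood depths over a preset number of consecutive images."""
    weakly_observed = (obs_probability < Th_prob) or (num_observing_keyframes < Th_kf)
    depth_unstable = np.var(np.asarray(depth_diff_history)) > Th_var
    return weakly_observed and depth_unstable
```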
Optionally, the obtaining of the pose of the device to be positioned in the global key frame includes: and obtaining the pose of the equipment to be positioned in the global key frame and the three-dimensional world coordinate of the second map point in the local feature map according to the reprojection error and the depth error of the second map point in the local feature map in the global key frame. Optionally, the second map points in the local feature map further include all the first map points of the global key frame and the common-view key frame thereof, that is, all the first map points of the global key frame and the common-view key frame thereof are also taken as the second map points in the local feature map.
Optionally, the reprojection errors and the depth errors of all second map points in the local feature map in the global key frame are minimized to obtain the pose $T_h$ of the device to be positioned in the global key frame and the three-dimensional world coordinates $x'_w$ of the second map points in the local feature map; optionally, the reprojection errors and the depth errors of all second map points $p'$ in the local feature map in the global key frame are minimized through a bundle adjustment (BA) algorithm;
optionally, by computing
$$\{T_h, x'_w\} = \arg\min \sum_{p' \in P'} \sum_{h \in O_{p'}(p')} w_{hp'}\, e_{hp'}^{T} e_{hp'},$$
the pose $T_h$ of the device to be positioned in the global key frame and the three-dimensional world coordinates of the second map point $p'$ are obtained;
wherein $T_h$ is the pose of the device to be positioned in the $h$-th global key frame, $x'_w$ is the three-dimensional world coordinate of the second map point $p'$, $O_{p'}(p')$ is the set of global key frames in which the point $p'$ in the local feature map is observed, $P'$ is the set of all second map points in the local feature map, $h$ and $p'$ are positive integers, $T$ denotes the transpose of a matrix, $w_{hp'}$ is the error weight of the second map point $p'$, and $e_{hp'} = (e_{hr}, e_{hd})^T$, where $e_{hr}$ is the reprojection error of the second map point $p'$ in the $h$-th global key frame and $e_{hd}$ is the depth error of the second map point $p'$ in the $h$-th global key frame;
optionally, the depth error $e_{hd}$ of the second map point $p'$ in the $h$-th global key frame is obtained by computing
$$e_{hd} = \left\| d_{hp'} - \left( T_h x'_w \right)_z \right\|^2,$$
wherein $d_{hp'}$ is the depth information of the first feature point corresponding to the second map point $p'$ in the $h$-th global key frame, and $(\cdot)_z$ denotes taking the third element of the vector, i.e., the z-axis coordinate value;
optionally, the reprojection error $e_{hr}$ of the second map point $p'$ in the $h$-th global key frame is obtained by computing
$$e_{hr} = \left\| u_{hp'} - \Pi_c(T_h x'_w) \right\|_{\Sigma_{p'}}^2,$$
wherein $u_{hp'}$ is the pixel coordinate of the first feature point corresponding to the second map point $p'$ in the $h$-th global key frame, $\Pi_c$ is the internal reference matrix of the depth camera of the device to be positioned, and $\Sigma_{p'}$ is the feature variance of the second map point $p'$. Optionally, the feature variance $\Sigma_{p'}$ of the second map point $p'$ is calculated from the number of the image pyramid layer from which the second feature point is extracted and the corresponding scaling coefficient.
Optionally, redundant global key frames or common-view key frames are deleted. Optionally, if more than 90% of the second map points in the local feature map observed on a global key frame or common-view key frame are also observed by two or more other global key frames or common-view key frames, that global key frame or common-view key frame is considered redundant and is deleted from the global map.
Thus, the global key frame is optimized through the local feature map, and the positioning precision, the global relocation and the like can be improved.
In some embodiments, where the image to be located is selected as both a local key frame and a global key frame, the frame is first optimized within a sliding window and then optimized in a put-in local feature map.
Therefore, a new local key frame is selected from the image to be positioned and optimized in the sliding window, a new global key frame is selected and optimized in the local feature map, the two algorithms are independently used for optimizing the key frame, the two algorithms are not easily interfered by noise, and the robustness of the algorithms is enhanced.
Optionally, updating the navigation map according to the pose of the device to be positioned in the global key frame and the depth information of the global key frame includes: recovering the three-dimensional point cloud information of the scene according to the pose of the device to be positioned in the global key frame and the depth information of the global key frame; filtering out three-dimensional points on the ground by ground plane segmentation; filtering out three-dimensional points under which the device to be positioned can pass according to the actual height of the device to be positioned, and filtering out points on some moving objects through the statistical probability over a continuous preset number of images; projecting the remaining three-dimensional points onto the ground plane and updating the grid occupancy of the navigation map; based on the grid occupancy probability of the navigation map, taking grids whose occupancy is greater than a fifth set threshold Thocc as obstacle grids, grids whose occupancy is smaller than a sixth set threshold Ths as obstacle-free grids, and grids whose occupancy is -1 as unknown areas, and updating the grid occupancy of the navigation map, thereby obtaining the updated navigation map.
Optionally, the depth information of the global key frame is obtained through a corresponding depth map. Optionally, the depth map corresponding to the global key frame is placed in a preset depth network model for optimization, so as to obtain the depth information of the global key frame.
Optionally, after the pose of the device to be positioned in the global key frame is obtained, the method further includes:
performing closed loop detection on the global key frame;
under the condition that a closed-loop key frame exists, closed-loop fusion processing is carried out on the closed-loop key frame, the pose of the equipment to be positioned in the global key frame is optimized, and a second optimized pose of the equipment to be positioned in the global key frame is obtained;
and updating the navigation map according to the second optimization pose and the depth information of the global key frame.
Optionally, the selected global key frame is sent to a closed-loop detection thread for closed-loop detection, and whether the closed-loop key frame exists is searched. Optionally, according to a Bag-of-words model (BOW), a keyframe with a similarity greater than a fourteenth set threshold with the selected global keyframe is searched from the historical keyframes as a closed-loop keyframe.
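A schematic stand-in for the bag-of-words lookup is given below; cosine similarity over BoW vectors replaces the actual BoW score for brevity, and the data layout is an assumption:

```python
import numpy as np

def find_loop_candidates(query_bow, history_bows, sim_threshold):
    """Return the ids of historical global key frames whose bag-of-words
    vector is similar enough to the query key frame's vector.
    query_bow: 1-D numpy array; history_bows: dict {keyframe_id: 1-D array}."""
    candidates = []
    q = query_bow / (np.linalg.norm(query_bow) + 1e-12)
    for kf_id, h in history_bows.items():
        h_n = h / (np.linalg.norm(h) + 1e-12)
        if float(q @ h_n) > sim_threshold:   # similarity above the set threshold
            candidates.append(kf_id)
    return candidates
```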
Optionally, in a case that a closed-loop key frame is not detected, updating a closed-loop detection condition, and waiting for a next global key frame;
optionally, in the case that a closed-loop key frame exists, performing closed-loop fusion processing on the closed-loop key frame comprises: executing a pose graph optimization algorithm based on the Sim3 constraint on the closed-loop key frame to perform the closed-loop fusion processing, so that the accumulated error of the local feature map optimization process is eliminated.
Optionally, optimizing the pose of the device to be positioned in the global key frame includes: performing global optimization on the global key frames, and optimizing, through a global BA algorithm, the poses of all global key frames in the local feature map and the position information of the second map points, i.e., the three-dimensional world coordinate values of the second map points, to obtain the second optimized pose of the device to be positioned in the global key frame.
In some embodiments, after the global BA algorithm is optimized, the pose-optimized global keyframe is placed back into the sliding window, i.e., the sliding window is initialized, thereby eliminating the accumulated error in the sliding window.
Optionally, updating the navigation map according to the second optimized pose and the depth information of the global key frame includes: sending the newly selected global key frame to the navigation map construction thread and updating the navigation map. Optionally, the three-dimensional point cloud information of the scene is recovered according to the second optimized pose of the device to be positioned in the global key frame and the depth information of the global key frame; three-dimensional points on the ground are filtered out by ground plane segmentation; three-dimensional points under which the device to be positioned can pass are filtered out according to the actual height of the device to be positioned, and points on some moving objects are filtered out through the statistical probability over a continuous preset number of images; the remaining three-dimensional points are projected onto the ground plane and the grid occupancy of the navigation map is updated; based on the grid occupancy probability of the navigation map, grids whose occupancy is greater than a fifth set threshold Thocc are taken as obstacle grids, grids whose occupancy is smaller than a sixth set threshold Ths are taken as obstacle-free grids, and grids whose occupancy is -1 are unknown areas, and the grid occupancy of the navigation map is updated, thereby updating the navigation map.
Therefore, the image depth information and the pose of the global key frame are obtained through the depth camera with the positioning equipment, the expandable two-dimensional grid navigation map is built in real time, and the navigation map information is updated in real time after the global optimization is completed, so that the real-time performance and the accuracy of the navigation map are guaranteed.
As shown in fig. 2, an apparatus for simultaneous localization and mapping according to an embodiment of the present disclosure includes a processor (processor) 100 and a memory (memory) 101 storing program instructions. Optionally, the apparatus may also include a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with each other via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call program instructions in the memory 101 to perform the method for simultaneous localization and mapping of the above embodiments.
Further, the program instructions in the memory 101 may be implemented in the form of software functional units and stored in a computer readable storage medium when sold or used as a stand-alone product.
The memory 101, which is a computer-readable storage medium, may be used for storing software programs, computer-executable programs, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and data processing, i.e., implements the methods for simultaneous localization and mapping in the above embodiments, by executing program instructions/modules stored in the memory 101.
The memory 101 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal device, and the like. In addition, the memory 101 may include a high-speed random access memory, and may also include a nonvolatile memory.
By adopting the device for simultaneous positioning and map building provided by the embodiment of the disclosure, the estimated pose of the device to be positioned is directly obtained through the image to be positioned, then the first map point is obtained from the image to be positioned, and the estimated pose of the device to be positioned is optimized according to the reprojection error and the depth error of the first map point and the corresponding first feature point, so as to obtain the optimized pose. The acquired pose can be more accurate, so that the positioning accuracy of the equipment to be positioned is improved, and a more accurate navigation map can be obtained when the navigation map is constructed. And the image depth information and the pose of the global key frame are acquired by the depth camera with the positioning equipment, the expandable two-dimensional grid navigation map is constructed, the navigation map information is updated in real time, and the real-time performance and the accuracy of the navigation map are ensured.
The embodiment of the disclosure provides equipment to be positioned, which comprises the device for simultaneously positioning and mapping.
Optionally, the device to be positioned is a robot or the like.
By adopting the device provided by the embodiment of the disclosure, the estimation pose of the device to be positioned is directly obtained through the image to be positioned, then the first map point is obtained from the image to be positioned, and the estimation pose of the device to be positioned is optimized according to the reprojection error and the depth error of the first map point and the corresponding first characteristic point, so as to obtain the optimized pose. The acquired pose can be more accurate, so that the positioning accuracy of the equipment to be positioned is improved, and a more accurate navigation map can be obtained when the navigation map is constructed. And the image depth information and the pose of the global key frame are acquired by the depth camera with the positioning equipment, the expandable two-dimensional grid navigation map is constructed in real time, and the navigation map information is updated in real time after the global optimization is completed, so that the real-time performance and the accuracy of the navigation map are ensured.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described method for simultaneous localization and mapping.
Embodiments of the present disclosure provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described method for simultaneous localization and mapping.
The computer-readable storage medium described above may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The technical solution of the embodiments of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes one or more instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method of the embodiments of the present disclosure. And the aforementioned storage medium may be a non-transitory storage medium comprising: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes, and may also be a transient storage medium.
The above description and drawings sufficiently illustrate embodiments of the disclosure to enable those skilled in the art to practice them. Other embodiments may incorporate structural, logical, electrical, process, and other changes. The examples merely typify possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. Furthermore, the words used in the specification are words of description only and are not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this application is meant to encompass any and all possible combinations of one or more of the associated listed. Furthermore, the terms "comprises" and/or "comprising," when used in this application, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising an …" does not exclude the presence of other like elements in a process, method or apparatus that comprises the element. In this document, each embodiment may be described with emphasis on differences from other embodiments, and the same and similar parts between the respective embodiments may be referred to each other. For methods, products, etc. of the embodiment disclosures, reference may be made to the description of the method section for relevance if it corresponds to the method section of the embodiment disclosure.
Those of skill in the art would appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software may depend upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed embodiments. It can be clearly understood by the skilled person that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments disclosed herein, the disclosed methods, products (including but not limited to devices, apparatuses, etc.) may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units may be merely a logical division, and in actual implementation, there may be another division, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to implement the present embodiment. In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. In the description corresponding to the flowcharts and block diagrams in the figures, operations or steps corresponding to different blocks may also occur in different orders than disclosed in the description, and sometimes there is no specific order between the different operations or steps. For example, two sequential operations or steps may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims (10)

1. A method for simultaneous localization and mapping, comprising:
acquiring an image to be positioned and depth information corresponding to the image to be positioned;
acquiring an estimated pose of a device to be positioned according to the image to be positioned and the depth information corresponding to the image to be positioned;
acquiring a first map point according to the image to be positioned;
acquiring a first feature point corresponding to the first map point on the image to be positioned;
and optimizing the estimated pose according to a reprojection error and a depth error of the first map point and the first feature point to obtain a first optimized pose.
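The pose-optimization step of claim 1 can be illustrated with a small numerical sketch. The Python fragment below is illustrative only and is not the claimed implementation: the intrinsic matrix values, the Huber loss, the depth-error weight, and the axis-angle pose parameterization are assumptions made for the sketch, and scipy is used in place of the graph-optimization back ends commonly found in SLAM systems.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Illustrative pinhole intrinsics (fx, fy, cx, cy are made-up values).
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])

def residuals(pose, map_pts, feat_uv, feat_depth, depth_weight=1.0):
    # pose = [rotation vector (3), translation (3)], mapping world -> camera.
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    t = pose[3:]
    pc = map_pts @ R.T + t                           # first map points in the camera frame
    u = K[0, 0] * pc[:, 0] / pc[:, 2] + K[0, 2]      # projected pixel coordinates
    v = K[1, 1] * pc[:, 1] / pc[:, 2] + K[1, 2]
    reproj = np.concatenate([u - feat_uv[:, 0], v - feat_uv[:, 1]])   # reprojection error
    depth = depth_weight * (pc[:, 2] - feat_depth)                    # depth error
    return np.concatenate([reproj, depth])

def optimize_pose(estimated_pose, map_pts, feat_uv, feat_depth):
    # Jointly minimizes the reprojection and depth errors to obtain the first optimized pose.
    result = least_squares(residuals, estimated_pose,
                           args=(map_pts, feat_uv, feat_depth), loss="huber")
    return result.x

In this sketch the depth error simply compares the depth of each first map point in the camera frame with the depth measured at the corresponding first feature point; a real system would also weight the two error terms by their measurement covariances.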
2. The method of claim 1, wherein obtaining the estimated pose of the device to be positioned from the image to be positioned and its corresponding depth information comprises:
acquiring depth information corresponding to the reference key frame and the pose of the device to be positioned in the reference key frame according to the image to be positioned and the depth information corresponding to the image to be positioned;
and obtaining the estimated pose of the device to be positioned according to the pose of the device to be positioned in the reference key frame and the depth information corresponding to the reference key frame.
3. The method of claim 2, wherein obtaining the depth information corresponding to the reference key frame and the pose of the device to be positioned in the reference key frame from the image to be positioned and the depth information corresponding thereto comprises:
taking a first image to be positioned as a reference key frame, and taking the depth information of the first image to be positioned as the depth information of the reference key frame;
and the pose of the device to be positioned in the reference key frame is a preset pose.
4. The method of claim 2, wherein obtaining the depth information corresponding to the reference key frame and the pose of the device to be positioned in the reference key frame from the image to be positioned and the depth information corresponding thereto comprises:
selecting an image to be positioned meeting a first set condition as a local key frame, and taking the depth information of the image to be positioned meeting the first set condition as the depth information of the local key frame;
sequentially putting the local key frames into a sliding window;
taking the pixel points meeting a second set condition in the sliding window as first effective key points;
comparing the pixel points in each local key frame with a third set condition, and determining a reference key frame and depth information corresponding to the reference key frame according to a comparison result;
acquiring photometric errors of the first effective key points on all local key frames according to the depth information of the reference key frame;
and obtaining the pose of the device to be positioned in the reference key frame according to the photometric errors of the first effective key points on all local key frames.
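A rough sketch of the photometric part of claim 4 is given below. It is illustrative only: the window size, the gradient threshold standing in for the "second set condition", the nearest-neighbour intensity lookup, and all function names are assumptions for the sketch; a practical system would use bilinear interpolation and robust weighting.

import numpy as np
from collections import deque

window = deque(maxlen=7)   # sliding window of local key frames (size chosen arbitrarily)

def valid_key_points(gray, grad_thresh=30.0):
    # Pixels whose image gradient exceeds a threshold act as the first effective key points.
    gy, gx = np.gradient(gray.astype(np.float64))
    mag = np.hypot(gx, gy)
    v, u = np.nonzero(mag > grad_thresh)
    return np.stack([u, v], axis=1)

def photometric_error(ref_gray, ref_depth, uv, K, T_ref_to_kf, kf_gray):
    # Back-project a key point of the reference key frame with its depth,
    # transform it into another local key frame, and compare intensities.
    z = ref_depth[uv[1], uv[0]]
    x = (uv[0] - K[0, 2]) * z / K[0, 0]
    y = (uv[1] - K[1, 2]) * z / K[1, 1]
    p = T_ref_to_kf @ np.array([x, y, z, 1.0])
    if p[2] <= 0:
        return None
    u2 = int(round(K[0, 0] * p[0] / p[2] + K[0, 2]))
    v2 = int(round(K[1, 1] * p[1] / p[2] + K[1, 2]))
    if not (0 <= v2 < kf_gray.shape[0] and 0 <= u2 < kf_gray.shape[1]):
        return None
    return float(ref_gray[uv[1], uv[0]]) - float(kf_gray[v2, u2])

def window_photometric_error(ref_gray, ref_depth, uv, K, relative_poses):
    # Sum of squared photometric errors of one key point over all key frames in the window.
    errs = (photometric_error(ref_gray, ref_depth, uv, K, T, kf)
            for kf, T in zip(window, relative_poses))
    return sum(e * e for e in errs if e is not None)

The pose of the device at the reference key frame would then follow from minimizing the summed photometric errors of all such key points over every key frame in the sliding window.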
5. The method of claim 2, wherein obtaining the estimated pose of the device to be positioned according to the pose of the device to be positioned in the reference key frame and the depth information corresponding to the reference key frame comprises:
acquiring a photometric error of the image to be positioned according to the depth information corresponding to the reference key frame;
acquiring a relative pose of the device to be positioned in the image to be positioned with respect to the device to be positioned in the reference key frame according to the photometric error of the image to be positioned;
and obtaining the estimated pose of the device to be positioned according to the relative pose and the pose of the device to be positioned in the reference key frame.
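Claim 5 chains the tracked relative pose with the pose of the reference key frame. A minimal sketch follows, assuming 4x4 homogeneous transform matrices and a world-to-camera pose convention (both are assumptions, not stated in the claims):

import numpy as np

def compose_estimated_pose(T_ref_world_to_cam, T_ref_to_current):
    # Estimated pose of the current image = relative pose (reference -> current)
    # composed with the pose of the device at the reference key frame.
    return T_ref_to_current @ T_ref_world_to_cam

def invert_pose(T):
    # Inverse of a rigid-body transform without a general matrix inversion.
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

With this convention the photometric tracking of claim 5 only has to estimate the reference-to-current transform, which is usually close to identity and therefore easy to initialize.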
6. The method of claim 1, wherein optimizing the estimated pose according to a reprojection error and a depth error of the first map point and the first feature point to obtain a first optimized pose comprises:
obtaining a reprojection error and a depth error of the first map point and the first feature point according to the estimated pose;
and minimizing the reprojection error and the depth error to obtain the first optimized pose.
7. The method of any of claims 1 to 6, further comprising:
constructing a navigation map according to the pose of the device to be positioned in the first image to be positioned and the depth information of the first image to be positioned;
and updating the navigation map.
8. The method of claim 7, wherein updating the navigational map comprises:
selecting an image to be positioned which meets a fourth set condition as a global key frame;
acquiring the pose of the device to be positioned in the global key frame;
and updating the navigation map according to the pose of the device to be positioned in the global key frame and the depth information of the global key frame.
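Claims 7 and 8 build and update the navigation map from key-frame poses and depth images. The sketch below accumulates a simple world-frame point cloud; the sampling stride, the camera-to-world pose convention, and the class and function names are assumptions, and a practical navigation map would more likely be an occupancy grid or a TSDF volume:

import numpy as np

def backproject(depth, K, stride=8):
    # Convert a depth image into camera-frame 3D points (subsampled for brevity).
    v, u = np.mgrid[0:depth.shape[0]:stride, 0:depth.shape[1]:stride]
    z = depth[v, u]
    ok = z > 0
    u, v, z = u[ok], v[ok], z[ok]
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)

class NavigationMap:
    def __init__(self):
        self.points = []   # list of (N, 3) world-frame point blocks, one per key frame

    def update(self, T_world_from_cam, depth, K):
        # Insert the depth of a global key frame using its (possibly re-optimized) pose.
        pc = backproject(depth, K)
        pw = pc @ T_world_from_cam[:3, :3].T + T_world_from_cam[:3, 3]
        self.points.append(pw)

When a second optimized pose becomes available after loop closure (claim 9), the same update path can be re-run for the affected global key frames so the map stays consistent.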
9. The method of claim 8, wherein after obtaining the pose of the device to be positioned in the global key frame, the method further comprises:
performing closed-loop detection on the global key frame;
under the condition that a closed-loop key frame exists, performing closed-loop fusion processing on the closed-loop key frame, and optimizing the pose of the device to be positioned in the global key frame to obtain a second optimized pose of the device to be positioned in the global key frame;
and updating the navigation map according to the second optimized pose and the depth information of the global key frame.
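Claim 9 adds closed-loop detection and fusion. The fragment below is only a stand-in sketch: the cosine-similarity place recognition, the similarity threshold, the minimum index gap, and the rigid propagation of the loop correction are all assumptions; production systems typically use a bag-of-words vocabulary for detection and pose-graph optimization (for example with g2o or Ceres) to obtain the second optimized poses.

import numpy as np

def detect_loop(desc, kf_descs, min_gap=30, thresh=0.9):
    # Compare the current global key frame descriptor against earlier ones;
    # key frames closer than min_gap are skipped to avoid trivial matches.
    candidates = kf_descs[:-min_gap] if len(kf_descs) > min_gap else []
    best, best_sim = None, thresh
    for i, d in enumerate(candidates):
        sim = float(np.dot(desc, d) / (np.linalg.norm(desc) * np.linalg.norm(d) + 1e-12))
        if sim > best_sim:
            best, best_sim = i, sim
    return best

def propagate_correction(poses, start_idx, T_corrected):
    # Closed-loop fusion stand-in: the pose error found at the loop key frame is applied
    # to it and to every later global key frame, yielding their second optimized poses.
    T_err = T_corrected @ np.linalg.inv(poses[start_idx])
    return poses[:start_idx] + [T_err @ T for T in poses[start_idx:]]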
10. An apparatus for simultaneous localization and mapping comprising a processor and a memory storing program instructions, characterized in that the processor is configured to perform the method for simultaneous localization and mapping according to any one of claims 1 to 9 when executing the program instructions.
CN202010396437.7A 2020-05-12 2020-05-12 Method and device for simultaneous localization and mapping Active CN111583331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010396437.7A CN111583331B (en) 2020-05-12 2020-05-12 Method and device for simultaneous localization and mapping

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010396437.7A CN111583331B (en) 2020-05-12 2020-05-12 Method and device for simultaneous localization and mapping

Publications (2)

Publication Number Publication Date
CN111583331A true CN111583331A (en) 2020-08-25
CN111583331B CN111583331B (en) 2023-09-01

Family

ID=72122935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010396437.7A Active CN111583331B (en) 2020-05-12 2020-05-12 Method and device for simultaneous localization and mapping

Country Status (1)

Country Link
CN (1) CN111583331B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200043189A1 (en) * 2017-01-13 2020-02-06 Zhejiang University Simultaneous positioning and dense three-dimensional reconstruction method
CN107369183A (en) * 2017-07-17 2017-11-21 广东工业大学 Towards the MAR Tracing Registration method and system based on figure optimization SLAM
CN108986037A (en) * 2018-05-25 2018-12-11 重庆大学 Monocular vision odometer localization method and positioning system based on semi-direct method
CN108776976A (en) * 2018-06-07 2018-11-09 驭势科技(北京)有限公司 A kind of while positioning and the method, system and storage medium for building figure
US20200005487A1 (en) * 2018-06-28 2020-01-02 Ubtech Robotics Corp Ltd Positioning method and robot using the same
CN109544636A (en) * 2018-10-10 2019-03-29 广州大学 A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN110070615A (en) * 2019-04-12 2019-07-30 北京理工大学 A kind of panoramic vision SLAM method based on polyphaser collaboration
CN110866496A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and mapping method and device based on depth image
CN111080659A (en) * 2019-12-19 2020-04-28 哈尔滨工业大学 Environmental semantic perception method based on visual information

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
BAOFU FANG等: "A visual SLAM method based on point-line fusion in weak-matching scene" *
KE LIU等: "Semi-direct Tracking and Mapping with RGB-D Camera" *
QINGHUA YU等: "Hybrid-Residual-Based RGBD Visual Odometry" *
孔德慧 et al.: "An improved camera pose estimation method for SLAM systems" *
彭清漪 et al.: "A semi-direct monocular visual localization algorithm based on the fusion of photometric and point-line features" *
李虎民: "Research on monocular SLAM technology fusing the direct method and the feature-based method" *
王化友 et al.: "CFD-SLAM: a fast and robust SLAM system fusing the feature-based method and the direct method" *
王晓珊: "Research on SLAM technology based on binocular vision" *
谷晓琳 et al.: "An RGB-D SLAM algorithm based on semi-direct visual odometry" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381828A (en) * 2020-11-09 2021-02-19 Oppo广东移动通信有限公司 Positioning method, device, medium and equipment based on semantic and depth information
CN112381828B (en) * 2020-11-09 2024-06-07 Oppo广东移动通信有限公司 Positioning method, device, medium and equipment based on semantic and depth information

Also Published As

Publication number Publication date
CN111583331B (en) 2023-09-01

Similar Documents

Publication Publication Date Title
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN112304307B (en) Positioning method and device based on multi-sensor fusion and storage medium
US10953545B2 (en) System and method for autonomous navigation using visual sparse map
CN110631554B (en) Robot posture determining method and device, robot and readable storage medium
CN105843223B Three-dimensional mapping and obstacle-avoidance method for a mobile robot based on a spatial bag-of-words model
JP6854780B2 (en) Modeling of 3D space
WO2021035669A1 (en) Pose prediction method, map construction method, movable platform, and storage medium
US8913055B2 (en) Online environment mapping
CN108010081B RGB-D visual odometry method based on the Census transform and local graph optimization
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
WO2021119024A1 (en) Interior photographic documentation of architectural and industrial environments using 360 panoramic videos
CN108682027A VSLAM implementation method and system based on point and line feature fusion
JP2021518622A (en) Self-location estimation, mapping, and network training
CN109472828B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
CN112785705B (en) Pose acquisition method and device and mobile equipment
KR101869605B1 (en) Three-Dimensional Space Modeling and Data Lightening Method using the Plane Information
CN110599545B (en) Feature-based dense map construction system
Armagan et al. Learning to align semantic segmentation and 2.5D maps for geolocalization
CN110570474B (en) Pose estimation method and system of depth camera
CN112802096A (en) Device and method for realizing real-time positioning and mapping
CN110070578B (en) Loop detection method
CN111998862A (en) Dense binocular SLAM method based on BNN
Moreau et al. Crossfire: Camera relocalization on self-supervised features from an implicit representation
CN111583331B (en) Method and device for simultaneous localization and mapping
CN113269876B (en) Map point coordinate optimization method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant