CN111583331B - Method and device for simultaneous localization and mapping - Google Patents

Method and device for simultaneous localization and mapping

Info

Publication number
CN111583331B
Authority
CN
China
Prior art keywords
key frame
pose
image
map
equipment
Prior art date
Legal status: Active
Application number
CN202010396437.7A
Other languages
Chinese (zh)
Other versions
CN111583331A (en)
Inventor
谷晓琳 (Gu Xiaolin)
杨敏 (Yang Min)
张燚 (Zhang Yi)
曾峥 (Zeng Zheng)
刘科 (Liu Ke)
Current Assignee
Beijing Sunwise Space Technology Ltd
Original Assignee
Beijing Sunwise Space Technology Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Sunwise Space Technology Ltd filed Critical Beijing Sunwise Space Technology Ltd
Priority to CN202010396437.7A
Publication of CN111583331A
Application granted
Publication of CN111583331B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/50: Depth or shape recovery
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30181: Earth observation
    • G06T 2207/30184: Infrastructure
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C 21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C 21/005: Navigation with correlation of navigation data from several sources, e.g. map or contour matching
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems


Abstract

The application relates to the technical field of robots and intelligent equipment, and discloses a method for simultaneous localization and map construction, comprising: acquiring an image to be positioned and the depth information corresponding to the image to be positioned; acquiring an estimated pose of the equipment to be positioned according to the image to be positioned and the corresponding depth information; acquiring first map points according to the image to be positioned; acquiring the first feature points corresponding to the first map points on the image to be positioned; and optimizing the estimated pose according to the reprojection errors and depth errors of the first map points and the first feature points to obtain a first optimized pose. The method makes the acquired pose more accurate, thereby improving the positioning accuracy of the equipment to be positioned and yielding a more accurate navigation map when one is constructed. The application also discloses a device for simultaneous localization and map construction.

Description

Method and device for simultaneous localization and mapping
Technical Field
The application relates to the technical field of robots and intelligent equipment, for example to a method and a device for simultaneous positioning and map construction.
Background
Currently, SLAM (Simultaneous Localization And Mapping) technology is used to generate an environment map and autonomously localize a robot. It provides a solid foundation for subsequent robot path planning, autonomous exploration, and navigation, and is an important component of a robotic system. Vision-based SLAM technology can provide rich scene information and enable localization. Visual SLAM refers to SLAM technology with a camera as the primary sensor.
In the process of implementing the embodiments of the present disclosure, it is found that at least the following problems exist in the related art:
existing visual SLAM technology has difficulty in accurately acquiring the pose of the camera, which makes positioning inaccurate.
Disclosure of Invention
The following presents a simplified summary in order to provide a basic understanding of some aspects of the disclosed embodiments. This summary is not an extensive overview, and is intended to neither identify key/critical elements nor delineate the scope of such embodiments, but is intended as a prelude to the more detailed description that follows.
The embodiment of the disclosure provides a method and a device for simultaneous positioning and map construction, so that more accurate pose can be obtained in the simultaneous positioning and map construction process.
In some embodiments, the method comprises:
acquiring an image to be positioned and depth information corresponding to the image to be positioned;
acquiring an estimated pose of equipment to be positioned according to the image to be positioned and the corresponding depth information;
acquiring a first map point according to the image to be positioned;
acquiring a first characteristic point corresponding to the first map point on the image to be positioned;
and optimizing the estimated pose according to the reprojection errors and the depth errors of the first map points and the first feature points to obtain a first optimized pose.
In some embodiments, the apparatus comprises a processor and a memory storing program instructions, the processor being configured to perform the above-described method for simultaneous localization and mapping when the program instructions are executed.
The method and the device for simultaneous positioning and map construction provided by the embodiment of the disclosure can realize the following technical effects: the estimated pose of the equipment to be positioned is directly obtained through the image to be positioned, then a first map point is obtained from the image to be positioned, and the estimated pose of the equipment to be positioned is optimized according to the reprojection error and the depth error of the first map point and the corresponding first characteristic point, so that the optimized pose is obtained. The obtained pose is more accurate, so that the positioning accuracy of the equipment to be positioned is improved, and a more accurate navigation map can be obtained when the navigation map is constructed.
The foregoing general description and the following description are exemplary and explanatory only and are not restrictive of the application.
Drawings
One or more embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:
FIG. 1 is a schematic diagram of a method for simultaneous localization and mapping provided by embodiments of the present disclosure;
fig. 2 is a schematic diagram of an apparatus for simultaneous localization and mapping provided by an embodiment of the present disclosure.
Detailed Description
So that the manner in which the features and techniques of the disclosed embodiments can be understood in more detail, a more particular description of the embodiments of the disclosure, briefly summarized above, may be had by reference to the appended drawings, which are not intended to be limiting of the embodiments of the disclosure. In the following description of the technology, for purposes of explanation, numerous details are set forth in order to provide a thorough understanding of the disclosed embodiments. However, one or more embodiments may still be practiced without these details. In other instances, well-known structures and devices may be shown in simplified form in order to simplify the drawing.
The terms first, second and the like in the description and in the claims of the embodiments of the disclosure and in the above-described figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate in order to describe embodiments of the present disclosure. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion.
The term "plurality" means two or more, unless otherwise indicated.
In the embodiments of the present disclosure, the character "/" indicates that the objects before and after it are in an "or" relationship. For example, A/B represents: A or B.
The term "and/or" describes an association relationship between objects, meaning that three relationships may exist. For example, "A and/or B" represents: A alone, B alone, or A and B.
As shown in conjunction with fig. 1, an embodiment of the present disclosure provides a method for simultaneous localization and mapping, comprising:
step S101, obtaining an image to be positioned and depth information corresponding to the image to be positioned;
step S102, obtaining an estimated pose of equipment to be positioned according to an image to be positioned and corresponding depth information;
Step S103, acquiring a first map point according to the image to be positioned;
step S104, acquiring a first characteristic point corresponding to a first map point on an image to be positioned;
step S105, optimizing the estimated pose according to the re-projection error and the depth error of the first map point and the first feature point, and obtaining a first optimized pose of the equipment to be positioned.
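For orientation, steps S101 to S105 can be read as one tracking iteration. The following Python sketch is illustrative only; every function and parameter name in it is hypothetical and not taken from the patent.

```python
def track_frame(rgb, depth, map_points, estimate_pose, match_feature, optimize_pose):
    """One hypothetical tracking iteration covering steps S101-S105."""
    frame = {"rgb": rgb, "depth": depth}              # S101: image plus depth information
    T_est = estimate_pose(frame)                      # S102: estimated pose (direct method)
    points = map_points(frame)                        # S103: first map points
    matches = [(p, match_feature(frame, p, T_est))    # S104: corresponding first feature points
               for p in points]
    return optimize_pose(T_est, matches)              # S105: minimize reprojection + depth error
```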
By adopting the method for simultaneous localization and map construction provided by the embodiment of the disclosure, the estimated pose of the equipment to be localized is directly obtained through the image to be localized, then the first map point is obtained from the image to be localized, and the estimated pose of the equipment to be localized is optimized according to the reprojection error and the depth error of the first map point and the corresponding first feature point, so as to obtain the optimized pose. The acquired pose can be more accurate, so that the accuracy of positioning the equipment to be positioned is improved.
In some embodiments, the image to be positioned and its corresponding depth map are acquired by a depth camera of the device to be positioned. In some embodiments, the obtained depth map is sent to a preset depth network model for optimization, so as to obtain depth information, i.e. a depth value, of the image to be positioned. Optionally, the image to be positioned is a color map.
Optionally, the image to be located and the depth map are preprocessed. Optionally, the preprocessing includes distortion correction processing, alignment processing of the color map and the depth map, filtering processing of the depth map, and gray-scale conversion processing of the color map. In this way, the second feature point of the image to be positioned is conveniently extracted and the estimated pose is conveniently obtained.
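As a concrete illustration of such preprocessing, the sketch below uses OpenCV; the intrinsic matrix K, the distortion coefficients, and the filter kernel size are assumed inputs, and depth-to-color alignment is sensor-specific and therefore omitted.

```python
import cv2
import numpy as np

def preprocess(color, depth, K, dist_coeffs):
    """Distortion correction, depth-map filtering, and gray conversion (alignment omitted)."""
    color_u = cv2.undistort(color, K, dist_coeffs)         # distortion correction
    depth_f = cv2.medianBlur(depth.astype(np.float32), 5)  # simple depth-map filtering
    gray = cv2.cvtColor(color_u, cv2.COLOR_BGR2GRAY)       # gray-scale conversion
    return gray, depth_f
```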
Optionally, acquiring the estimated pose of the device to be positioned according to the image to be positioned and the depth information corresponding to the image to be positioned includes:
acquiring depth information corresponding to the reference key frame and the pose of the equipment to be positioned in the reference key frame according to the image to be positioned and the depth information corresponding to the image to be positioned;
and obtaining the estimated pose of the equipment to be positioned according to the pose of the equipment to be positioned in the reference key frame and the depth information corresponding to the reference key frame.
Optionally, obtaining depth information corresponding to the reference key frame and the pose of the device to be positioned in the reference key frame according to the image to be positioned and the depth information corresponding to the image to be positioned, including:
taking the first image to be positioned as a reference key frame, and taking the depth information of the first image to be positioned as the depth information of the reference key frame;
the pose of the equipment to be positioned in the reference key frame is a preset pose.
In some embodiments, a first image to be positioned acquired after the device to be positioned is started is selected as a reference key frame.
Optionally, obtaining depth information corresponding to the reference key frame and the pose of the device to be positioned in the reference key frame according to the image to be positioned and the depth information corresponding to the image to be positioned, including:
selecting the image to be positioned meeting the first setting condition as a local key frame, and taking the depth information of the image to be positioned meeting the first setting condition as the depth information of the local key frame;
sequentially placing the local key frames into a sliding window;
taking the pixel points meeting the second setting condition in the sliding window as first effective key points;
comparing the pixel points in each local key frame with a third set condition, and determining a reference key frame and depth information corresponding to the reference key frame according to a comparison result;
acquiring the photometric errors of the first effective key points on all local key frames according to the depth information of the reference key frame;
and obtaining the pose of the equipment to be positioned in the reference key frame according to the photometric errors of the first effective key points on all local key frames.
Optionally, the first setting condition is any one of the following conditions, and an image to be positioned satisfying any one of them is selected as a local key frame (see the sketch after this list):
the number of local key frames in the sliding window is smaller than a third set threshold Th1; or
the distance between the pose of the image to be positioned and the pose of the nearest local key frame is larger than a fourth set threshold Th2; or
the sum of the photometric errors of the second effective key points on the image to be positioned is larger than a set multiple of the photometric error of the existing reference key frame.
Therefore, proper local key frames can be selected, proper key points can be recovered, and the map model which is positioned and constructed is more accurate.
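A minimal sketch of this first setting condition, assuming the photometric errors and the pose distance have already been computed as scalars (all names illustrative):

```python
def is_local_keyframe(n_keyframes, pose_dist_to_last, frame_err, ref_err,
                      Th1, Th2, err_multiple):
    """Any one of the three criteria admits the image as a local key frame."""
    return (n_keyframes < Th1                          # sliding window not yet full
            or pose_dist_to_last > Th2                 # far from the nearest local key frame
            or frame_err > err_multiple * ref_err)     # photometric error grew too large
```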
Optionally, the second effective key point on the image to be positioned is a pixel point on the image to be positioned, where the pixel gradient is greater than the first set threshold.
Optionally, the second setting condition is: the pixel gradient is greater than a first set threshold. And taking the pixel point with the pixel gradient larger than the first set threshold value in the sliding window as a first effective key point.
Optionally, the third setting condition is: the pixel point is not a projection pixel point and its pixel gradient is greater than the first set threshold. Comparing the pixel points in each local key frame with the third setting condition and determining the reference key frame according to the comparison result comprises:
for each local key frame, projecting the first effective key points of the previous local key frame onto it to obtain projection pixel points; taking the pixel points of the projected local key frame, other than the projection pixel points, whose pixel gradient is greater than the first set threshold as reference key points; and taking the local key frame containing the reference key points as the reference key frame. Optionally, the depth information corresponding to the local key frame containing the reference key points is used as the depth information corresponding to the reference key frame.
Optionally, removing, from the sliding window, a local key frame having a number of first valid key points less than the second set threshold, and removing the first valid key points on the local key frame.
Optionally, depth information of the reference key point relative to the local key frame is obtained according to the depth map of the local key frame, and attribute information of the reference key point is updated.
Optionally, the pose of the equipment to be positioned in the reference key frame and the depth information of the first effective key points in the reference key frame are obtained according to the photometric errors of the first effective key points on all local key frames;

by calculating

$$\min_{T_j,\,d_j} \sum_{j \in F} \sum_{k \in K_n} \sum_{m \in O_k(k)} E_{mk}$$

the pose $T_j$ of the equipment to be positioned in the reference key frame and the depth information $d_j$ of the first effective key point $k$ in the reference key frame are acquired,

wherein $T_j$ is the pose of the equipment to be positioned in the reference key frame, $d_j$ is the depth information of the first effective key point $k$ in the reference key frame, $F$ is the set of reference key frames in the sliding window, $K_n$ is the set of first effective key points, $O_k(k)$ is the set of all local key frames in the sliding window in which the first effective key point $k$ can be observed, $m$ and $j$ are positive integers, and $E_{mk}$ is the photometric error of the first effective key point $k$ on the $m$-th local key frame in which $k$ can be observed.

The photometric error of the first effective key point $k$ on the $m$-th local key frame in which $k$ can be observed is obtained by calculating

$$E_{mk} = \omega_k \left\| \left(I_m[k_m] - b_m\right) - \frac{t_m\, e^{a_m}}{t_j\, e^{a_j}} \left(I_j[k'] - b_j\right) \right\|$$

wherein $\omega_k$ is the weight of the first effective key point $k$, $I_m[k_m]$ is the gray value of $k$ on the $m$-th local key frame in which $k$ can be observed, $I_j[k']$ is the gray value of $k$ in the reference key frame $I_j$, $t_m$ is the exposure time and $a_m$, $b_m$ the first and second photometric correction parameters of the $m$-th local key frame in which $k$ can be observed, $t_j$ is the exposure time and $a_j$, $b_j$ the first and second photometric correction parameters of the reference key frame, $e$ is the natural constant, and $k_m$ is the coordinate of the first effective key point $k$ in the $m$-th local key frame in which $k$ can be observed.

The coordinate of the first effective key point $k$ in the $m$-th local key frame in which $k$ can be observed is obtained by calculating

$$k_m = \Pi_c\!\left(T_m T_j^{-1}\, \Pi_c^{-1}(k')\, d_j\right)$$

wherein $T_m$ is the pose of the $m$-th local key frame in which $k$ can be observed, $\Pi_c$ is the intrinsic matrix of the depth camera, $k'$ is the coordinate of the first effective key point $k$ in the reference key frame, and $d_j$ is its depth information in the reference key frame.
In this way, the local key frames are optimized through the sliding window, the position of the equipment to be positioned in the reference key frame and the depth information of the first effective key point are obtained through the luminosity errors of the first effective key point in the sliding window on all the local key frames, and the obtained position of the equipment to be positioned in the reference key frame can be more accurate.
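The two reconstructed formulas above translate directly into code. The sketch below is a simplified illustration: it assumes grayscale images as numpy arrays, integer pixel coordinates (real systems interpolate bilinearly and apply a robust Huber norm), and 4x4 world-to-camera poses.

```python
import numpy as np

def warp_point(K, T_m, T_j, k_ref, d_j):
    """Project key point k' (pixel in reference frame j, with depth d_j) into frame m."""
    x_j = d_j * np.linalg.inv(K) @ np.array([k_ref[0], k_ref[1], 1.0])  # back-project
    x_m = (T_m @ np.linalg.inv(T_j) @ np.append(x_j, 1.0))[:3]          # move to frame m
    u = K @ (x_m / x_m[2])                                              # pinhole projection
    return int(round(u[0])), int(round(u[1]))

def photometric_error(I_m, I_j, k_m, k_ref, t_m, a_m, b_m, t_j, a_j, b_j, w_k):
    """E_mk for one key point, following the reconstructed formula above."""
    gain = (t_m * np.exp(a_m)) / (t_j * np.exp(a_j))            # exposure/photometric ratio
    r = (I_m[k_m[1], k_m[0]] - b_m) - gain * (I_j[k_ref[1], k_ref[0]] - b_j)
    return w_k * r * r
```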
Optionally, obtaining the estimated pose of the device to be positioned according to the pose of the device to be positioned in the reference key frame and the depth information corresponding to the reference key frame, including:
acquiring the photometric error of the image to be positioned according to the depth information corresponding to the reference key frame;
acquiring the relative pose of the image to be positioned with respect to the reference key frame according to the photometric error of the image to be positioned;
and obtaining the estimated pose of the equipment to be positioned according to the relative pose and the pose of the equipment to be positioned in the reference key frame. Therefore, even for environments with sparse texture, an accurate initial estimated pose can be obtained, avoiding positioning failure.
Optionally, the estimated pose $T_i$ of the equipment to be positioned is obtained by calculating $T_i = T_{ji} T_j$,
wherein $T_i$ is the estimated pose of the equipment to be positioned, $T_j$ is the pose of the equipment to be positioned in the reference key frame, and $T_{ji}$ is the relative pose of the image to be positioned with respect to the reference key frame.
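A small numeric illustration of the composition $T_i = T_{ji} T_j$ with 4x4 homogeneous poses (the values are made up; in the method, $T_{ji}$ comes from the photometric minimization described next):

```python
import numpy as np

def se3(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

th = np.pi / 6
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
T_j  = se3(np.eye(3), np.array([1.0, 0.0, 0.0]))  # pose at the reference key frame
T_ji = se3(Rz, np.array([0.1, 0.0, 0.0]))         # relative pose, reference -> current
T_i  = T_ji @ T_j                                 # estimated pose of the current image
```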
Optionally, the relative pose $T_{ji}$ is obtained by minimizing the photometric error of the image to be positioned, for example by the Gauss-Newton method; the photometric error of the image to be positioned is the sum of the photometric errors of the key points on the image to be positioned.

Optionally, the photometric error of the image to be positioned is obtained by calculating

$$E_i = \sum_{s \in S_n} e_s$$

wherein $S_n$ is the set of second effective key points on the image to be positioned, and $e_s$ is the photometric error of the second effective key point $s$ on the image to be positioned, $s$ being an integer.

Optionally, the photometric error of the second effective key point $s$ is obtained by calculating

$$e_s = \omega_s \left\| \left(I_i[P_s] - b_i\right) - \frac{t_i\, e^{a_i}}{t_j\, e^{a_j}} \left(I_j[P_{s'}] - b_j\right) \right\|$$

wherein $\omega_s$ is the weight of the second effective key point $s$, $t_i$ is the exposure time of the image to be positioned, $a_i$ and $b_i$ are the first and second photometric correction parameters of the image to be positioned, $t_j$ is the exposure time of the reference key frame, $a_j$ and $b_j$ are the first and second photometric correction parameters of the reference key frame, $i$ and $j$ are both positive integers, $e$ is the natural constant, $I_j[P_{s'}]$ is the pixel gray value of the second effective key point $s$ in the reference key frame, $P_{s'}$ is the coordinate of the second effective key point $s$ in the reference key frame, $I_i[P_s]$ is the pixel gray value of the second effective key point $s$ in the image to be positioned, and $P_s$ is the coordinate of the second effective key point $s$ in the image to be positioned, obtained by projection.

Optionally, the coordinate of the second effective key point $s$ in the image to be positioned is obtained by calculating

$$P_s = \Pi_c\!\left(T_{ji}\, \Pi_c^{-1}(P_{s'})\, d_j\right)$$

wherein $\Pi_c$ is the intrinsic matrix of the depth camera of the equipment to be positioned, $T_{ji}$ is the relative pose, $d_j$ is the depth information of the second effective key point $s$ in the reference key frame, and $P_{s'}$ is its coordinate in the reference key frame. Optionally, the initial relative pose is given by a constant velocity motion model.
In this way, the relative pose is obtained through the luminosity error of the image to be positioned, and the estimated pose of the equipment to be positioned on the image to be positioned is directly obtained through the relative pose and the pose of the reference key frame, so that the sparse texture scene is better in robustness, and more accurate estimated pose is provided for subsequent pose optimization.
Optionally, optimizing the estimated pose according to the reprojection error and the depth error of the first map point and the first feature point to obtain a first optimized pose of the device to be positioned, including:
obtaining a re-projection error and a depth error of the first map point and the first feature point according to the estimated pose;
and minimizing the reprojection error and the depth error to obtain a first optimized pose. In this way, by minimizing the re-projection error and the depth error of the first map point and the first feature point, the accuracy of obtaining the pose of the equipment to be positioned can be improved, especially the accuracy of obtaining the pose is improved for the environment with changed illumination, and the robustness of the algorithm is enhanced.
Optionally, acquiring the first map point according to the image to be localized includes: extracting second feature points from the image to be positioned through a feature extraction thread; and recovering the first map points corresponding to the second characteristic points according to the estimated pose of the image to be positioned and the depth information of the image to be positioned.
Optionally, the first feature points corresponding to the first map points on the image to be positioned are obtained through a matching algorithm. Optionally, according to the estimated pose $T_i$ of the image to be positioned, the first map points are projected into the image to be positioned to obtain projection points, and the second feature point closest to each projection point on the image to be positioned is taken as the first feature point corresponding to that first map point.
Optionally, the re-projection error between the first map point p and the first feature point of the image to be positioned is a distance error between the position of the projection point of the first map point p on the image to be positioned and the position of the first feature point:
Optionally, the reprojection error $e_r$ of the first map point $p$ and the first feature point of the image to be positioned is obtained by calculating

$$e_r = \frac{1}{\sigma_p^2} \left\| u_p - \Pi_c\!\left(T_i x_w\right) \right\|^2$$

wherein $u_p$ is the pixel coordinate of the first feature point in the image to be positioned $I_i$, $x_w$ is the three-dimensional world coordinate of the first map point $p$, $\Pi_c$ is the intrinsic matrix of the depth camera of the equipment to be positioned, $T_i$ is the estimated pose of the equipment to be positioned, and $\sigma_p^2$ is the feature variance of the first map point $p$.
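In code, the reconstructed $e_r$ is a few lines; the sketch assumes a 3x3 intrinsic matrix K, a 4x4 world-to-camera pose, and a scalar feature variance.

```python
import numpy as np

def reprojection_error(u_p, x_w, T_i, K, sigma2_p):
    """e_r: squared pixel distance between feature u_p and the projected map point."""
    x_c = (T_i @ np.append(x_w, 1.0))[:3]   # map point in the camera frame
    u_hat = (K @ (x_c / x_c[2]))[:2]        # projection onto the image
    return float(np.sum((np.asarray(u_p) - u_hat) ** 2) / sigma2_p)
```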
In some embodiments, the obtained depth map is sent to a pre-trained depth network model for optimization, and depth information is obtained.
Optionally, the depth error between the first map point $p$ and the first feature point is obtained by calculating

$$e_d = \left\| d_p - \hat d_p \right\|^2$$

wherein $d_p$ is the depth information of the first feature point in the image to be positioned $I_i$, and $\hat d_p$ is the depth information of the projection point of the first map point $p$ on the image to be positioned.

Optionally, the depth information of the projection point of the first map point $p$ on the image to be positioned is obtained by calculating

$$\hat d_p = \left(T_i x_w\right)_z$$

wherein $T_i$ is the estimated pose of the equipment to be positioned, $x_w$ is the three-dimensional world coordinate of the first map point $p$, and $(\cdot)_z$ takes the third component of the vector, i.e. the z-axis coordinate.
Optionally, the first optimized pose $T_i'$ of the equipment to be positioned is obtained by simultaneously minimizing the reprojection errors and depth errors of all first map points of the image to be positioned and their corresponding first feature points with a bundle adjustment (BA) algorithm:

$$T_i' = \arg\min_{T_i} \sum_{p \in P} w_p\, e_p^{T} e_p$$

wherein $e_p = (e_r, e_d)^T$, $P$ is the set of first map points on the image to be positioned, $p$ is a positive integer, $T$ denotes the transpose of a matrix, and $w_p$ is the error weight of the first map point $p$.
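A sketch of the joint minimization using scipy's nonlinear least-squares solver. To stay short it optimizes only the translation part of the pose (a real bundle adjustment parameterizes the full pose on SE(3)); the stacked residuals combine both error terms with the weights $w_p$.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(t, points_w, pixels, depths, K, weights):
    """Stack weighted reprojection and depth residuals over all first map points."""
    T = np.eye(4)
    T[:3, 3] = t                                    # translation-only pose update
    res = []
    for x_w, u_p, d_p, w_p in zip(points_w, pixels, depths, weights):
        x_c = (T @ np.append(x_w, 1.0))[:3]
        u_hat = (K @ (x_c / x_c[2]))[:2]
        res.extend(np.sqrt(w_p) * (np.asarray(u_p) - u_hat))  # reprojection part
        res.append(np.sqrt(w_p) * (d_p - x_c[2]))             # depth part
    return np.asarray(res)

# usage, with points_w / pixels / depths / weights prepared beforehand:
# sol = least_squares(residuals, np.zeros(3),
#                     args=(points_w, pixels, depths, K, weights))
```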
In this way, a first map point of the image to be positioned is obtained through the extracted second characteristic point of the image to be positioned, and the estimated pose of the device to be positioned is optimized according to the reprojection error and the depth error of the first map point and the first characteristic point, so that a first optimized pose of the device to be positioned is obtained. The method can improve the robustness of the scene with illumination change and large-amplitude motion between frames, and further improve the accuracy of obtaining the pose of the equipment to be positioned, so that the equipment to be positioned can be positioned more accurately.
According to the method for simultaneous localization and map construction provided by the embodiments of the disclosure, the estimated pose of the camera is first obtained directly through the reference key frame, and the estimated pose is then optimized based on the feature point extraction method, so that scenes with sparse texture and changing illumination can still be accurately localized, enhancing the robustness of the algorithm.
Optionally, the method for simultaneous localization and mapping further comprises:
constructing a navigation map according to the pose of the to-be-positioned equipment in the first to-be-positioned image and the depth information of the first to-be-positioned image;
and updating the navigation map.
Optionally, the three-dimensional point cloud information of the scene is recovered according to the pose of the equipment to be positioned in the first image to be positioned and the depth information of the first image to be positioned; the three-dimensional points on the ground are filtered out through ground plane segmentation; according to the actual height of the equipment to be positioned, the three-dimensional points the equipment can pass under are filtered out, and the three-dimensional points on moving objects are partly filtered out through the statistical probability over a preset number of consecutive images; the remaining three-dimensional points are projected onto the ground plane to obtain the grid occupancy of the navigation map; according to the grid occupancy probability of the navigation map, grids with occupancy greater than a fifth set threshold Th_occ are taken as obstacles, grids with occupancy smaller than a sixth set threshold Th_s are taken as obstacle-free, and grids with occupancy of -1 are unknown regions, thereby obtaining the navigation map.
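A minimal sketch of the projection-to-grid step, assuming the filtered three-dimensional points have already been reduced to ground-plane (x, y) coordinates; the evidence-accumulation rule here is a stand-in, since the patent does not spell out the probability update.

```python
import numpy as np

def update_grid(grid, points_xy, origin, res, Th_occ, Th_s):
    """Accumulate hits per cell, then classify: obstacle / free / unknown (-1)."""
    for x, y in points_xy:
        i = int((x - origin[0]) / res)
        j = int((y - origin[1]) / res)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            p = max(grid[i, j], 0.0)        # unknown cells (-1) start counting at 0
            grid[i, j] = min(p + 0.1, 1.0)  # crude occupancy-evidence accumulation
    obstacle = grid > Th_occ
    free = (grid >= 0.0) & (grid < Th_s)
    return obstacle, free
```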
Optionally, the pose of the first image to be positioned is a preset pose.
Optionally, the depth information of the first image to be located is obtained through its corresponding depth map. Optionally, the depth map of the first image to be positioned is put into a preset depth network model for optimization, so that the depth information of the first image to be positioned is obtained.
Optionally, updating the navigation map includes:
selecting the image to be localized meeting the fourth setting condition as a global key frame;
acquiring the pose of equipment to be positioned in a global key frame;
and updating the navigation map according to the pose of the equipment to be positioned in the global key frame and the depth information of the global key frame.
Optionally, the selected global key frames are sequentially sent to the local feature map for optimization.
Optionally, the fourth setting condition includes: the number of first map points of the image to be positioned, compared with the number of first map points in the reference key frame, satisfies a fifth setting condition, for example, the number of first map points of the image to be positioned is less than 75% of the number of first map points in the reference key frame and greater than n;
optionally, the fourth setting condition further includes any one of the following conditions, namely selecting the to-be-localized image satisfying any one of the following conditions as the global key frame:
the number of frames between the image to be positioned and the reference key frame is greater than a seventh set threshold Th_max; or
the number of first map points of the image to be positioned is smaller than an eighth set threshold Th_min, and the number of second feature points on the image to be positioned that have depth information but no corresponding first map point is greater than a ninth set threshold Th_depth; or
in the case that the local map optimization thread is able to receive a new global key frame, the number of first map points of the image to be positioned, compared with the number of first map points of the reference key frame, satisfies a sixth setting condition, for example: the number of first map points of the image to be positioned is less than 25% of the number of first map points of the reference key frame.
Therefore, a proper global key frame can be selected, and a proper first map point can be recovered, so that a map model which is positioned and constructed is more accurate.
Optionally, establishing a co-view relationship graph for the global keyframe includes: under the condition that a first map point corresponding to a first feature point exists in each global key frame, adding the global key frame into an observation key frame set of the first map point, and updating attribute information of the first map point; and according to the first map point of the global key frame, obtaining an observation key frame which has a first map point in common with the global key frame, taking all the observation key frames which have the first map point in common with the global key frame as the common view key frames of the global key frame, and establishing a common view relation diagram of the global key frame according to the common view key frames.
Optionally, according to the pose of the global key frame and the depth information of the global key frame, recovering the corresponding first map point for the second feature point without the corresponding first map point, and adding the recovered first map point as the second map point into the local feature map.
Optionally, for the second feature points that are not within the visual range of the depth camera of the equipment to be positioned, the corresponding first map points are recovered by triangulation, and the recovered first map points are added as second map points into the local feature map. Optionally, first map points satisfying the epipolar geometry constraint are acquired from the second feature points of the global key frame and its co-view key frames; the three-dimensional coordinates of the first map points satisfying the epipolar geometry constraint are recovered by triangulation; first map points satisfying a seventh setting condition are selected from them as the recovered first map points; optionally, the seventh setting condition is that the projection error in the global key frame and its co-view key frames is smaller than a tenth set threshold.
In some embodiments, the unstable second map points are deleted from the local feature map, so that the stable second map points can better represent the currently observed environmental features. Optionally, the second map point satisfying the following condition is an unstable second map point:
the observation probability of the second map point is less than an eleventh set threshold, for example 0.25, or the number of global key frames in which the second map point is observed is less than a twelfth set threshold, for example 3; and
the difference between the depths of the second map point and its surrounding 8-neighborhood points is unstable, i.e. the variance of the difference over a preset number of consecutive images is larger than a thirteenth set threshold Th_var.
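A sketch of this culling test, with the patent's example thresholds as defaults (the Th_var default is illustrative) and the depth differences to the 8-neighborhood supplied as a per-image series:

```python
import numpy as np

def is_unstable(obs_prob, n_obs_keyframes, depth_diff_series,
                Th_prob=0.25, Th_frames=3, Th_var=1.0):
    """Second map point is unstable if weakly observed AND its depth differences vary."""
    weakly_observed = obs_prob < Th_prob or n_obs_keyframes < Th_frames
    unstable_depth = np.var(depth_diff_series) > Th_var  # variance over consecutive images
    return weakly_observed and unstable_depth
```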
Optionally, acquiring the pose of the device to be located in the global key frame includes: and obtaining the pose of the equipment to be positioned in the global key frame and the three-dimensional world coordinates of the second map point in the local feature map according to the reprojection error and the depth error of the second map point in the local feature map in the global key frame. Optionally, the second map points in the local feature map further comprise all first map points of the global key frame and its co-view key frame, i.e. all first map points of the global key frame and its co-view key frame are also taken as second map points in the local feature map.
Optionally, the reprojection errors and depth errors of all second map points in the local feature map in the global key frames are minimized to obtain the pose $T_h$ of the equipment to be positioned in the global key frame and the three-dimensional world coordinates $x'_w$ of the second map points in the local feature map; optionally, the reprojection errors and depth errors of all second map points $p'$ in the local feature map in the global key frames are minimized by a bundle adjustment algorithm.

Optionally, by calculating

$$\min_{T_h,\,x'_w} \sum_{p' \in P'} \sum_{h \in O_{p'}(p')} w_{hp'}\, e_{hp'}^{T} e_{hp'}$$

the pose $T_h$ of the equipment to be positioned in the global key frame and the three-dimensional world coordinates of the second map point $p'$ are obtained,

wherein $T_h$ is the pose of the equipment to be positioned in the $h$-th global key frame, $x'_w$ is the three-dimensional world coordinate of the second map point $p'$, $O_{p'}(p')$ is the set of global key frames within the local feature map in which the point $p'$ is observed, $P'$ is the set of all second map points in the local feature map, $h$ and $p'$ are positive integers, $T$ denotes the transpose of a matrix, and $w_{hp'}$ is the error weight of the second map point $p'$; $e_{hp'} = (e_{hr}, e_{hd})^T$, where $e_{hr}$ is the reprojection error of the second map point $p'$ in the $h$-th global key frame and $e_{hd}$ is its depth error in the $h$-th global key frame.

Optionally, the depth error $e_{hd}$ of the second map point $p'$ in the $h$-th global key frame is obtained by calculating

$$e_{hd} = \left\| d_{hp'} - \left(T_h x'_w\right)_z \right\|^2$$

wherein $d_{hp'}$ is the depth information of the first feature point corresponding to the second map point $p'$ in the $h$-th global key frame, and $(\cdot)_z$ takes the third component of the vector, i.e. the z-axis coordinate.

Optionally, the reprojection error $e_{hr}$ of the second map point $p'$ in the $h$-th global key frame is obtained by calculating

$$e_{hr} = \frac{1}{\sigma_{p'}^2} \left\| u_{hp'} - \Pi_c\!\left(T_h x'_w\right) \right\|^2$$

wherein $u_{hp'}$ is the pixel coordinate of the first feature point corresponding to the second map point $p'$ in the $h$-th global key frame, $\Pi_c$ is the intrinsic matrix of the depth camera of the equipment to be positioned, and $\sigma_{p'}^2$ is the feature variance of the second map point $p'$; optionally, the feature variance of the second map point $p'$ is computed from the image pyramid level at which the second feature point was extracted and the corresponding scaling coefficient.
Optionally, deleting the redundant global key frame or the common view key frame; optionally, if more than 90% of the second map points on the global key frame or the common view key frame in the local feature map are observed by two or more global key frames or common view key frames, the global key frame or the common view key frame is considered redundant, and the global key frame or the common view key frame is deleted from the global map.
In this way, the positioning accuracy, global repositioning and the like can be improved by optimizing the global key frame through the local feature map.
In some embodiments, where the image to be localized is selected as both a local key frame and a global key frame, the frame is optimized within a sliding window and then placed into a local feature map.
In this way, the new local key frame is selected from the image to be localized and optimized in the sliding window, the new global key frame is selected and optimized in the local feature map, and the two algorithms are used for optimizing the key frames independently, so that the two algorithms are not easy to be interfered by noise, and the robustness of the algorithms is enhanced.
Optionally, updating the navigation map according to the pose of the equipment to be positioned in the global key frame and the depth information of the global key frame includes: recovering the three-dimensional point cloud information of the scene according to the pose of the equipment to be positioned in the global key frame and the depth information of the global key frame; filtering out the three-dimensional points on the ground through ground plane segmentation; according to the actual height of the equipment to be positioned, filtering out the three-dimensional points the equipment can pass under, and partly filtering out the points on moving objects through the statistical probability over a preset number of consecutive images; projecting the remaining three-dimensional points onto the ground plane and updating the grid occupancy of the navigation map; according to the grid occupancy probability of the navigation map, taking grids with occupancy greater than the fifth set threshold Th_occ as obstacles and grids with occupancy smaller than the sixth set threshold Th_s as obstacle-free, grids with occupancy of -1 being unknown regions, thereby obtaining an updated navigation map.
Optionally, the depth information of the global key frame is obtained through a corresponding depth map. Optionally, the depth map corresponding to the global key frame is put into a preset depth network model for optimization, so that the depth information of the global key frame is obtained.
Optionally, after obtaining the pose of the device to be located in the global key frame, the method further includes:
performing closed loop detection on the global key frame;
under the condition that a closed-loop key frame exists, performing closed-loop fusion processing on the closed-loop key frame, and optimizing the pose of the equipment to be positioned in the global key frame to obtain a second optimized pose of the equipment to be positioned in the global key frame;
and updating the navigation map according to the second optimized pose and the depth information of the global key frame.
Optionally, the selected global key frame is sent to a closed-loop detection thread for closed-loop detection, searching for a closed-loop key frame. Optionally, according to a Bag-of-Words (BoW) model, a key frame whose similarity with the selected global key frame is greater than a fourteenth set threshold is searched from the historical key frames as the closed-loop key frame.
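A sketch of the similarity search, assuming each key frame is summarized by a bag-of-words histogram vector (production systems typically use a vocabulary-tree library such as DBoW2):

```python
import numpy as np

def bow_similarity(v1, v2):
    """Cosine similarity of two bag-of-words vectors."""
    denom = np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12
    return float(np.dot(v1, v2) / denom)

def find_loop_candidates(current_vec, history_vecs, threshold):
    """Indices of historical key frames similar enough to be closed-loop key frames."""
    return [i for i, v in enumerate(history_vecs)
            if bow_similarity(current_vec, v) > threshold]
```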
Optionally, in the case that no closed-loop key frame is detected, updating the closed-loop detection condition, and waiting for the next global key frame;
optionally, in the case that a closed-loop key frame exists, performing closed-loop fusion processing on the closed-loop key frame includes: executing a pose graph optimization algorithm based on the Sim3 constraint on the closed-loop key frame to perform closed-loop fusion, thereby eliminating accumulated errors in the local feature map optimization process.
Optionally, optimizing the pose of the device to be located in the global key frame includes: and carrying out global optimization on the global key frames, and optimizing the pose of all the global key frames in the local feature map and the position information of the second map points, namely the three-dimensional world coordinate values of the second map points, through a global BA algorithm to obtain the second optimized pose of the equipment to be positioned in the key frames.
In some embodiments, after optimization by the global BA algorithm, the pose-optimized global keyframes are re-placed into the sliding window, i.e., the sliding window is initialized, thereby eliminating accumulated errors in the sliding window.
Optionally, updating the navigation map according to the second optimized pose and the depth information of the global key frame includes: sending the newly selected global key frame to the navigation map construction thread to update the navigation map. Optionally, the three-dimensional point cloud information of the scene is recovered according to the second optimized pose of the equipment to be positioned in the global key frame and the depth information of the global key frame; the three-dimensional points on the ground are filtered out through ground plane segmentation; according to the actual height of the equipment to be positioned, the three-dimensional points the equipment can pass under are filtered out, and the points on moving objects are partly filtered out through the statistical probability over a preset number of consecutive images; the remaining three-dimensional points are projected onto the ground plane and the grid occupancy of the navigation map is updated; according to the grid occupancy probability of the navigation map, grids with occupancy greater than the fifth set threshold Th_occ are taken as obstacles, grids with occupancy smaller than the sixth set threshold Th_s are taken as obstacle-free, and grids with occupancy of -1 are unknown regions; the grid occupancy of the navigation map is updated, thereby updating the navigation map.
In this way, the depth camera of the equipment to be positioned acquires the image depth information and the pose of the global key frame, an expandable two-dimensional grid navigation map is constructed in real time, and after global optimization is completed, the navigation map information is updated in real time, ensuring the real-time performance and accuracy of the navigation map.
As shown in connection with fig. 2, an embodiment of the present disclosure provides an apparatus for simultaneous localization and mapping, including a processor (processor) 100 and a memory (memory) 101 storing program instructions. Optionally, the apparatus may further comprise a communication interface (Communication Interface) 102 and a bus 103. The processor 100, the communication interface 102, and the memory 101 may communicate with each other via the bus 103. The communication interface 102 may be used for information transfer. The processor 100 may call program instructions in the memory 101 to perform the method for simultaneous localization and mapping of the above-described embodiments.
Further, the program instructions in the memory 101 described above may be implemented in the form of software functional units and sold or used as a separate product, and may be stored in a computer-readable storage medium.
The memory 101 is a computer readable storage medium that can be used to store a software program, a computer executable program, such as program instructions/modules corresponding to the methods in the embodiments of the present disclosure. The processor 100 executes functional applications and data processing by running program instructions/modules stored in the memory 101, i.e. implements the method for simultaneous localization and mapping in the above-described embodiments.
The memory 101 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created according to the use of the terminal device, etc. Further, the memory 101 may include a high-speed random access memory, and may also include a nonvolatile memory.
By adopting the device for simultaneous localization and map construction provided by the embodiments of the disclosure, the estimated pose of the equipment to be positioned is directly obtained through the image to be positioned, the first map points are then obtained from the image to be positioned, and the estimated pose of the equipment to be positioned is optimized according to the reprojection errors and depth errors of the first map points and the corresponding first feature points, so that the optimized pose is obtained. The obtained pose is more accurate, so that the positioning accuracy of the equipment to be positioned is improved, and a more accurate navigation map can be obtained when the navigation map is constructed. Furthermore, the depth camera of the equipment to be positioned is used for acquiring the image depth information and the pose of the global key frame, an expandable two-dimensional grid navigation map is constructed, and the navigation map information is updated in real time, ensuring the real-time performance and accuracy of the navigation map.
The embodiment of the disclosure provides equipment to be positioned, which comprises the device for simultaneously positioning and mapping.
Optionally, the device to be positioned is a robot or the like.
By adopting the equipment provided by the embodiments of the disclosure, the estimated pose of the equipment to be positioned is directly obtained through the image to be positioned, the first map points are then obtained from the image to be positioned, and the estimated pose of the equipment to be positioned is optimized according to the reprojection errors and depth errors of the first map points and the corresponding first feature points, so that the optimized pose is obtained. The obtained pose is more accurate, so that the positioning accuracy of the equipment to be positioned is improved, and a more accurate navigation map can be obtained when the navigation map is constructed. Furthermore, the depth camera of the equipment to be positioned is used for acquiring the image depth information and the pose of the global key frame, an expandable two-dimensional grid navigation map is constructed in real time, and after global optimization is completed, the navigation map information is updated in real time, ensuring the real-time performance and accuracy of the navigation map.
Embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions configured to perform the above-described method for simultaneous localization and mapping.
The disclosed embodiments provide a computer program product comprising a computer program stored on a computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the above-described method for simultaneous localization and mapping.
The computer readable storage medium may be a transitory computer readable storage medium or a non-transitory computer readable storage medium.
Embodiments of the present disclosure may be embodied in a software product stored on a storage medium, including one or more instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of a method of embodiments of the present disclosure. And the aforementioned storage medium may be a non-transitory storage medium including: a plurality of media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or a transitory storage medium.
The above description and the drawings illustrate embodiments of the disclosure sufficiently to enable those skilled in the art to practice them. Other embodiments may involve structural, logical, electrical, process, and other changes. The embodiments represent only possible variations. Individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in, or substituted for, those of others. Moreover, the terminology used in the present application is for the purpose of describing embodiments only and is not intended to limit the claims. As used in the description of the embodiments and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Similarly, the term "and/or" as used in this disclosure is meant to encompass any and all possible combinations of one or more of the associated listed items. Furthermore, when used in the present disclosure, the terms "comprises," "comprising," and/or variations thereof mean that the recited features, integers, steps, operations, elements, and/or components are present, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Without further limitation, an element defined by the phrase "comprising one …" does not exclude the presence of other like elements in a process, method or apparatus comprising that element. In this context, each embodiment may be described with emphasis on its differences from the other embodiments, and the same or similar parts of the various embodiments may be referred to each other. For the methods, products, etc. disclosed in the embodiments, if they correspond to the method sections disclosed in the embodiments, the description of the method sections may be referred to for relevant details.
Those of skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. The skilled artisan may use different methods for each particular application to achieve the described functionality, but such implementation should not be considered to be beyond the scope of the embodiments of the present disclosure. It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the embodiments disclosed herein, the disclosed methods, articles of manufacture (including but not limited to devices, apparatuses, etc.) may be practiced in other ways. For example, the apparatus embodiments described above are merely illustrative, and for example, the division of the units may be merely a logical function division, and there may be additional divisions when actually implemented, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. In addition, the coupling or direct coupling or communication connection shown or discussed with each other may be through some interface, device or unit indirect coupling or communication connection, which may be in electrical, mechanical or other form. The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to implement the present embodiment. In addition, each functional unit in the embodiments of the present disclosure may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, segment, or portion of code that comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in a block may occur out of the order noted in the figures: two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. Likewise, in the description corresponding to the flowcharts and block diagrams, operations or steps corresponding to different blocks may occur in orders other than those disclosed, and sometimes no fixed order exists between different operations or steps; for example, two consecutive operations or steps may actually be performed substantially in parallel, or sometimes in reverse order, depending on the functions involved. Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks therein, can be implemented by special-purpose hardware-based systems that perform the specified functions or acts, or by combinations of special-purpose hardware and computer instructions.

Claims (7)

1. A method for simultaneous localization and mapping, comprising:
acquiring an image to be localized and depth information corresponding to the image to be localized;
acquiring an estimated pose of a device to be localized according to the image to be localized and its corresponding depth information;
acquiring a first map point according to the image to be localized;
acquiring a first feature point corresponding to the first map point on the image to be localized;
optimizing the estimated pose according to the reprojection error and the depth error of the first map point and the first feature point, to obtain a first optimized pose;
wherein acquiring the estimated pose of the device to be localized according to the image to be localized and its corresponding depth information comprises: acquiring depth information corresponding to a reference key frame and the pose of the device to be localized in the reference key frame according to the image to be localized and its corresponding depth information; and obtaining the estimated pose of the device to be localized according to the pose of the device to be localized in the reference key frame and the depth information corresponding to the reference key frame;
and wherein acquiring the depth information corresponding to the reference key frame and the pose of the device to be localized in the reference key frame according to the image to be localized and its corresponding depth information comprises: selecting an image to be localized that meets a first setting condition as a local key frame, and taking the depth information of that image as the depth information of the local key frame; sequentially placing the local key frames into a sliding window; taking the pixel points in the sliding window that meet a second setting condition as first effective key points; comparing the pixel points in each local key frame against a third setting condition, and determining the reference key frame and its corresponding depth information according to the comparison result; acquiring photometric errors of the first effective key points on all local key frames according to the depth information of the reference key frame; and obtaining the pose of the device to be localized in the reference key frame according to the photometric errors of the first effective key points on all local key frames.
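To make the sliding-window photometric error of claim 1 concrete, a minimal Python/NumPy sketch follows. It assumes a pinhole intrinsic matrix K and a 4x4 homogeneous transform T_tgt_ref taking points from the reference key frame to a target local key frame; all names are illustrative and do not come from the patent.

    import numpy as np

    def back_project(K, pixel, depth):
        # Lift a pixel with known depth to a 3-D point in the camera frame.
        uv1 = np.array([pixel[0], pixel[1], 1.0])
        return depth * (np.linalg.inv(K) @ uv1)

    def project(K, p_cam):
        # Pinhole projection of a camera-frame 3-D point to pixel coordinates.
        uvw = K @ (p_cam / p_cam[2])
        return uvw[0], uvw[1]

    def photometric_error(I_ref, I_tgt, K, T_tgt_ref, pixel, depth):
        # Intensity residual of one effective key point between the reference
        # key frame and one local key frame of the sliding window.
        p_ref = back_project(K, pixel, depth)
        p_tgt = T_tgt_ref[:3, :3] @ p_ref + T_tgt_ref[:3, 3]
        u, v = project(K, p_tgt)
        iu, iv = int(round(u)), int(round(v))
        if not (0 <= iv < I_tgt.shape[0] and 0 <= iu < I_tgt.shape[1]):
            return None  # the point projects outside the target image
        return float(I_tgt[iv, iu]) - float(I_ref[int(pixel[1]), int(pixel[0])])

Summing such residuals for the first effective key points over all local key frames gives the objective whose minimizer is the pose of the device in the reference key frame.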
2. The method of claim 1, wherein obtaining the estimated pose of the device to be localized according to the pose of the device to be localized in the reference key frame and the depth information corresponding to the reference key frame comprises:
acquiring the photometric error of the image to be localized according to the depth information corresponding to the reference key frame;
acquiring, according to the photometric error of the image to be localized, the relative pose of the device to be localized in the image to be localized with respect to its pose in the reference key frame;
and obtaining the estimated pose of the device to be localized according to the relative pose and the pose of the device to be localized in the reference key frame.
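The last step of claim 2 is ordinary composition of rigid-body transforms. A one-function sketch, assuming 4x4 homogeneous matrices and a world-to-camera convention (both assumptions introduced here):

    import numpy as np

    def compose_estimated_pose(T_ref_w, T_cur_ref):
        # T_ref_w: pose of the reference key frame (world -> reference camera).
        # T_cur_ref: relative pose recovered from the photometric error
        #            (reference camera -> current camera).
        # Returns the estimated pose of the device (world -> current camera).
        return T_cur_ref @ T_ref_w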
3. The method of claim 1, wherein optimizing the estimated pose according to the reprojection error and the depth error of the first map point and the first feature point to obtain the first optimized pose comprises:
acquiring the reprojection error and the depth error of the first map point and the first feature point according to the estimated pose;
and minimizing the reprojection error and the depth error to obtain the first optimized pose.
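Claim 3's joint minimization can be illustrated with a small nonlinear least-squares sketch: the pose is parameterized as an axis-angle rotation plus a translation, and the stacked reprojection and depth residuals are handed to scipy.optimize.least_squares. The symbols pts_w, obs_uv, obs_depth, and the weight w_depth are assumptions of this sketch, not notation from the patent.

    import numpy as np
    from scipy.optimize import least_squares

    def so3_exp(w):
        # Rodrigues' formula: axis-angle vector -> rotation matrix.
        theta = np.linalg.norm(w)
        if theta < 1e-12:
            return np.eye(3)
        k = w / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

    def residuals(x, K_cam, pts_w, obs_uv, obs_depth, w_depth=1.0):
        # x = [axis-angle rotation, translation]; pts_w: Nx3 map points;
        # obs_uv: Nx2 observed feature points; obs_depth: N measured depths.
        R, t = so3_exp(x[:3]), x[3:]
        p_cam = (R @ pts_w.T).T + t                     # map points in camera frame
        uv = (K_cam @ (p_cam / p_cam[:, 2:3]).T).T[:, :2]
        r_proj = (uv - obs_uv).ravel()                  # reprojection error
        r_depth = w_depth * (p_cam[:, 2] - obs_depth)   # depth error
        return np.concatenate([r_proj, r_depth])

    # Starting from the estimated pose x0, the first optimized pose is, e.g.,
    # least_squares(residuals, x0, args=(K_cam, pts_w, obs_uv, obs_depth)).x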
4. The method according to any one of claims 1 to 3, further comprising:
constructing a navigation map according to the pose of the device to be localized in a first image to be localized and the depth information of the first image to be localized;
and updating the navigation map.
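A common way to realize the map construction of claim 4 is to back-project the key frame's depth image through its pose into a world-frame point cloud. A sketch under assumed pinhole intrinsics K and a world-to-camera pose T_cw (both names introduced here):

    import numpy as np

    def depth_to_world_points(depth, K, T_cw, stride=4):
        # Back-project a depth image into world-frame map points.
        h, w = depth.shape
        T_wc = np.linalg.inv(T_cw)                  # camera -> world
        fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
        pts = []
        for v in range(0, h, stride):
            for u in range(0, w, stride):
                z = depth[v, u]
                if z <= 0:                          # skip invalid depth readings
                    continue
                p_cam = np.array([(u - cx) * z / fx, (v - cy) * z / fy, z, 1.0])
                pts.append((T_wc @ p_cam)[:3])
        return np.asarray(pts)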
5. The method of claim 4, wherein updating the navigation map comprises:
selecting an image to be localized that meets a fourth setting condition as a global key frame;
acquiring the pose of the device to be localized in the global key frame;
and updating the navigation map according to the pose of the device to be localized in the global key frame and the depth information of the global key frame.
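The claims deliberately leave the "setting conditions" open. One typical instance of claim 5's fourth setting condition is a motion threshold against the last global key frame; the thresholds below are illustrative assumptions, not values from the patent:

    import numpy as np

    def is_new_global_keyframe(T_cur, T_last_kf, min_trans=0.2, min_rot_deg=10.0):
        # Promote the image to a global key frame when the device has moved
        # or rotated enough since the last global key frame.
        T_rel = np.linalg.inv(T_last_kf) @ T_cur
        trans = np.linalg.norm(T_rel[:3, 3])
        cos_a = (np.trace(T_rel[:3, :3]) - 1.0) / 2.0
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        return trans > min_trans or angle > min_rot_deg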
6. The method of claim 5, wherein, after acquiring the pose of the device to be localized in the global key frame, the method further comprises:
performing closed-loop detection on the global key frame;
when a closed-loop key frame exists, performing closed-loop fusion on the closed-loop key frame and optimizing the pose of the device to be localized in the global key frame, to obtain a second optimized pose of the device to be localized in the global key frame;
and updating the navigation map according to the second optimized pose and the depth information of the global key frame.
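Claim 6's closed-loop fusion is sketched below in deliberately naive form: the drift implied by the loop constraint is spread over the intervening global key-frame poses. A production system would solve a pose graph instead; every name here is an assumption of the sketch.

    import numpy as np

    def correct_global_poses(poses, loop_idx, T_loop_measured):
        # poses: list of 4x4 camera->world poses of global key frames.
        # T_loop_measured: camera->world pose of key frame loop_idx implied
        # by the closed-loop key frame match.
        T_drift = T_loop_measured @ np.linalg.inv(poses[loop_idx])
        dt = T_drift[:3, 3]                        # translational drift
        n = loop_idx + 1
        corrected = []
        for i, T in enumerate(poses[:n]):
            Tc = T.copy()
            Tc[:3, 3] += dt * (i / max(n - 1, 1))  # linearly distributed fix
            corrected.append(Tc)
        return corrected + poses[n:]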
7. An apparatus for simultaneous localization and mapping, comprising a processor and a memory storing program instructions, wherein the processor is configured, when executing the program instructions, to perform the method for simultaneous localization and mapping of any one of claims 1 to 6.
CN202010396437.7A 2020-05-12 2020-05-12 Method and device for simultaneous localization and mapping Active CN111583331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010396437.7A CN111583331B (en) 2020-05-12 2020-05-12 Method and device for simultaneous localization and mapping

Publications (2)

Publication Number Publication Date
CN111583331A (en) 2020-08-25
CN111583331B (en) 2023-09-01

Family

ID=72122935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010396437.7A Active CN111583331B (en) 2020-05-12 2020-05-12 Method and device for simultaneous localization and mapping

Country Status (1)

Country Link
CN (1) CN111583331B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381828B (en) * 2020-11-09 2024-06-07 Oppo广东移动通信有限公司 Positioning method, device, medium and equipment based on semantic and depth information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018129715A1 (en) * 2017-01-13 2018-07-19 浙江大学 Simultaneous positioning and dense three-dimensional reconstruction method
CN110657803B (en) * 2018-06-28 2021-10-29 深圳市优必选科技有限公司 Robot positioning method, device and storage device

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107369183A (en) * 2017-07-17 2017-11-21 广东工业大学 Mobile augmented reality tracking registration method and system based on graph-optimization SLAM
CN108986037A (en) * 2018-05-25 2018-12-11 重庆大学 Monocular visual odometry localization method and positioning system based on the semi-direct method
CN108776976A (en) * 2018-06-07 2018-11-09 驭势科技(北京)有限公司 Method, system, and storage medium for simultaneous localization and mapping
CN109544636A (en) * 2018-10-10 2019-03-29 广州大学 Fast monocular visual odometry navigation and localization method fusing the feature-point method and the direct method
CN110070615A (en) * 2019-04-12 2019-07-30 北京理工大学 Panoramic vision SLAM method based on multi-camera collaboration
CN110866496A (en) * 2019-11-14 2020-03-06 合肥工业大学 Robot positioning and mapping method and device based on depth image
CN111080659A (en) * 2019-12-19 2020-04-28 哈尔滨工业大学 Environmental semantic perception method based on visual information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
谷晓琳 et al. "An RGB-D SLAM Algorithm Based on Semi-Direct Visual Odometry." 《机器人》 (Robot), 2019, pp. 1-10. *

Also Published As

Publication number Publication date
CN111583331A (en) 2020-08-25

Similar Documents

Publication Publication Date Title
CN112304307B (en) Positioning method and device based on multi-sensor fusion and storage medium
US20220028163A1 (en) Computer Vision Systems and Methods for Detecting and Modeling Features of Structures in Images
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
WO2021035669A1 (en) Pose prediction method, map construction method, movable platform, and storage medium
CN108010081B (en) RGB-D visual odometer method based on Census transformation and local graph optimization
US8913055B2 (en) Online environment mapping
CN108682027A (en) VSLAM realization method and systems based on point, line Fusion Features
WO2015135323A1 (en) Camera tracking method and device
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
WO2016210227A1 (en) Aligning 3d point clouds using loop closures
CN111209770A (en) Lane line identification method and device
CN112785705B (en) Pose acquisition method and device and mobile equipment
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
CN108776989B (en) Low-texture planar scene reconstruction method based on sparse SLAM framework
CN110599545B (en) Feature-based dense map construction system
Alidoost et al. An image-based technique for 3D building reconstruction using multi-view UAV images
CN110570474B (en) Pose estimation method and system of depth camera
CN112802096A (en) Device and method for realizing real-time positioning and mapping
CN111998862A (en) Dense binocular SLAM method based on BNN
Alcantarilla et al. Large-scale dense 3D reconstruction from stereo imagery
CN111583331B (en) Method and device for simultaneous localization and mapping
CN117367404A (en) Visual positioning mapping method and system based on SLAM (sequential localization and mapping) in dynamic scene
CN115239776A (en) Point cloud registration method, device, equipment and medium
Yang et al. Road detection by RANSAC on randomly sampled patches with slanted plane prior
Luo et al. Comparison of an L1-regression-based and a RANSAC-based planar segmentation procedure for urban terrain data with many outliers

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant