CN112631271A - Map generation method based on robot perception fusion - Google Patents


Info

Publication number
CN112631271A
Authority
CN
China
Prior art keywords
robot
ultrasonic
frame
pose
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011075455.1A
Other languages
Chinese (zh)
Inventor
芦嘉毅
徐李雪
胡秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Linghua Microelectronics Technology Co ltd
Original Assignee
Nanjing Linghua Microelectronics Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Linghua Microelectronics Technology Co ltd filed Critical Nanjing Linghua Microelectronics Technology Co ltd
Priority to CN202011075455.1A
Publication of CN112631271A
Legal status: Pending

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0257Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Theoretical Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a map generation method based on robot perception fusion, relates to the technical field of robot control, and aims to construct a closed-loop environment map through the robot's own perception. The method comprises the following steps: using the ultrasonic data and image data acquired by an ultrasonic radar and a visible-light camera to generate an ultrasonic difference value and a reprojection difference value, which together form an adjacent-frame joint difference function; performing closed-loop detection with features of the image data, and calculating the robot pose from the detected closed-loop frame and the current frame; adding the robot pose as a vertex and the adjacent-frame joint difference function as an edge to a graph optimization framework to obtain an optimized robot pose; and obtaining a closed-loop grid map from the optimized robot pose and the ultrasonic data. The invention provides an ultrasonic-visual combined SLAM framework that overcomes the defects of existing positioning and mapping technology and, while ensuring positioning accuracy, solves the problems that a grid map is difficult to close into a loop and that closed-loop detection based on a single model has difficulty distinguishing similar scenes.

Description

Map generation method based on robot perception fusion
Technical Field
The invention relates to the technical field of robot control, in particular to a map generation method based on robot perception fusion.
Background
With the development of science and technology, robots have come into wide use all over the world. Research on key robot technologies has become a hot topic for science, technology, and industry in many countries, and the development of robotics has become an important index for measuring a country's level of science, technology, and industrialization.
From a global perspective, artificial intelligence is among the most forward-looking and feasible technologies of the twenty-first century, and robotics is one of its main directions, with great development potential and market prospects and the ability to change the way people live. The indoor service robot is the robot closest to everyday life; its positioning technology is a hotspot of scientific research and application, and its research and application value is widely recognized.
Indoor scenes are where service robots are used most, and indoor service robots have already entered the public's field of vision: sweeping robots widely used in homes, greeting robots deployed in banks, hotels, and other public places, warehouse robots for logistics, and security robots for patrol and inspection. Under today's intense competitive pressure, indoor service robots can save valuable resources such as manpower, materials, and time, and can replace people in some simple tasks. As technology develops, simple repetitive labor is gradually being taken over by robots, and the demand for indoor robots will continue to grow.
To further expand the working range of indoor robots, ever higher positioning accuracy is required. The positioning problem is inseparable from the mapping problem: position and map are closely related and interdependent. For an indoor robot to construct a map, it must know its own position. Because items such as tables, chairs, boxes, and cabinets are often placed indoors, the environment is complex; an engineering blueprint does not fully match reality, so the robot cannot directly use a given man-made map.
Disclosure of Invention
The invention provides a map generation method based on robot perception fusion, which can construct a closed-loop environment map through the self-perception of a robot.
In order to achieve the purpose, the invention adopts the following technical scheme:
a map generation method based on robot perception fusion comprises the following steps:
s1, collecting monitoring results of the ultrasonic radar and the visible light camera to obtain ultrasonic data and image data;
s2, constructing a key frame of the ultrasonic data, and obtaining an ultrasonic difference value according to adjacent frames;
S3, carrying out ORB (Oriented FAST and Rotated BRIEF) feature extraction on the image data, and obtaining a reprojection difference value according to the extracted ORB features;
s4, converting the ORB characteristics into words, carrying out closed-loop detection by using the words, and calculating by using the detected closed-loop frame and the current frame to obtain the pose of the robot;
s5, obtaining a combined difference function of adjacent frames by utilizing the ultrasonic difference and the reprojection difference;
s6, adding the robot pose as a vertex and the adjacent frame combined difference function as an edge into a graph optimization frame to obtain an optimized robot pose;
and S7, obtaining a closed-loop grid map by using the optimized pose of the robot and the ultrasonic data.
Further, in S2, the keyframe construction rule of the ultrasound data is:
the current-time key frame has undergone translation and rotation that satisfy a preset amount compared to the previous-time key frame, and the current-time key frame includes more than 48 visual feature points.
Further, the synchronization rule of the ultrasound data and the image data is as follows:
and taking the data of which the difference between the sampling time stamps of the ultrasonic radar and the visible light camera is less than a specific threshold value as a group of data at the same moment.
Further, in S3, the reprojection difference is:
e_v = P_v2 − P̂_v2 = P_v2 − (RP_v1 + t)

wherein P_v2 represents the three-dimensional feature point of the current frame, P_v1 represents the matched three-dimensional feature point in the adjacent frame, R is the pose rotation matrix of the visible-light camera, and t is its pose translation vector.
Further, the ultrasonic difference is:
e_1 = f(x) = P_12 − P'_12 = P_12 − (RP_11 + t)

wherein P_12 is the ultrasonic point coordinate of the current frame, P_11 is the ultrasonic point coordinate of the adjacent frame at the previous moment, P'_12 is the reprojected point corresponding to P_12, f(x) is the ultrasonic difference function, R is the pose rotation matrix of the visible-light camera, and t is its pose translation vector.
Further, the adjacent frame joint difference function is:
F(ξ) = α·Σ_{i=1..m} ‖e_v,i‖² + (1 − α)·Σ_{j=1..n} ‖e_1,j‖²

RP + t = exp(ξ^)P

wherein m represents the number of ORB feature points matched between the current frame and the adjacent frame, n represents the number of ultrasonic points matched between the current frame and the adjacent frame, α represents the weight, e_v,i is the reprojection difference at the i-th matched feature point P_i, e_1,j is the ultrasonic difference at the j-th matched ultrasonic point P_j, ξ is the Lie-algebra representation of the camera pose, P is a three-dimensional space point, exp(ξ^) is the exponential map of ξ^, and ^ lifts the Lie-algebra vector to its matrix form.
The invention has the beneficial effects that:
the invention provides an ultrasonic and visual combined SLAM framework aiming at the respective advantages of ultrasonic SLAM (simultaneous localization and mapping) and visual SLAM, overcomes the defects of the existing positioning and mapping technology, ensures the positioning precision and solves the problems that a grid map is difficult to identify a closed loop and a closed loop detection technology based on a single model is difficult to distinguish similar scenes.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a schematic flow diagram of ultrasound visual fusion;
FIG. 2 is a schematic diagram of a single vision SLAM framework;
FIG. 3 is a schematic diagram of joint optimization;
FIG. 4 is the joint optimization results.
Detailed Description
In order that those skilled in the art will better understand the technical solutions of the present invention, the present invention will be further described in detail with reference to the following detailed description.
The embodiment of the invention provides a map generation method based on robot perception fusion, which comprises the following steps:
and S1, acquiring monitoring results of the ultrasonic radar and the visible light camera to obtain ultrasonic data and image data.
And S2, constructing a key frame of the ultrasonic data, and obtaining an ultrasonic difference value according to the adjacent frames.
Compared with visual SLAM, key-frame construction for ultrasonic SLAM is simpler: only the translation and rotation of the robot are considered, and key-frame construction is largely an engineering matter. The key-frame screening strategy adopted in this embodiment therefore considers translation, rotation, and the number of visual feature points, with the following judgment conditions: (1) enough translation has occurred since the last key frame was constructed; (2) enough rotation has occurred since the last key frame was constructed; (3) the frame contains more than 48 visual feature points. The translation and rotation thresholds are set according to actual requirements.
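A minimal sketch of this screening strategy is given below. Only the 48-feature-point count comes from the text; the translation and rotation thresholds are placeholders to be tuned, and whether translation and rotation must both exceed their thresholds or either one suffices is not fully specified, so the sketch accepts either.

```python
import numpy as np

def is_keyframe(pose_prev, pose_curr, num_features,
                min_trans=0.3, min_rot=np.deg2rad(15), min_features=48):
    """pose_prev, pose_curr: (x, y, theta) of the last keyframe and the current frame.

    Returns True when enough translation or rotation has occurred since the
    last keyframe and the current frame carries more than min_features
    visual feature points.
    """
    dx = pose_curr[0] - pose_prev[0]
    dy = pose_curr[1] - pose_prev[1]
    # Wrap the heading difference into [-pi, pi] before thresholding.
    dtheta = (pose_curr[2] - pose_prev[2] + np.pi) % (2 * np.pi) - np.pi
    moved_enough = np.hypot(dx, dy) > min_trans
    turned_enough = abs(dtheta) > min_rot
    return (moved_enough or turned_enough) and num_features > min_features
```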
S3, ORB (Oriented FAST and Rotated BRIEF) feature extraction is carried out on the image data, and a reprojection difference value is obtained according to the extracted ORB features.
And S4, converting the ORB characteristics into words, carrying out closed-loop detection by using the words, and calculating by using the detected closed-loop frame and the current frame to obtain the pose of the robot.
A bag-of-words (lexicon) model describes an image in terms of words. When a person looks at a picture, they perceive a person, a car, a tree, and so on; the bag-of-words model describes images in a similar way, treating features as words so that an image becomes a combination of many words, and the similarity of two images is measured by checking whether they contain enough of the same words. In this embodiment the image features are ORB features, so building an ORB-based bag-of-words model requires deciding first which extracted ORB features count as the same word. Many words together form a dictionary, and the first step in using a bag-of-words model is to train this dictionary.
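The sketch below illustrates the bag-of-words idea under the assumption that a dictionary of word centroids has already been trained (for example by clustering ORB descriptors); production systems typically use a vocabulary tree with TF-IDF weighting and Hamming distance on the binary descriptors, which this simplified Euclidean version does not attempt to reproduce.

```python
import numpy as np

def descriptors_to_bow(descriptors, dictionary):
    """Quantize each descriptor to its nearest dictionary word and return a
    normalized word histogram (descriptors and dictionary as float arrays;
    real ORB descriptors are binary and would use Hamming distance)."""
    hist = np.zeros(len(dictionary))
    for d in descriptors:
        word = int(np.argmin(np.linalg.norm(dictionary - d, axis=1)))
        hist[word] += 1
    return hist / max(hist.sum(), 1)

def bow_similarity(hist_a, hist_b):
    """Cosine similarity between two word histograms; a high score between the
    current frame and an old keyframe marks a loop-closure candidate."""
    denom = np.linalg.norm(hist_a) * np.linalg.norm(hist_b)
    return float(hist_a @ hist_b / denom) if denom > 0 else 0.0
```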
The closed-loop detection based on multi-information fusion in this embodiment works as follows. Closed-loop detection has always been an important part of SLAM: recognizing a previously visited scene effectively controls the size of the map, reduces the accumulated error, and prevents problems such as map breakage and non-closure.
When the grid map built by ultrasonic SLAM fails to close, the map no longer matches the real environment and the positioning becomes inaccurate. With the low-cost ultrasonic radars that are popular today, the constructed map often breaks or fails to close. This method therefore adds the visual-SLAM recognition of previously visited scenes: once a closed loop is recognized, a loop-closure constraint is added to the optimization framework, the sequence of robot poses is globally optimized, and the grid map is rebuilt from the optimized robot poses and the corresponding ultrasonic data to achieve map closure.
And S5, obtaining a joint difference function of adjacent frames by using the ultrasonic difference and the reprojection difference.
And S6, adding a graph optimization frame by taking the robot pose as a vertex and taking a joint difference function of adjacent frames as an edge, and performing global optimization on a series of poses of the robot to obtain the optimized robot pose.
And S7, obtaining an updated closed-loop grid map by using the optimized robot poses and the ultrasonic data, where the map is updated by overwriting the old data with the new data.
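The following sketch shows one way the closed-loop grid map could be rebuilt from the optimized poses and their ultrasonic scans, with new data simply overwriting old cells as described above; the grid size, resolution, and cell coding are illustrative assumptions.

```python
import numpy as np

def rebuild_grid_map(optimized_poses, ultrasonic_scans, size=400, resolution=0.05):
    """optimized_poses: list of (x, y, theta); ultrasonic_scans: one Nx2 array of
    obstacle points (robot frame, metres) per pose. Cells: -1 unknown, 1 occupied."""
    grid = np.full((size, size), -1, dtype=np.int8)
    origin = size // 2                       # place the map origin at the grid centre
    for (x, y, theta), scan in zip(optimized_poses, ultrasonic_scans):
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        world = scan @ R.T + np.array([x, y])          # robot frame -> world frame
        cells = np.floor(world / resolution).astype(int) + origin
        for cx, cy in cells:
            if 0 <= cx < size and 0 <= cy < size:
                grid[cy, cx] = 1                       # new data overwrites old data
    return grid
```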
The feature-point-based visual SLAM framework mainly comprises a front end, a back end, and a closed-loop detection part. The front end performs feature extraction and matching and adjacent-frame motion estimation, and is also called visual odometry. The back end mainly optimizes the results computed by the front end. Closed-loop detection checks whether the camera has returned to a place it has visited before; when a closed loop is recognized, global optimization is executed to reduce the accumulated error. A visual SLAM system without a closed-loop detection module drifts more and more over time. The whole framework is shown in FIG. 2.
The joint optimization is mainly to add ultrasonic information and visual information to the same frame together, and the robot pose which minimizes the difference function is obtained through a least square method. The core of the least square method is the construction of a joint difference function for adjacent frames. The joint optimization considers both visual observations and ultrasound observations, and a schematic diagram of the joint optimization is shown in fig. 3.
In FIG. 3, X represents the robot pose, O represents a visual feature point, and P represents ultrasonic point data. The upper dashed lines of varying length represent the observation relationship between the robot and the visual feature points, the lower dashed lines represent the observation relationship between the robot and the ultrasonic data, and the solid lines represent the relative poses of adjacent frames.
Compared with ultrasonic data alone, the greatest advantage of visual feature points is that an accurate point-to-point matching result can be obtained. On one hand, owing to the characteristics of the ultrasonic sensor, the correspondence between observations in adjacent frames can only be guessed; it cannot be judged with certainty that two points are exactly the same point in the actual environment. Low-cost ultrasonic radars have few observation points, short range, and low frequency, so only part of the ultrasonic data in adjacent frames corresponds to the same physical points, and partial mismatches often occur when matching adjacent ultrasonic frames. Although odometry prediction prevents gross deviations, a certain error is still produced and accumulates continuously. On the other hand, ultrasound suffers from the long-corridor problem: when every frame of ultrasonic data looks very similar, the ultrasonic data alone cannot tell whether the robot has moved, and positioning deviations occur. Visual information can effectively solve these problems; adding visual constraints to the back-end optimization framework gives the system more complete observations and effectively ensures positioning accuracy.
The traditional visual adjacent-frame difference function measures how far the pixel obtained by re-projecting a feature point into the image falls from the pixel it was originally matched to.
When the ultrasonic difference is combined with it, however, the units must be unified: the ultrasonic difference function is a distance between actual observation points, so the visual part must convert two-dimensional pixel points into three-dimensional points and express the error as the misalignment of actual three-dimensional points. The visual difference function in the joint optimization is:
e_v = P_v2 − P̂_v2 = P_v2 − (RP_v1 + t)

wherein P_v2 represents the three-dimensional feature point of the current frame, P_v1 represents the matched three-dimensional feature point in the adjacent frame, R is the pose rotation matrix of the visible-light camera, and t is its pose translation vector. Since the robot moves only on a two-dimensional plane and the camera does not move in the vertical direction, the Z-axis component of the three-dimensional points in the robot coordinate system is ignored when the distance between two points is actually computed.
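A small sketch of this visual residual, written for a single matched pair of 3D points; the function and argument names are illustrative.

```python
import numpy as np

def visual_residual(P_v2, P_v1, R, t):
    """e_v = P_v2 - (R @ P_v1 + t) for one matched 3D feature point pair.

    P_v2, P_v1: 3-vectors (metres) in the current and adjacent frame;
    R: 3x3 camera rotation matrix, t: 3-vector camera translation.
    """
    e = np.asarray(P_v2, dtype=float) - (R @ np.asarray(P_v1, dtype=float) + t)
    e[2] = 0.0   # the robot moves on a plane, so the Z component is ignored
    return e
```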
Compared with the visual reprojection difference, the ultrasonic difference is simpler to obtain. Ultrasound estimates the pose transformation between adjacent frames by scan matching; however, the estimate (R, t) cannot guarantee that the valid ultrasonic data of the previous frame coincide exactly with the valid ultrasonic points of the next frame after the transformation. The ultrasonic difference is therefore defined as follows:

e_1 = f(x) = P_12 − P'_12 = P_12 − (RP_11 + t)

wherein P_12 is the ultrasonic point coordinate of the current frame and P_11 is the ultrasonic point coordinate of the adjacent frame at the previous moment; by default, the nearest neighbouring point in the target obstacle point set is taken to be the same ultrasonic point. For convenience of actual calculation, the rotation matrix and translation matrix are often written in Lie-algebra form, with the transformation formula:

RP + t = exp(ξ^)P
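In the same spirit, the sketch below computes ultrasonic residuals by transforming the previous scan with the current estimate (R, t) and pairing each transformed point with its nearest neighbour in the current scan, which is treated as the same physical point. The KD-tree, the SciPy dependency, and the 0.2 m gating threshold are implementation assumptions, not prescribed by the method.

```python
import numpy as np
from scipy.spatial import cKDTree

def ultrasonic_residuals(scan_prev, scan_curr, R, t, max_dist=0.2):
    """scan_prev, scan_curr: Nx2 arrays of ultrasonic points in the robot frame;
    R: 2x2 rotation, t: 2-vector translation of the adjacent-frame estimate."""
    predicted = scan_prev @ R.T + t            # P'_12 = R * P_11 + t
    tree = cKDTree(scan_curr)
    dists, idx = tree.query(predicted)         # nearest current point for each prediction
    mask = dists < max_dist                    # discard obvious mismatches
    return scan_curr[idx[mask]] - predicted[mask]   # e_1 = P_12 - P'_12
```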
the ultrasound difference function and the visual difference function are added together to construct a neighboring frame joint difference function as follows:
F(ξ) = α·Σ_{i=1..m} ‖e_v,i‖² + (1 − α)·Σ_{j=1..n} ‖e_1,j‖²

where m represents the number of ORB feature points matched between the adjacent frames, n represents the number of ultrasonic points matched between the adjacent frames, and α is a manually chosen weight; generally speaking, the ultrasonic data are more accurate, so α is often less than 0.45. Only part of the feature points and ultrasonic points are valid in the actual calculation. e_v,i is the visual difference at the i-th matched feature point P_i, e_1,j is the ultrasonic difference at the j-th matched ultrasonic point P_j, ξ is the Lie-algebra representation of the camera pose, P is a three-dimensional space point, exp(ξ^) is the exponential map of ξ^, and ^ lifts the Lie-algebra vector to its matrix form.
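Putting the two terms together, the sketch below evaluates the joint adjacent-frame cost for a candidate planar pose (x, y, θ); the default α = 0.4 follows the remark that α is usually below 0.45, and the explicit (x, y, θ) parameterization is used here instead of a Lie-algebra vector purely for readability of this sketch.

```python
import numpy as np

def joint_cost(pose, visual_pairs, scan_prev, scan_curr, alpha=0.4):
    """pose: (x, y, theta); visual_pairs: list of matched 3D point pairs
    (P_prev, P_curr); scan_prev, scan_curr: Nx2 ultrasonic point arrays."""
    x, y, theta = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    t = np.array([x, y])
    # Visual term: matched 3D feature points projected onto the ground plane.
    e_v = sum(float(np.sum((np.asarray(q)[:2] - (R @ np.asarray(p)[:2] + t)) ** 2))
              for p, q in visual_pairs)
    # Ultrasonic term: each transformed previous-scan point paired with its
    # nearest neighbour in the current scan.
    pred = scan_prev @ R.T + t
    d2 = np.sum((scan_curr[:, None, :] - pred[None, :, :]) ** 2, axis=2)
    e_1 = float(np.sum(np.min(d2, axis=0)))
    return alpha * e_v + (1.0 - alpha) * e_1
```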
When optimization is not considered, the robot obtains its current pose incrementally by continually estimating the pose transformation between adjacent frames. The robot pose mainly comprises three quantities (x, y, θ), where x is the horizontal coordinate, y is the vertical coordinate, and θ is the heading angle. The robot pose can also be represented by a pose transformation matrix (R, t); the relationship between adjacent-frame poses and the adjacent-frame pose transformation is:

T_t = T_r T_{t−1}

wherein T denotes the homogeneous form of the transformation matrix (R, t), T_t is the transformation matrix at time t, T_r is the relative pose transformation, and T_{t−1} is the pose at the previous moment, from which the current pose is obtained through the relative pose transformation.
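A minimal sketch of this incremental accumulation on SE(2); the trajectory starts from the origin pose (0, 0, 0), as the following paragraph states, and each relative transformation is applied with T_t = T_r T_{t−1}.

```python
import numpy as np

def to_matrix(x, y, theta):
    """Homogeneous SE(2) form of the pose (x, y, theta)."""
    return np.array([[np.cos(theta), -np.sin(theta), x],
                     [np.sin(theta),  np.cos(theta), y],
                     [0.0,            0.0,           1.0]])

def accumulate_poses(relative_poses):
    """relative_poses: list of (dx, dy, dtheta) between consecutive frames."""
    T = to_matrix(0.0, 0.0, 0.0)                 # first frame at the origin
    trajectory = [(0.0, 0.0, 0.0)]
    for dx, dy, dtheta in relative_poses:
        T = to_matrix(dx, dy, dtheta) @ T        # T_t = T_r @ T_{t-1}
        trajectory.append((T[0, 2], T[1, 2], float(np.arctan2(T[1, 0], T[0, 0]))))
    return trajectory
```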
The robot pose of the first frame is set to (0, 0, 0), so the current robot pose can be obtained step by step through incremental multiplication. To simplify the notation, let the relationship between the robot pose x and the Lie-algebra form of the transformation matrix be

x = g(ξ), ξ = φ(x)

where g(ξ) is the Lie-algebra relation function of the robot pose x, and φ is the corresponding mapping from the pose back to its Lie-algebra form. The difference function can then be written as

e = f(φ(x))

which is abbreviated below to the more intuitive form f(x). After the joint difference function is obtained, it is extended to multiple frames; the objective function is written in least-squares form and expanded to first order (Taylor expansion):

F(x + Δx) ≈ Σ_{t=1..K} ‖f(x_t) + J_t Δx_t‖²

wherein K represents the number of frames, i.e. the number of poses to be optimized, and J represents the derivative with respect to the variable; Δx is the stacked increment, Δx_t is the increment of the pose at time t, and J_t is the derivative with respect to the pose at time t.
The above formula is solved by a nonlinear optimization method to obtain the optimal robot poses. Because the accuracy of ultrasound is higher than that of vision, the result of ultrasonic correlation scan matching is usually better than the visual pose estimate, so the pose estimate obtained by ultrasonic correlation scan matching is chosen as the initial value of the optimization.
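A hedged sketch of this back-end step: all poses are stacked into one vector, a user-supplied residual function returns every visual and ultrasonic residual, and the ultrasonic scan-matching poses serve as the initial value. The use of SciPy's least_squares (trust-region reflective by default; Levenberg-Marquardt is another option) is an implementation assumption, not part of the patented method.

```python
import numpy as np
from scipy.optimize import least_squares

def optimize_poses(initial_poses, residual_fn):
    """initial_poses: K x 3 array of (x, y, theta) from ultrasonic scan matching.
    residual_fn: callable mapping the flattened pose vector to the stacked
    vector of visual and ultrasonic residuals over all K frames."""
    x0 = np.asarray(initial_poses, dtype=float).ravel()
    result = least_squares(residual_fn, x0)      # nonlinear least-squares solve
    return result.x.reshape(-1, 3)               # optimized (x, y, theta) per frame
```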
As can be seen from fig. 4, the local positioning accuracy is better after the ultrasound constraint is added, and the end point positioning error is less than 0.3%.
The invention has the beneficial effects that:
the invention provides a method for detecting and identifying a scene which passes by adopting an ORB visual model, wherein after a closed loop is identified, the relative position relation between a current frame and a closed loop frame is added into a graph optimization frame to optimize the pose of a global robot so as to reduce an accumulated difference value, and then a map is updated by utilizing a series of optimized poses of the robot and ultrasonic data, so that the closed loop of a grid map is realized;
in order to ensure the positioning precision, an ultrasonic-visual combined difference function is constructed and optimized, an ultrasonic correlation scanning matching result is used as an initial optimization value, a new difference function is constructed by the phenomenon that an ultrasonic point and a visual characteristic point acquired by a previous frame cannot be completely overlapped with the same point acquired by a next frame after adjacent frame posture transformation is carried out, the posture is continuously corrected by using a method for searching the gradient descending direction of the difference function in nonlinear optimization to enable the difference to be converged, so that the optimal robot posture is found, the local positioning precision is better after ultrasonic constraint is added, and the end point positioning error is less than 0.3%.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (6)

1. A map generation method based on robot perception fusion is characterized by comprising the following steps:
s1, collecting monitoring results of the ultrasonic radar and the visible light camera to obtain ultrasonic data and image data;
s2, constructing a key frame of the ultrasonic data, and obtaining an ultrasonic difference value according to adjacent frames;
S3, carrying out ORB (Oriented FAST and Rotated BRIEF) feature extraction on the image data, and obtaining a reprojection difference value according to the extracted ORB features;
s4, converting the ORB characteristics into words, carrying out closed-loop detection by using the words, and calculating by using the detected closed-loop frame and the current frame to obtain the pose of the robot;
s5, obtaining a combined difference function of adjacent frames by utilizing the ultrasonic difference and the reprojection difference;
s6, adding the robot pose as a vertex and the adjacent frame combined difference function as an edge into a graph optimization frame to obtain an optimized robot pose;
and S7, obtaining a closed-loop grid map by using the optimized pose of the robot and the ultrasonic data.
2. The method for generating a map based on robot perception fusion according to claim 1, wherein in S2, the keyframe construction rule of the ultrasound data is:
the current-time key frame has undergone translation and rotation that satisfy a preset amount compared to the previous-time key frame, and the current-time key frame includes more than 48 visual feature points.
3. The map generation method based on robot perception fusion according to claim 1, wherein the synchronization rule of the ultrasound data and the image data is:
and taking the data of which the difference between the sampling time stamps of the ultrasonic radar and the visible light camera is less than a specific threshold value as a group of data at the same moment.
4. The method for generating a map based on robot perception fusion according to claim 1, wherein in S3, the reprojection difference value is:
e_v = P_v2 − P̂_v2 = P_v2 − (RP_v1 + t)

wherein P_v2 represents the three-dimensional feature point of the current frame, P_v1 represents the matched three-dimensional feature point in the adjacent frame, R is the pose rotation matrix of the visible-light camera, and t is its pose translation vector.
5. The method for generating a map based on robot perception fusion according to claim 1, wherein the ultrasound difference value is:
e_1 = f(x) = P_12 − P'_12 = P_12 − (RP_11 + t)

wherein P_12 is the ultrasonic point coordinate of the current frame, P_11 is the ultrasonic point coordinate of the adjacent frame at the previous moment, P'_12 is the reprojected point corresponding to P_12, f(x) is the ultrasonic difference function, R is the pose rotation matrix of the visible-light camera, and t is its pose translation vector.
6. The method of claim 1, wherein the adjacent frame joint difference function is:
F(ξ) = α·Σ_{i=1..m} ‖e_v,i‖² + (1 − α)·Σ_{j=1..n} ‖e_1,j‖²

RP + t = exp(ξ^)P

wherein m represents the number of ORB feature points matched between the current frame and the adjacent frame, n represents the number of ultrasonic points matched between the current frame and the adjacent frame, α represents the weight, e_v,i is the reprojection difference at the i-th matched feature point P_i, e_1,j is the ultrasonic difference at the j-th matched ultrasonic point P_j, ξ is the Lie-algebra representation of the camera pose, P is a three-dimensional space point, exp(ξ^) is the exponential map of ξ^, and ^ lifts the Lie-algebra vector to its matrix form.
CN202011075455.1A 2020-10-09 2020-10-09 Map generation method based on robot perception fusion Pending CN112631271A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011075455.1A CN112631271A (en) 2020-10-09 2020-10-09 Map generation method based on robot perception fusion

Publications (1)

Publication Number Publication Date
CN112631271A true CN112631271A (en) 2021-04-09

Family

ID=75302893

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011075455.1A Pending CN112631271A (en) 2020-10-09 2020-10-09 Map generation method based on robot perception fusion

Country Status (1)

Country Link
CN (1) CN112631271A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB key frame closed-loop detection SLAM method capable of improving consistency of position and pose of robot
CN110223348A (en) * 2019-02-25 2019-09-10 湖南大学 Robot scene adaptive bit orientation estimation method based on RGB-D camera
CN110060277A (en) * 2019-04-30 2019-07-26 哈尔滨理工大学 A kind of vision SLAM method of multiple features fusion
CN111275763A (en) * 2020-01-20 2020-06-12 深圳市普渡科技有限公司 Closed loop detection system, multi-sensor fusion SLAM system and robot
CN111563442A (en) * 2020-04-29 2020-08-21 上海交通大学 Slam method and system for fusing point cloud and camera image data based on laser radar


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210409)