CN109325979B - Robot loop detection method based on deep learning - Google Patents

Robot loop detection method based on deep learning

Info

Publication number
CN109325979B
Authority
CN
China
Prior art keywords
picture
objects
category
frame
pictures
Prior art date
Legal status
Active
Application number
CN201810804671.1A
Other languages
Chinese (zh)
Other versions
CN109325979A (en)
Inventor
魏国亮
罗顺心
严龙
宋天中
耿双乐
Current Assignee
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology
Priority to CN201810804671.1A
Publication of CN109325979A
Application granted
Publication of CN109325979B
Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot loop detection method based on deep learning. A frame of picture is acquired through a depth camera and used as the input of a convolutional neural network; the category information, the position information and the per-category quantity information of the objects are obtained at the output of the network. A dictionary model composed of common objects is established, the objects in the picture are obtained using the objects in the dictionary model, and the picture is described and stored with category, position and total-number feature vectors. The object categories appearing in two pictures are used as the feature for judging whether the two pictures are the same; meanwhile, the position information and the quantity information serve as auxiliary features, and a function judging the degree of similarity of the two pictures is constructed; loop detection is carried out according to this function. The method realizes the loop detection function by means of deep learning, reduces the pose drift error, achieves accurate positioning and mapping, greatly reduces the computation load, and offers better real-time performance.

Description

Robot loop detection method based on deep learning
Technical Field
The invention relates to a robot loop detection method based on deep learning.
Background
With the rise of the robot industry, simultaneous localization and mapping (SLAM) has become more and more important in robotics. In recent years, owing to the development of depth cameras, SLAM has made significant breakthroughs and has gradually shifted from traditional lidar SLAM and inertial-sensor SLAM to visual SLAM. Visual SLAM mainly addresses the positioning of a camera in space as well as the creation of an environment map. It can be found in several currently popular industries: in VR/AR, a map is obtained by visual SLAM and the superimposed virtual object is rendered according to the current viewing angle, so that it looks realistic and causes no sense of incongruity; in the field of unmanned aerial vehicles, a local map can be constructed with visual SLAM to assist the vehicle in autonomous obstacle avoidance and path planning; in unmanned driving, visual SLAM technology can provide a visual odometry function that is then fused with other positioning modalities; in mobile robot positioning and navigation, visual SLAM can generate an environment map on the basis of which the mobile robot performs tasks such as path planning, autonomous exploration and navigation.
Loop detection solves the problem that the pose drifts over time during positioning and mapping. A common method is the bag-of-words model, an abstract, unsupervised learning method with a large computation load. As time goes on, the accumulated error of the robot pose grows larger and larger, which lowers the accuracy of the robot's positioning and mapping and seriously affects the accuracy of its autonomous navigation.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a robot loop detection method based on deep learning which overcomes the shortcomings of loop detection with the traditional bag-of-words model. The method realizes the loop detection function by means of deep learning, reduces the pose drift error, achieves accurate positioning and mapping, makes the robot more accurate in autonomous navigation, greatly reduces the computation load, and offers better real-time performance.
In order to solve the technical problem, the robot loop detection method based on deep learning comprises the following steps:
Step one, the robot acquires a frame of picture through a depth camera during its movement; the picture is used as the input of the convolutional neural network of a deep learning target detection algorithm, and a picture containing the category information, the position information and the quantity information of each category of objects is obtained at the output of the convolutional neural network;
Step two, a dictionary model composed of common objects is established, the objects in the dictionary model including the objects in the picture obtained at the output of the convolutional neural network, and the picture is described and stored with category, position and total-number feature vectors, wherein the total-number feature vector is the total number of objects in the picture, namely the sum of the category feature vector; the category feature vector is the number of objects of each kind in the picture; and the position feature vector is composed of the pixel coordinates of the diagonal vertices of all object bounding boxes in the picture;
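By way of illustration only (not part of the patent text), the feature vectors of step two can be sketched in a few lines of Python; the dictionary contents and the detection tuple format are assumptions:

```python
# Hypothetical dictionary model of common indoor object categories.
DICTIONARY = ["cell phone", "mouse", "chair", "keyboard", "monitor", "table"]
N_CATEGORIES = len(DICTIONARY)

def describe_picture(detections):
    """Build the category, position and total-number feature vectors of one
    frame. Each detection is assumed to be a tuple
    (category_index, x_bl, y_bl, x_tr, y_tr): the dictionary index of the
    object plus the lower-left and upper-right corners of its bounding box."""
    category = [0] * N_CATEGORIES                     # objects per category
    position = {i: [] for i in range(N_CATEGORIES)}   # box corners per category
    for cat, x_bl, y_bl, x_tr, y_tr in detections:
        category[cat] += 1
        position[cat].append((x_bl, y_bl, x_tr, y_tr))
    total = sum(category)            # total number = sum of category vector
    return category, position, total
```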
Step three, the total-number feature vectors of the objects in the current frame and in a historical frame are compared and judged; if they are not equal, the current frame is compared with the next historical frame to judge whether their total-number feature vectors are equal; if they are equal, formula (1) is used to judge whether the difference between the category feature vector of the current frame and the category feature vector of the historical frame is zero, namely

f = \sum_{i=1}^{n} |C1_i - C2_i| \qquad (1)

In formula (1): C1 is the category feature vector of the current frame, C2 is the category feature vector of the historical frame, C1_i is the i-th value of the current-frame category feature vector, C2_i is the i-th value of the historical-frame category feature vector, each representing the number of objects of the i-th category, and n is the set number of object categories; f is the result of judging whether the difference between the two category feature vectors is zero; if f is not zero, the current frame is compared with the next historical frame and this step is executed again;
Step four, when f in formula (1) is zero, formula (2) is used to calculate the degree of similarity of the two frames of pictures:

P = \frac{\sum_{i=1}^{n} \sum_{j} S1_{ij}}{\sum_{i=1}^{n} \sum_{j} S2_{ij}}, \qquad S_{ij} = (x_2^{ij} - x_1^{ij})(y_2^{ij} - y_1^{ij}) \qquad (2)

In formula (2): P is the ratio of the sums of the pixel areas of all the objects in the two frames of pictures; S_{ij} is the pixel area of the j-th object of the i-th category; (x_2^{ij}, y_2^{ij}) is the upper-right coordinate and (x_1^{ij}, y_1^{ij}) is the lower-left coordinate of the bounding box of the j-th object of the i-th category; in S1_{ij} the 1 denotes the current frame, so S1_{ij} is the pixel area occupied by the j-th object of the i-th category in the current frame, and similarly S2_{ij} is the pixel area occupied by the j-th object of the i-th category in the historical frame.

If the value of P is larger than 1, its reciprocal is taken; if the value of P is smaller than or equal to 1, P is kept unchanged. If the two frames of pictures are the same, P is a value close to 1; if the summed pixel areas of the objects in the two frames of pictures differ, P is a value smaller than 1;
Step five, P is compared with a similarity threshold; if P is equal to or larger than the similarity threshold, the two frames of pictures are judged to be similar and the robot is judged to have produced a loop; otherwise, the two frames of pictures are judged not to be similar and the robot is judged not to have produced a loop. The similarity threshold is a constant set from practical experience in judging similarity.
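Steps four and five can be sketched as follows (illustrative only; the position dictionaries come from the describe_picture sketch above, and the threshold value 0.9 is an assumed constant, since the patent only says it is set from practical experience):

```python
def box_area(box):
    """Pixel area of a bounding box (x_bl, y_bl, x_tr, y_tr), per formula (2)."""
    x_bl, y_bl, x_tr, y_tr = box
    return (x_tr - x_bl) * (y_tr - y_bl)

def similarity(position1, position2):
    """Formula (2): ratio P of the summed object pixel areas of the current
    frame (1) and the historical frame (2), folded into (0, 1] by taking
    the reciprocal when P > 1."""
    s1 = sum(box_area(b) for boxes in position1.values() for b in boxes)
    s2 = sum(box_area(b) for boxes in position2.values() for b in boxes)
    if s1 == 0 or s2 == 0:
        return 0.0                 # no detected objects: treat as dissimilar
    p = s1 / s2
    return 1.0 / p if p > 1.0 else p

SIMILARITY_THRESHOLD = 0.9         # assumed value, set from practical experience

def is_loop(position1, position2):
    """Step five: a loop is declared when P reaches the threshold."""
    return similarity(position1, position2) >= SIMILARITY_THRESHOLD
```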
Further, the dictionary model is used to train the convolutional neural network of the deep learning target detection algorithm, and the acquired picture is input to the convolutional neural network to obtain the category information and the position information of the objects in the picture as well as the quantity information of the objects.
By adopting the above technical scheme, the robot loop detection method based on deep learning acquires a frame of picture through a depth camera as the input of a convolutional neural network and obtains the category information, the position information and the per-category quantity information of the objects at the output of the network; a dictionary model composed of common objects is established, the objects in the picture are obtained at the output of the convolutional neural network from the objects in the dictionary model, and the picture is described and stored with category, position and total-number feature vectors. The object categories appearing in two pictures are used as the feature for judging whether the two pictures are the same; meanwhile, the position information and the quantity information of the objects in the pictures serve as auxiliary features, and a function judging the degree of similarity between the picture and the previously stored key-frame pictures is constructed. When the function value is larger than a preset value, the robot is considered to have returned to a previous position; otherwise no loop is detected. The method overcomes the shortcomings of loop detection with the traditional bag-of-words model, realizes the loop detection function by means of deep learning, reduces the pose drift error, achieves accurate positioning and mapping, makes the robot more accurate in autonomous navigation, greatly reduces the computation load, and offers better real-time performance.
Drawings
The invention is described in further detail below with reference to the following figures and embodiments:
fig. 1 is a schematic block diagram of a robot loop detection method based on deep learning according to the present invention.
Detailed Description
Embodiment: as shown in Fig. 1, the robot loop detection method based on deep learning of the present invention includes the following steps:
Step one, the robot acquires a frame of picture through a depth camera during its movement; the picture is used as the input of the convolutional neural network of a deep learning target detection algorithm, and a picture containing the category information, the position information and the quantity information of each category of objects is obtained at the output of the convolutional neural network;
Step two, a dictionary model composed of common objects is established, the objects in the dictionary model including the objects in the picture obtained at the output of the convolutional neural network, and the picture is described and stored with category, position and total-number feature vectors, wherein the total-number feature vector is the total number of objects in the picture, namely the sum of the category feature vector; the category feature vector is the number of objects of each kind in the picture; and the position feature vector is composed of the pixel coordinates of the diagonal vertices of all object bounding boxes in the picture. The pictures used at the output of the convolutional neural network follow the COCO data set model, whose annotation information contains not only category and position information but also semantic text descriptions of the images; the open-sourcing of the COCO data set has enabled great progress in semantic understanding of image segmentation, and COCO has almost become the standard data set for evaluating the performance of image semantic understanding algorithms. By applying the COCO data set model, the method can accurately obtain the category, position and total-number feature vectors of the objects in the picture;
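As an aside (assumed names and mapping, not specified by the patent), restricting the detector's COCO-style output to the dictionary model could look like this:

```python
# Hypothetical mapping from COCO class names to dictionary-model indices.
COCO_TO_DICTIONARY = {
    "cell phone": 0, "mouse": 1, "chair": 2,
    "keyboard": 3, "tv": 4, "dining table": 5,
}

def filter_detections(raw_detections):
    """Keep only detections whose COCO class name appears in the dictionary
    model, remapping them to dictionary indices. Each raw detection is
    assumed to be (coco_name, x_bl, y_bl, x_tr, y_tr)."""
    return [(COCO_TO_DICTIONARY[name], *box)
            for name, *box in raw_detections
            if name in COCO_TO_DICTIONARY]
```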
Step three, the total-number feature vectors of the objects in the current frame and in a historical frame are compared and judged; if they are not equal, the current frame is compared with the next historical frame to judge whether their total-number feature vectors are equal; if they are equal, formula (1) is used to judge whether the difference between the category feature vector of the current frame and the category feature vector of the historical frame is zero, namely

f = \sum_{i=1}^{n} |C1_i - C2_i| \qquad (1)

In formula (1): C1 is the category feature vector of the current frame, C2 is the category feature vector of the historical frame, C1_i is the i-th value of the current-frame category feature vector, C2_i is the i-th value of the historical-frame category feature vector, each representing the number of objects of the i-th category, and n is the set number of object categories; f is the result of judging whether the difference between the two category feature vectors is zero; if f is not zero, the current frame is compared with the next historical frame and this step is executed again;
Step four, when f in formula (1) is zero, formula (2) is used to calculate the degree of similarity of the two frames of pictures:

P = \frac{\sum_{i=1}^{n} \sum_{j} S1_{ij}}{\sum_{i=1}^{n} \sum_{j} S2_{ij}}, \qquad S_{ij} = (x_2^{ij} - x_1^{ij})(y_2^{ij} - y_1^{ij}) \qquad (2)

In formula (2): P is the ratio of the sums of the pixel areas of all the objects in the two frames of pictures; S_{ij} is the pixel area of the j-th object of the i-th category; (x_2^{ij}, y_2^{ij}) is the upper-right coordinate and (x_1^{ij}, y_1^{ij}) is the lower-left coordinate of the bounding box of the j-th object of the i-th category; in S1_{ij} the 1 denotes the current frame, so S1_{ij} is the pixel area occupied by the j-th object of the i-th category in the current frame, and similarly S2_{ij} is the pixel area occupied by the j-th object of the i-th category in the historical frame.

If the value of P is larger than 1, its reciprocal is taken; if the value of P is smaller than or equal to 1, P is kept unchanged. If the two frames of pictures are the same, P is a value close to 1; if the summed pixel areas of the objects in the two frames of pictures differ, P is a value smaller than 1;
Step five, P is compared with a similarity threshold; if P is equal to or larger than the similarity threshold, the two frames of pictures are judged to be similar and the robot is judged to have produced a loop; otherwise, the two frames of pictures are judged not to be similar and the robot is judged not to have produced a loop. The similarity threshold is a constant set from practical experience in judging similarity.
Preferably, the dictionary model is used to train the convolutional neural network of the deep learning target detection algorithm, and the acquired picture is input to the convolutional neural network to obtain the category information and the position information of the objects in the picture as well as the quantity information of the objects.
For loop detection of the robot in visual SLAM, judging whether the robot has returned to a previous position amounts to judging whether two identical pictures appear among the acquired key frames. The object categories appearing in the two pictures are used as the feature for judging whether the two pictures are identical; meanwhile, the position information and the quantity information of the objects in the pictures serve as auxiliary features. Feature vectors for judging the degree of similarity between the picture and the previously stored key-frame pictures are constructed, and these feature vectors are then compared to judge whether the robot has produced a loop.
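Putting the pieces together, the comparison against the stored key-frame history might be driven by a loop like the following sketch, which reuses the illustrative helpers defined above (describe_picture, category_difference, is_loop):

```python
def detect_loop(current, history):
    """Compare the current frame against every stored key frame.
    `current` and each entry of `history` are (category, position, total)
    tuples as returned by describe_picture()."""
    c1, pos1, total1 = current
    for c2, pos2, total2 in history:
        if total1 != total2:                  # step three: totals must match
            continue
        if category_difference(c1, c2) != 0:  # formula (1)
            continue
        if is_loop(pos1, pos2):               # formula (2) and threshold
            return True
    return False
```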
The method uses deep learning to detect the positions, categories and quantities of the objects in the picture, based on the deep learning target detection algorithm SSD (Single Shot MultiBox Detector). The SSD input is a picture; image information at different scales is obtained on the feature map of each layer of the SSD convolutional neural network, the offsets of the default bounding boxes of an object and the scores of the object categories are predicted on the feature map of each scale, and a series of object confidence scores and bounding boxes containing the objects is obtained. Since the same object may be contained by several bounding boxes, a non-maximum suppression algorithm is adopted to obtain the best result.
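For illustration, a generic non-maximum suppression routine of the kind referred to above might look as follows (a textbook sketch; the IoU threshold of 0.5 is an assumption, not a value given in the patent):

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x_bl, y_bl, x_tr, y_tr)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop boxes that overlap it too much,
    and repeat until no candidates remain; returns the kept indices."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order
                 if iou(boxes[best], boxes[i]) < iou_threshold]
    return keep
```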
In practical application, the result obtained by inputting a picture into the SSD is equivalent to describing the picture by looking up the objects in the dictionary model; that is, the picture is described by the object categories, positions and total quantity, and these features make it convenient to calculate the similarity between pictures.
For example, when the robot operates indoors, a dictionary model of common indoor objects is established, including objects such as a mobile phone, a mouse, a chair, a keyboard, a monitor and a table. Because the task is not only a classification problem but also requires the position of each object, it is also solved as a regression problem in the SSD, and the output result is obtained by training to minimize the value of the loss function. Since many bounding boxes in the output surround the same object, a non-maximum suppression method is adopted to select the best result. Therefore, when a picture is input to the SSD convolutional neural network, the object categories contained in the picture as well as their positions are obtained; once the categories, positions and quantities of the objects are obtained, they are stored in the form of feature vectors, where the total number of objects is in fact the sum of the category feature vector.
In visual SLAM, when the pose of the robot is calculated by the front-end visual odometry, it is computed frame to frame: only the pose relation of two adjacent key frames is considered and the constraints of historical frames are not taken into account, so the accumulated error of the robot pose grows larger and larger over time. The method uses deep learning to realize the loop detection function, which can reduce the error of the robot's pose drifting over time, makes the robot's positioning and mapping more accurate, and makes the robot more accurate in autonomous navigation. Unlike the bag-of-words model, the method adopts a supervised learning mode and rises from the level of abstract feature points to the level of understandable objects, so that the robot can recognize whether scenes are the same in the way a human does. Meanwhile, the description in the form of object feature vectors greatly reduces the computation load and gives better real-time performance.

Claims (2)

1. A robot loop detection method based on deep learning is characterized by comprising the following steps:
Step one, the robot acquires a frame of picture through a depth camera during its movement; the picture is used as the input of the convolutional neural network of a deep learning target detection algorithm, and a picture containing the category information, the position information and the quantity information of each category of objects is obtained at the output of the convolutional neural network;
Step two, a dictionary model composed of common objects is established, the objects in the dictionary model including the objects in the picture obtained at the output of the convolutional neural network, and the picture is described and stored with category, position and total-number feature vectors, wherein the total-number feature vector is the total number of objects in the picture, namely the sum of the category feature vector; the category feature vector is the number of objects of each kind in the picture; and the position feature vector is composed of the pixel coordinates of the diagonal vertices of all object bounding boxes in the picture;
Step three, the total-number feature vectors of the objects in the current frame and in a historical frame are compared and judged; if they are not equal, the current frame is compared with the next historical frame to judge whether their total-number feature vectors are equal; if they are equal, formula (1) is used to judge whether the difference between the category feature vector of the current frame and the category feature vector of the historical frame is zero, namely

f = \sum_{i=1}^{n} |C1_i - C2_i| \qquad (1)

In formula (1): C1 is the category feature vector of the current frame, C2 is the category feature vector of the historical frame, C1_i is the i-th value of the current-frame category feature vector, C2_i is the i-th value of the historical-frame category feature vector, each representing the number of objects of the i-th category, and n is the set number of object categories; f is the result of judging whether the difference between the two category feature vectors is zero; if f is not zero, the current frame is compared with the next historical frame and this step is executed again;
Step four, when f in formula (1) is zero, formula (2) is used to calculate the degree of similarity of the two frames of pictures:

P = \frac{\sum_{i=1}^{n} \sum_{j} S1_{ij}}{\sum_{i=1}^{n} \sum_{j} S2_{ij}}, \qquad S_{ij} = (x_2^{ij} - x_1^{ij})(y_2^{ij} - y_1^{ij}) \qquad (2)

In formula (2): P is the ratio of the sums of the pixel areas of all the objects in the two frames of pictures; S_{ij} is the pixel area of the j-th object of the i-th category; (x_2^{ij}, y_2^{ij}) is the upper-right coordinate and (x_1^{ij}, y_1^{ij}) is the lower-left coordinate of the bounding box of the j-th object of the i-th category; in S1_{ij} the 1 denotes the current frame, so S1_{ij} is the pixel area occupied by the j-th object of the i-th category in the current frame, and similarly S2_{ij} is the pixel area occupied by the j-th object of the i-th category in the historical frame.

If the value of P is larger than 1, its reciprocal is taken; if the value of P is smaller than or equal to 1, P is kept unchanged. If the two frames of pictures are the same, P is a value close to 1; if the summed pixel areas of the objects in the two frames of pictures differ, P is a value smaller than 1;
Step five, P is compared with a similarity threshold; if P is equal to or larger than the similarity threshold, the two frames of pictures are judged to be similar and the robot is judged to have produced a loop; otherwise, the two frames of pictures are judged not to be similar and the robot is judged not to have produced a loop. The similarity threshold is a constant set from practical experience in judging similarity.
2. The robot loop detection method based on deep learning according to claim 1, characterized in that the dictionary model is used to train the convolutional neural network of the deep learning target detection algorithm, and the acquired picture is input to the convolutional neural network to obtain the category information and the position information of the objects in the picture as well as the quantity information of the objects.
CN201810804671.1A 2018-07-20 2018-07-20 Robot loop detection method based on deep learning Active CN109325979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810804671.1A CN109325979B (en) 2018-07-20 2018-07-20 Robot loop detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810804671.1A CN109325979B (en) 2018-07-20 2018-07-20 Robot loop detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN109325979A CN109325979A (en) 2019-02-12
CN109325979B (en) 2021-11-02

Family

ID=65264079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810804671.1A Active CN109325979B (en) 2018-07-20 2018-07-20 Robot loop detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN109325979B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871803B (en) * 2019-02-18 2020-12-08 清华大学 Robot loop detection method and device
CN110069995A (en) * 2019-03-16 2019-07-30 浙江师范大学 A kind of service plate moving state identification method based on deep learning
CN110135377B (en) * 2019-05-21 2022-10-14 北京百度网讯科技有限公司 Method and device for detecting motion state of object in vehicle-road cooperation and server
CN110880010A (en) * 2019-07-05 2020-03-13 电子科技大学 Visual SLAM closed loop detection algorithm based on convolutional neural network
CN111401123B (en) * 2019-12-29 2024-04-19 的卢技术有限公司 SLAM loop detection method and system based on deep learning
CN111860297A (en) * 2020-07-17 2020-10-30 厦门理工学院 SLAM loop detection method applied to indoor fixed space
CN113377987B (en) * 2021-05-11 2023-03-28 重庆邮电大学 Multi-module closed-loop detection method based on ResNeSt-APW
CN115200588B (en) * 2022-09-14 2023-01-06 煤炭科学研究总院有限公司 SLAM autonomous navigation method and device for mobile robot

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330357A (en) * 2017-05-18 2017-11-07 东北大学 Vision SLAM closed loop detection methods based on deep neural network
CN107403163A (en) * 2017-07-31 2017-11-28 武汉大学 A kind of laser SLAM closed loop detection methods based on deep learning
CN108133496A (en) * 2017-12-22 2018-06-08 北京工业大学 A kind of dense map creating method based on g2o Yu random fern
CN108108764A (en) * 2017-12-26 2018-06-01 东南大学 A kind of vision SLAM winding detection methods based on random forest

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Lightweight Unsupervised Deep Loop Closure; Nate Merrill et al.; arXiv:1805.07703v2 [cs.RO]; 2018-05-24; pp. 1-10 *
Fast loop closure detection method based on a compact convolutional neural network (基于精简卷积神经网络的快速闭环检测方法); 何元烈 et al.; Computer Engineering (《计算机工程》); 2018-06-30; Vol. 44, No. 06; pp. 182-187 *

Also Published As

Publication number Publication date
CN109325979A (en) 2019-02-12

Similar Documents

Publication Publication Date Title
CN109325979B (en) Robot loop detection method based on deep learning
CN112258618B (en) Semantic mapping and positioning method based on fusion of prior laser point cloud and depth map
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN109682381B (en) Omnidirectional vision based large-view-field scene perception method, system, medium and equipment
CN112859859B (en) Dynamic grid map updating method based on three-dimensional obstacle object pixel object mapping
Huang et al. Visual odometry and mapping for autonomous flight using an RGB-D camera
CN105843223B (en) A kind of mobile robot three-dimensional based on space bag of words builds figure and barrier-avoiding method
CN107160395B (en) Map construction method and robot control system
US20220277515A1 (en) Structure modelling
CN110717927A (en) Indoor robot motion estimation method based on deep learning and visual inertial fusion
CN110866927A (en) Robot positioning and composition method based on EKF-SLAM algorithm combined with dotted line characteristics of foot
Otsu et al. Where to look? Predictive perception with applications to planetary exploration
CN115388902B (en) Indoor positioning method and system, AR indoor positioning navigation method and system
CN110260866A (en) A kind of robot localization and barrier-avoiding method of view-based access control model sensor
CN112750161B (en) Map updating method for mobile robot
Pütz et al. Continuous shortest path vector field navigation on 3d triangular meshes for mobile robots
Bavle et al. Stereo visual odometry and semantics based localization of aerial robots in indoor environments
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
Liu et al. Hybrid metric-feature mapping based on camera and Lidar sensor fusion
CN111198563B (en) Terrain identification method and system for dynamic motion of foot type robot
CN117367427A (en) Multi-mode slam method applicable to vision-assisted laser fusion IMU in indoor environment
CN117152228A (en) Self-supervision image depth estimation method based on channel self-attention mechanism
Giordano et al. 3D structure identification from image moments
Nandkumar et al. Simulation of Indoor Localization and Navigation of Turtlebot 3 using Real Time Object Detection
Muravyev et al. Evaluation of topological mapping methods in indoor environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant