CN109325979A - Robot winding detection method based on deep learning - Google Patents

Robot winding detection method based on deep learning

Info

Publication number
CN109325979A
CN109325979A CN201810804671.1A
Authority
CN
China
Prior art keywords
frame
picture
feature vector
robot
convolutional neural
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810804671.1A
Other languages
Chinese (zh)
Other versions
CN109325979B (en)
Inventor
魏国亮
罗顺心
严龙
宋天中
耿双乐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201810804671.1A priority Critical patent/CN109325979B/en
Publication of CN109325979A publication Critical patent/CN109325979A/en
Application granted granted Critical
Publication of CN109325979B publication Critical patent/CN109325979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a robot winding (loop-closure) detection method based on deep learning. The method acquires pictures frame by frame through a depth camera and uses them as the input of a convolutional neural network; the category information, position information and quantity information of the objects are obtained from the output end of the convolutional neural network. A dictionary model composed of common objects is established, where the objects in the dictionary model include the objects obtained from the pictures at the output end of the convolutional neural network, and each picture is described by category, position and total-quantity feature vectors and stored. The object categories appearing in two frames are used as the feature for judging whether the two pictures are identical, while the position information and the quantity information serve as auxiliary features, and a function for judging the degree of similarity of the two frames is constructed; winding detection is realized according to this function. The method realizes the winding detection function by means of deep learning, reduces pose drift errors, realizes precise positioning and mapping, greatly reduces the amount of computation, and performs better in terms of real-time operation.

Description

Robot winding detection method based on deep learning
Technical field
The present invention relates to a robot winding detection method based on deep learning.
Background technique
With the rise of the robot industry, simultaneous localization and mapping (SLAM) has become more and more important in robotics. In recent years, thanks to the development of depth cameras, SLAM has achieved great breakthroughs, gradually shifting from traditional laser-radar SLAM and inertial-sensor SLAM to visual SLAM. Visual SLAM mainly solves the localization of the camera in space and the creation of an environment map. It is currently popular in several industries: in VR/AR, a map and the current viewing angle can be obtained through visual SLAM, and virtual objects can be superimposed and rendered accordingly, so that the superimposed virtual objects look realistic without a sense of incongruity; in the field of unmanned aerial vehicles, visual SLAM can be used to build a local map that assists the drone in automatic obstacle avoidance and path planning; in autonomous driving, visual SLAM can provide a visual-odometry function that is then fused with other positioning methods; in mobile-robot localization and navigation, visual SLAM can be used to build an environment map, based on which the mobile robot performs tasks such as path planning, autonomous exploration and navigation.
Winding detection solves the problem of pose drift over time during localization and mapping. The common method is the bag-of-words model (bag of words), which is abstract, relies on unsupervised learning and requires a large amount of computation; moreover, as time passes, the accumulated pose error of the robot grows larger and larger, reducing the accuracy of robot localization and mapping and seriously affecting the accuracy of autonomous robot navigation.
Summary of the invention
The technical problem to be solved by the invention is to provide a robot winding detection method based on deep learning. The method overcomes the defects of traditional bag-of-words winding detection, realizes the winding detection function by means of deep learning, reduces pose drift errors, and realizes precise positioning and mapping, so that autonomous robot navigation is more accurate; at the same time it greatly reduces the amount of computation and performs better in terms of real-time operation.
In order to solve the above technical problem, the robot winding detection method based on deep learning of the present invention includes the following steps:
Step 1: while the robot is in motion, pictures are acquired frame by frame through a depth camera; each picture serves as the input of the convolutional neural network of a deep-learning target detection algorithm, and the category information, the position information and the quantity information of the objects contained in the picture are obtained from the output end of the convolutional neural network;
Step 2: a dictionary model composed of common objects is established, where the objects in the dictionary model include the objects obtained from the pictures at the output end of the convolutional neural network; the picture is described by category, position and total-quantity feature vectors and stored, where the total quantity of objects is the sum of the category feature vector, and the position feature vector is composed of the pixel coordinates of the bounding boxes of all objects in the picture;
Step 3: the object total-quantity feature vectors of the current frame and of a historical frame are compared for equality; if they are not equal, the current frame is compared with the next historical frame, again judging whether their object total-quantity feature vectors are equal. When they are equal, the category feature vectors of the current frame and of the historical frame are subtracted to judge whether the result is zero, i.e.

f = Σ_{i=1..n} |C1_i − C2_i| (1)

In formula (1): C1 is the category feature vector of the current frame and C2 is the category feature vector of the historical frame; C1_i denotes the i-th value of the category feature vector of the current frame, i.e. the quantity of objects of the i-th category, and C2_i denotes the i-th value of the category feature vector of the historical frame; n is the set number of object categories; f is the result of subtracting the two category feature vectors and judging whether it is zero. If f is not zero, the current frame is compared with the next historical frame and this step is re-executed;
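As a minimal sketch of the category comparison in step 3 (Python; the function and variable names are illustrative assumptions, not taken from the patent), formula (1) can be computed as:

```python
def category_diff(c1, c2):
    """Formula (1): sum of absolute differences of the two category
    feature vectors; the result is zero iff both frames contain exactly
    the same number of objects of every category."""
    assert len(c1) == len(c2)  # both vectors cover the same n categories
    return sum(abs(a - b) for a, b in zip(c1, c2))

# Two frames with identical per-category counts give f = 0,
# so the comparison proceeds to step 4.
f = category_diff([2, 1, 0, 3], [2, 1, 0, 3])
```

When f is non-zero the current frame is simply compared with the next historical frame, as the step describes.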
Step 4: when f in formula (1) is zero, the degree of similarity of the two frames is calculated using formula (2):

S_ij = (x2_ij − x1_ij)(y2_ij − y1_ij),  P = (Σ_i Σ_j S1_ij) / (Σ_i Σ_j S2_ij) (2)

In formula (2): P is the ratio of the sums of the pixel areas of all objects in the two frames; S_ij is the pixel area of the j-th object of the i-th category, where (x2_ij, y2_ij) is the upper-right coordinate and (x1_ij, y1_ij) the lower-left coordinate of the bounding box of the j-th object of the i-th category; the 1 in S1_ij indicates the current frame, so S1_ij denotes the pixel area occupied by the j-th object of the i-th category in the current frame, and similarly S2_ij denotes the pixel area occupied by the j-th object of the i-th category in the historical frame;
If the value of P is greater than 1, its reciprocal is taken; if the value of P is less than or equal to 1, it remains unchanged. When the two frames are identical, P is a value close to 1; when the total object pixel areas of the two frames differ, P is a value less than 1;
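The area-ratio similarity of step 4, folded into (0, 1] by taking the reciprocal when P exceeds 1, can be sketched as follows (Python; the helper names and box format (x_min, y_min, x_max, y_max) are illustrative assumptions):

```python
def box_area(box):
    """Pixel area of a bounding box given as (x_min, y_min, x_max, y_max)."""
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

def similarity_p(boxes_current, boxes_history):
    """Formula (2): ratio of the summed bounding-box pixel areas of the
    current frame and the historical frame; values above 1 are folded
    back by taking the reciprocal, so the result always lies in (0, 1]."""
    p = (sum(box_area(b) for b in boxes_current)
         / sum(box_area(b) for b in boxes_history))
    return 1.0 / p if p > 1.0 else p

# Identical frames yield P = 1; differing total areas yield P < 1.
print(similarity_p([(0, 0, 10, 10)], [(0, 0, 10, 10)]))  # 1.0
```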
Step 5: it is judged whether P ≥ the similarity threshold; if so, the two frames are similar and it is judged that the robot has produced a winding; otherwise the two frames are dissimilar and it is judged that the robot has not produced a winding, where the similarity threshold is a constant set from practical experience for judging similarity.
Further, the convolutional neural network of the deep-learning target detection algorithm is trained with the dictionary model; a series of acquired pictures is input to the convolutional neural network, and the category information, the position information and the quantity information of the objects in each picture are obtained.
Since the robot winding detection method based on deep learning of the present invention adopts the above technical scheme, the method acquires pictures frame by frame through a depth camera as the input of a convolutional neural network, obtains the category information, the position information and the quantity information of the objects from the output end of the convolutional neural network, and establishes a dictionary model composed of common objects, where the objects in the dictionary model include the objects obtained from the pictures at the output end of the convolutional neural network; each picture is described by category, position and total-quantity feature vectors and stored. The object categories appearing in two frames are used as the feature for judging whether the two pictures are identical, while the position information and the quantity information of the objects in the pictures serve as auxiliary features, and a function for judging the degree of similarity between the picture and the previously saved key-frame pictures is constructed. When the function value is greater than a preset value, the robot is considered to have returned to a previous position; otherwise no winding is detected. The method overcomes the defects of traditional bag-of-words winding detection, realizes the winding detection function by means of deep learning, reduces pose drift errors, and realizes precise positioning and mapping, so that autonomous robot navigation is more accurate; at the same time it greatly reduces the amount of computation and performs better in terms of real-time operation.
Detailed description of the invention
The present invention will be further described in detail below with reference to the accompanying drawings and embodiments:
Fig. 1 is a functional block diagram of the robot winding detection method based on deep learning of the present invention.
Specific embodiment
Embodiment: as shown in Fig. 1, the robot winding detection method based on deep learning of the present invention includes the following steps:
Step 1: while the robot is in motion, pictures are acquired frame by frame through a depth camera; each picture serves as the input of the convolutional neural network of a deep-learning target detection algorithm, and the category information, the position information and the quantity information of the objects contained in the picture are obtained from the output end of the convolutional neural network;
Step 2: a dictionary model composed of common objects is established, where the objects in the dictionary model include the objects obtained from the pictures at the output end of the convolutional neural network; the picture is described by category, position and total-quantity feature vectors and stored, where the total quantity of objects is the sum of the category feature vector, and the position feature vector is composed of the pixel coordinates of the bounding boxes of all objects in the picture. The pictures obtained at the output end of the convolutional neural network follow the COCO data-set model; for image annotation this data set provides not only category and position information but also semantic text descriptions of the images. The open-sourcing of the COCO data set has brought huge progress to image segmentation and semantic understanding, and it has almost become the standard data set for evaluating the performance of image semantic-understanding algorithms; by applying the COCO data-set model, the method can accurately obtain the category, position and total-quantity feature vectors of the objects in a picture;
Step 3: the object total-quantity feature vectors of the current frame and of a historical frame are compared for equality; if they are not equal, the current frame is compared with the next historical frame, again judging whether their object total-quantity feature vectors are equal. When they are equal, the category feature vectors of the current frame and of the historical frame are subtracted to judge whether the result is zero, i.e.

f = Σ_{i=1..n} |C1_i − C2_i| (1)

In formula (1): C1 is the category feature vector of the current frame and C2 is the category feature vector of the historical frame; C1_i denotes the i-th value of the category feature vector of the current frame, i.e. the quantity of objects of the i-th category, and C2_i denotes the i-th value of the category feature vector of the historical frame; n is the set number of object categories; f is the result of subtracting the two category feature vectors and judging whether it is zero. If f is not zero, the current frame is compared with the next historical frame and this step is re-executed;
Step 4: when f in formula (1) is zero, the degree of similarity of the two frames is calculated using formula (2):

S_ij = (x2_ij − x1_ij)(y2_ij − y1_ij),  P = (Σ_i Σ_j S1_ij) / (Σ_i Σ_j S2_ij) (2)

In formula (2): P is the ratio of the sums of the pixel areas of all objects in the two frames; S_ij is the pixel area of the j-th object of the i-th category, where (x2_ij, y2_ij) is the upper-right coordinate and (x1_ij, y1_ij) the lower-left coordinate of the bounding box of the j-th object of the i-th category; the 1 in S1_ij indicates the current frame, so S1_ij denotes the pixel area occupied by the j-th object of the i-th category in the current frame, and similarly S2_ij denotes the pixel area occupied by the j-th object of the i-th category in the historical frame;
If the value of P is greater than 1, its reciprocal is taken; if the value of P is less than or equal to 1, it remains unchanged. When the two frames are identical, P is a value close to 1; when the total object pixel areas of the two frames differ, P is a value less than 1;
Step 5: it is judged whether P ≥ the similarity threshold; if so, the two frames are similar and it is judged that the robot has produced a winding; otherwise the two frames are dissimilar and it is judged that the robot has not produced a winding, where the similarity threshold is a constant set from practical experience for judging similarity.
Preferably, the convolutional neural network of the deep-learning target detection algorithm is trained with the dictionary model; a series of acquired pictures is input to the convolutional neural network, and the category information, the position information and the quantity information of the objects in each picture are obtained.
For robot winding detection in visual SLAM, judging whether the robot has returned to a previous position requires judging whether two identical pictures appear among the key frames acquired by the robot. The object categories appearing in the two pictures are used as the feature for judging whether the two pictures are identical, while the position information and the quantity information of the objects in the pictures serve as auxiliary features; feature vectors for judging the degree of similarity between a picture and the previously stored key-frame pictures are constructed, and the feature vectors are then compared to judge whether the robot has produced a winding.
The method detects the position, category and quantity of objects in a picture using deep learning, realizing position, category and quantity detection of the objects based on a deep-learning target detection algorithm (SSD). SSD takes a picture as input and, through its convolutional neural network, obtains image information at different scales in the feature map of each layer; in the feature map of each scale it predicts the offsets of the default bounding boxes and the scores of the object categories, yielding a series of object confidence scores and bounding boxes containing objects. Since the same object may be contained by multiple bounding boxes, a non-maximum suppression algorithm is used to obtain the optimal result.
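The non-maximum suppression step mentioned above can be sketched as the standard greedy procedure (Python; `iou`, `nms` and the IoU threshold of 0.5 are illustrative assumptions, the patent does not specify an implementation):

```python
def iou(a, b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    discard every remaining box overlapping it above iou_thresh, repeat.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) <= iou_thresh]
    return keep
```

Applied to the raw SSD detections, this leaves one bounding box per object, which is what the dictionary description below consumes.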
A dictionary model composed of common objects is established, where the objects in the dictionary model include the objects obtained from the pictures at the output end of the convolutional neural network; each picture is described by category, position and total-quantity feature vectors and stored. In practical application, feeding a picture to SSD and taking its result is equivalent to describing the picture through the objects of the dictionary model, i.e. describing the picture by object category, position and total quantity, which makes it convenient to calculate the degree of similarity of pictures from these features.
For example, when the robot operates indoors, a dictionary model of common indoor objects is established, containing objects such as mobile phones, mice, chairs, keyboards, monitors and desks. Since the convolutional neural network not only handles a classification problem but also needs to obtain the positions of the objects, SSD also treats this as a regression problem, solved in training by minimizing the value of a loss function. Because many bounding boxes surrounding the same object appear in the output, the method of non-maximum suppression is used to select the optimal result. Therefore, once a picture is input to the SSD convolutional neural network, the object categories contained in the picture can be obtained together with their positions; once the categories, positions and quantities of the objects have been obtained, they are saved in the form of feature vectors, where the total quantity of objects is in fact the sum of the category feature vector.
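Building the category, position and total-quantity description of a frame from the detections can be sketched as follows (Python; `DICTIONARY`, `describe_frame` and the detection format are illustrative assumptions matching the indoor example in the text):

```python
from collections import Counter

# Hypothetical indoor dictionary matching the example in the text.
DICTIONARY = ["phone", "mouse", "chair", "keyboard", "monitor", "desk"]

def describe_frame(detections):
    """Build the (category vector, positions, total count) description of
    a frame. `detections` is a list of (class_name, bounding_box) pairs
    as a detector head might emit them after non-maximum suppression."""
    counts = Counter(cls for cls, _ in detections)
    category_vec = [counts.get(cls, 0) for cls in DICTIONARY]  # C in formula (1)
    positions = [box for _, box in detections]  # pixel coords of bounding boxes
    total = sum(category_vec)                   # sum of the category vector
    return category_vec, positions, total

vec, pos, total = describe_frame(
    [("chair", (5, 5, 60, 120)), ("desk", (0, 80, 300, 200))])
```

Here the total quantity is literally the sum of the category feature vector, as the text states.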
In visual SLAM, when the front-end visual odometry calculates the robot pose, the calculation is performed between frames; it only considers the pose relation of two adjacent key frames without considering the constraints of historical frames, so as time passes the accumulated pose error of the robot grows larger and larger. By realizing the winding detection function with deep learning, the method can reduce the error of the robot's pose drift over time, making the robot's localization and mapping more precise and its autonomous navigation more accurate. Unlike the bag-of-words approach, the method uses supervised learning to rise from the abstract feature-point level to the understandable object level, allowing the robot to recognize whether scenes are identical just as a human would. At the same time, the object-feature-vector description form greatly reduces the amount of computation, performing better in terms of real-time operation.
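Putting steps 3 to 5 together, the comparison of the current frame against the stored key frames can be sketched end to end (Python; `detect_loop`, the frame representation and the default threshold of 0.9 are illustrative assumptions, since the patent leaves the threshold to practical experience):

```python
def detect_loop(current, history, threshold=0.9):
    """`current` and each entry of `history` are (category_vector, boxes)
    pairs, boxes given as (x_min, y_min, x_max, y_max). Returns the index
    of the first matching historical frame, or None if no winding."""
    for idx, (hist_vec, hist_boxes) in enumerate(history):
        # Step 3: total counts and per-category counts must match exactly.
        if sum(current[0]) != sum(hist_vec):
            continue
        if any(a != b for a, b in zip(current[0], hist_vec)):
            continue
        # Step 4: pixel-area ratio, folded into (0, 1].
        p = (sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in current[1])
             / sum((x2 - x1) * (y2 - y1) for x1, y1, x2, y2 in hist_boxes))
        if p > 1.0:
            p = 1.0 / p
        # Step 5: compare against the similarity threshold.
        if p >= threshold:
            return idx
    return None
```

A frame compared against itself returns its own index, while a history with differing category counts or areas yields no winding.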

Claims (2)

1. A robot winding detection method based on deep learning, characterized in that the method includes the following steps:
Step 1: while the robot is in motion, pictures are acquired frame by frame through a depth camera; each picture serves as the input of the convolutional neural network of a deep-learning target detection algorithm, and the category information, the position information and the quantity information of the objects contained in the picture are obtained from the output end of the convolutional neural network;
Step 2: a dictionary model composed of common objects is established, where the objects in the dictionary model include the objects obtained from the pictures at the output end of the convolutional neural network; the picture is described by category, position and total-quantity feature vectors and stored, where the total quantity of objects is the sum of the category feature vector, and the position feature vector is composed of the pixel coordinates of the bounding boxes of all objects in the picture;
Step 3: the object total-quantity feature vectors of the current frame and of a historical frame are compared for equality; if they are not equal, the current frame is compared with the next historical frame, again judging whether their object total-quantity feature vectors are equal. When they are equal, the category feature vectors of the current frame and of the historical frame are subtracted to judge whether the result is zero, i.e.

f = Σ_{i=1..n} |C1_i − C2_i| (1)

In formula (1): C1 is the category feature vector of the current frame and C2 is the category feature vector of the historical frame; C1_i denotes the i-th value of the category feature vector of the current frame, i.e. the quantity of objects of the i-th category, and C2_i denotes the i-th value of the category feature vector of the historical frame; n is the set number of object categories; f is the result of subtracting the two category feature vectors and judging whether it is zero. If f is not zero, the current frame is compared with the next historical frame and this step is re-executed;
Step 4: when f in formula (1) is zero, the degree of similarity of the two frames is calculated using formula (2):

S_ij = (x2_ij − x1_ij)(y2_ij − y1_ij),  P = (Σ_i Σ_j S1_ij) / (Σ_i Σ_j S2_ij) (2)

In formula (2): P is the ratio of the sums of the pixel areas of all objects in the two frames; S_ij is the pixel area of the j-th object of the i-th category, where (x2_ij, y2_ij) is the upper-right coordinate and (x1_ij, y1_ij) the lower-left coordinate of the bounding box of the j-th object of the i-th category; the 1 in S1_ij indicates the current frame, so S1_ij denotes the pixel area occupied by the j-th object of the i-th category in the current frame, and similarly S2_ij denotes the pixel area occupied by the j-th object of the i-th category in the historical frame;
If the value of P is greater than 1, its reciprocal is taken; if the value of P is less than or equal to 1, it remains unchanged. When the two frames are identical, P is a value close to 1; when the total object pixel areas of the two frames differ, P is a value less than 1;
Step 5: it is judged whether P ≥ the similarity threshold; if so, the two frames are similar and it is judged that the robot has produced a winding; otherwise the two frames are dissimilar and it is judged that the robot has not produced a winding, where the similarity threshold is a constant set from practical experience for judging similarity.
2. The robot winding detection method based on deep learning according to claim 1, characterized in that: the convolutional neural network of the deep-learning target detection algorithm is trained with the dictionary model; a series of acquired pictures is input to the convolutional neural network, and the category information, the position information and the quantity information of the objects in each picture are obtained.
CN201810804671.1A 2018-07-20 2018-07-20 Robot loop detection method based on deep learning Active CN109325979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810804671.1A CN109325979B (en) 2018-07-20 2018-07-20 Robot loop detection method based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810804671.1A CN109325979B (en) 2018-07-20 2018-07-20 Robot loop detection method based on deep learning

Publications (2)

Publication Number Publication Date
CN109325979A true CN109325979A (en) 2019-02-12
CN109325979B CN109325979B (en) 2021-11-02

Family

ID=65264079

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810804671.1A Active CN109325979B (en) 2018-07-20 2018-07-20 Robot loop detection method based on deep learning

Country Status (1)

Country Link
CN (1) CN109325979B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871803A (en) * 2019-02-18 2019-06-11 清华大学 Robot winding detection method and device
CN110069995A (en) * 2019-03-16 2019-07-30 浙江师范大学 A kind of service plate moving state identification method based on deep learning
CN110135377A (en) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 Object moving state detection method, device, server and computer-readable medium
CN110880010A (en) * 2019-07-05 2020-03-13 电子科技大学 Visual SLAM closed loop detection algorithm based on convolutional neural network
CN111401123A (zh) * 2019-12-29 2020-07-10 的卢技术有限公司 SLAM loop detection method and system based on deep learning
CN111860297A (en) * 2020-07-17 2020-10-30 厦门理工学院 SLAM loop detection method applied to indoor fixed space
CN113377987A (en) * 2021-05-11 2021-09-10 重庆邮电大学 Multi-module closed-loop detection method based on ResNeSt-APW
CN115200588A (en) * 2022-09-14 2022-10-18 煤炭科学研究总院有限公司 SLAM autonomous navigation method and device for mobile robot

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330357A (en) * 2017-05-18 2017-11-07 东北大学 Vision SLAM closed loop detection methods based on deep neural network
CN107403163A (en) * 2017-07-31 2017-11-28 武汉大学 A kind of laser SLAM closed loop detection methods based on deep learning
CN108108764A (en) * 2017-12-26 2018-06-01 东南大学 A kind of vision SLAM winding detection methods based on random forest
CN108133496A (en) * 2017-12-22 2018-06-08 北京工业大学 A kind of dense map creating method based on g2o Yu random fern

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107330357A (en) * 2017-05-18 2017-11-07 东北大学 Vision SLAM closed loop detection methods based on deep neural network
CN107403163A (en) * 2017-07-31 2017-11-28 武汉大学 A kind of laser SLAM closed loop detection methods based on deep learning
CN108133496A (en) * 2017-12-22 2018-06-08 北京工业大学 A kind of dense map creating method based on g2o Yu random fern
CN108108764A (en) * 2017-12-26 2018-06-01 东南大学 A kind of vision SLAM winding detection methods based on random forest

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
NATE MERRILL 等: "Lightweight Unsupervised Deep Loop Closure", 《ARXIV:1805.07703V2 [CS.RO]》 *
何元烈 et al.: "Fast loop-closure detection method based on a reduced convolutional neural network", 《Computer Engineering》 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871803A (en) * 2019-02-18 2019-06-11 清华大学 Robot winding detection method and device
CN110069995A (en) * 2019-03-16 2019-07-30 浙江师范大学 A kind of service plate moving state identification method based on deep learning
CN110135377A (en) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 Object moving state detection method, device, server and computer-readable medium
CN110880010A (en) * 2019-07-05 2020-03-13 电子科技大学 Visual SLAM closed loop detection algorithm based on convolutional neural network
CN111401123A (zh) * 2019-12-29 2020-07-10 的卢技术有限公司 SLAM loop detection method and system based on deep learning
CN111401123B (en) * 2019-12-29 2024-04-19 的卢技术有限公司 SLAM loop detection method and system based on deep learning
CN111860297A (en) * 2020-07-17 2020-10-30 厦门理工学院 SLAM loop detection method applied to indoor fixed space
CN113377987A (en) * 2021-05-11 2021-09-10 重庆邮电大学 Multi-module closed-loop detection method based on ResNeSt-APW
CN115200588A (en) * 2022-09-14 2022-10-18 煤炭科学研究总院有限公司 SLAM autonomous navigation method and device for mobile robot
CN115200588B (en) * 2022-09-14 2023-01-06 煤炭科学研究总院有限公司 SLAM autonomous navigation method and device for mobile robot

Also Published As

Publication number Publication date
CN109325979B (en) 2021-11-02

Similar Documents

Publication Publication Date Title
CN109325979A (en) Robot winding detection method based on deep learning
CN111563442B (en) Slam method and system for fusing point cloud and camera image data based on laser radar
CN112859859B (en) Dynamic grid map updating method based on three-dimensional obstacle object pixel object mapping
Stachniss et al. Simultaneous localization and mapping
Eresen et al. Autonomous quadrotor flight with vision-based obstacle avoidance in virtual environment
CN107967457A (en) A kind of place identification for adapting to visual signature change and relative positioning method and system
CN106780484A (en) Robot interframe position and orientation estimation method based on convolutional neural networks Feature Descriptor
KR20210064049A (en) System and method for object trajectory prediction in an autonomous scenario
KR102608473B1 (en) Method and apparatus for aligning 3d model
JP2020119523A (en) Method for detecting pseudo-3d bounding box and device using the same
Huang et al. Network algorithm real-time depth image 3D human recognition for augmented reality
Chen et al. Design and Implementation of AMR Robot Based on RGBD, VSLAM and SLAM
Kawanishi et al. Parallel line-based structure from motion by using omnidirectional camera in textureless scene
Ali et al. SIFT based monocular SLAM with multi-clouds features for indoor navigation
Yi et al. Map representation for robots
CN113570713B (en) Semantic map construction method and device for dynamic environment
CN114202701A (en) Unmanned aerial vehicle vision repositioning method based on object semantics
Chen et al. State-based SHOSLIF for indoor visual navigation
Zhang Deep learning applications in simultaneous localization and mapping
Juang Humanoid robot runs maze mode using depth-first traversal algorithm
Craciunescu et al. Towards the development of autonomous wheelchair
Cadena et al. Recursive inference for prediction of objects in urban environments
Atoui et al. Visual-based semantic simultaneous localization and mapping for Robotic applications: A review
Kayalvizhi et al. A Comprehensive Study on Supermarket Indoor Navigation for Visually Impaired using Computer Vision Techniques
Lee et al. Visual route navigation using an adaptive extension of rapidly-exploring random trees

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant