CN110852211A - Neural network-based method and device for filtering obstacles in SLAM - Google Patents

Neural network-based method and device for filtering obstacles in SLAM

Info

Publication number
CN110852211A
CN110852211A
Authority
CN
China
Prior art keywords
image
obstacle
rcnn
mask
error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911039051.4A
Other languages
Chinese (zh)
Inventor
姬晓晨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yingpu Technology Co Ltd
Original Assignee
Beijing Yingpu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yingpu Technology Co Ltd filed Critical Beijing Yingpu Technology Co Ltd
Priority to CN201911039051.4A
Publication of CN110852211A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/41Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a neural network-based method and device for filtering obstacles in SLAM, and relates to the field of SLAM. The method comprises the following steps: acquiring an image from a dynamic dataset; performing obstacle detection on the image by using Mask-RCNN and separating the obstacle from the static scene; and performing localization and mapping on the image after obstacle separation, using the feature-matching-based estimation process of ORB-SLAM2. The device comprises an acquisition module, a detection module and a localization module. The method and the device solve the problem of moving obstacles appearing in vSLAM: the obstacles are filtered out by combining Mask-RCNN with ORB-SLAM2, and introducing Mask-RCNN yields more stable and accurate localization performance.

Description

Neural network-based method and device for filtering obstacles in SLAM
Technical Field
The application relates to the field of SLAM, in particular to a method and a device for filtering obstacles in SLAM based on a neural network.
Background
vSLAM (Visual Simultaneous Localization And Mapping) is one of the recent research hotspots. Its main task is to answer three questions: "Where am I?", "Where am I going?" and "How do I get there?". Feature point detection in digital images is an important component of computer vision research, and the image matching problem in existing work is generally handled with traditional feature point detection methods.
In 1999, Lowe proposed SIFT (Scale-Invariant Feature Transform), which finds extreme points across spatial scales and extracts features invariant to position, scale and rotation; the method was consolidated in 2004. Its real-time performance is low, and a good matching effect is obtained only when the feature database is small. In 2006, Bay et al. proposed SURF (Speeded-Up Robust Features), which uses a blob detector based on the determinant of the Hessian and computes approximate Haar wavelet responses at different two-dimensional spatial scales, improving overall feature detection efficiency. In 2011, Rublee et al. proposed ORB (Oriented FAST and Rotated BRIEF) as an efficient alternative to SIFT and SURF: it adds an orientation computation to the combination of FAST (Features from Accelerated Segment Test) and BRIEF (Binary Robust Independent Elementary Features) and uses a greedy search to select highly discriminative point pairs for comparison, producing a binary descriptor. It achieves good results and is one of the most commonly used methods today.
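As a brief illustration of the binary descriptors just described, the following sketch uses OpenCV's ORB implementation; the image path and the budget of 1000 keypoints are placeholders for this illustration and are not taken from the patent.

```python
import cv2

# Load a single grayscale frame (placeholder path).
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

# Oriented FAST keypoints combined with rotated BRIEF descriptors.
orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(img, None)

# Each ORB descriptor is 32 bytes (256 bits); matching uses Hamming distance.
print(len(keypoints), descriptors.shape)   # e.g. 1000 (1000, 32)
```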
With the rise of artificial intelligence and the development of deep learning, the research based on the convolutional neural network becomes a research hotspot in the field of computer vision, and a better result is obtained mostly.
In a traditional visual odometry feature point detection method, matching generally relies on the pixel intensity and gradient information in the image, the environment is assumed to be static, and the influence of moving obstacles is ignored. In real environments, however, moving obstacles are unavoidable; if many feature points lie on a moving object, the system treats the moving object as a landmark, which leads to an incorrect odometry estimate.
Disclosure of Invention
It is an object of the present application to overcome, or at least partially solve or mitigate, the above problems.
According to an aspect of the present application, there is provided a method for filtering obstacles in a SLAM based on a neural network, including:
acquiring an image from a dynamic dataset;
performing obstacle detection on the image by using Mask-RCNN and separating the obstacle from the static scene;
performing localization and mapping on the image after obstacle separation, using the feature-matching-based estimation process of ORB-SLAM2.
Optionally, performing obstacle detection on the image by using Mask-RCNN and separating the obstacle from the static scene includes:
modifying the Mask-RCNN model trained on the MS COCO dataset so that only the person class, which is regarded as the obstacle, is detected; detecting the image; and, after the obstacle is detected, outputting the image with the obstacle removed.
Optionally, the method further comprises:
calculating the difference between the position x of a feature point and the position f(x) at which it should appear in the matched key frame image, taking the difference as an error, accumulating the errors with an error function, and evaluating the algorithm.
Optionally, acquiring an image from the dynamic dataset comprises:
selecting a video containing human walking from the TUM dynamic object dataset as the experimental subject and acquiring the corresponding images.
Optionally, the method further comprises:
when data association is performed, associating the currently extracted image features with previously extracted image features, and identifying a previously visited landmark or place.
According to another aspect of the present application, there is provided a neural network-based device for filtering obstacles in a SLAM, including:
an acquisition module configured to acquire an image from a dynamic dataset;
a detection module configured to perform obstacle detection on the image by using Mask-RCNN and separate the obstacle from the static scene;
a localization module configured to perform localization and mapping on the image after obstacle separation, using the feature-matching-based estimation process of ORB-SLAM2.
Optionally, the detection module is specifically configured to:
modify the Mask-RCNN model trained on the MS COCO dataset so that only the person class, which is regarded as the obstacle, is detected; detect the image; and, after the obstacle is detected, output the image with the obstacle removed.
Optionally, the apparatus further comprises:
an evaluation module configured to calculate the difference between the position x of a feature point and the position f(x) at which it should appear in the matched key frame image, take the difference as an error, accumulate the errors with an error function, and evaluate the algorithm.
Optionally, the acquisition module is specifically configured to:
select a video containing human walking from the TUM dynamic object dataset as the experimental subject and acquire the corresponding images.
Optionally, the apparatus further comprises:
an association module configured to associate, when data association is performed, the currently extracted image features with previously extracted image features, and to identify a previously visited landmark or place.
According to yet another aspect of the application, there is provided a computing device comprising a memory, a processor and a computer program stored in the memory and executable by the processor, wherein the processor implements the method as described above when executing the computer program.
According to yet another aspect of the application, a computer-readable storage medium, preferably a non-volatile readable storage medium, is provided, having stored therein a computer program which, when executed by a processor, implements a method as described above.
According to yet another aspect of the application, there is provided a computer program product comprising computer readable code which, when executed by a computer device, causes the computer device to perform the method described above.
According to the technical scheme of the application, an image is acquired from a dynamic dataset, Mask-RCNN is used to perform obstacle detection on the image and separate the obstacle from the static scene, and localization and mapping are performed on the image after obstacle separation using the feature-matching-based estimation process of ORB-SLAM2. This solves the problem of moving obstacles appearing in vSLAM: the obstacles are filtered out by combining Mask-RCNN with ORB-SLAM2, and introducing Mask-RCNN yields more stable and accurate localization performance.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
fig. 1 is a flowchart of a method for filtering obstacles in a neural network based SLAM according to an embodiment of the present application;
fig. 2 is a flowchart of a method for filtering obstacles in a neural network based SLAM according to another embodiment of the present application;
fig. 3 is a structural diagram of an obstacle filtering apparatus in a neural network-based SLAM according to another embodiment of the present application;
FIG. 4 is a block diagram of a computing device according to another embodiment of the present application;
fig. 5 is a diagram of a computer-readable storage medium structure according to another embodiment of the present application.
Detailed Description
Fig. 1 is a flowchart of an obstacle filtering method in a neural network based SLAM according to an embodiment of the present application. Referring to fig. 1, the method includes:
101: acquiring an image from a dynamic dataset;
102: performing obstacle detection on the image by using Mask-RCNN, and separating an obstacle from a static scene;
103: performing localization and mapping on the image after obstacle separation, using the feature-matching-based estimation process of ORB-SLAM2.
In this embodiment, optionally, performing obstacle detection on the image by using Mask-RCNN and separating the obstacle from the static scene includes:
modifying the Mask-RCNN model trained on the MS COCO dataset so that only the person class, which is regarded as the obstacle, is detected; detecting the image; and, after the obstacle is detected, outputting the image with the obstacle removed.
In this embodiment, optionally, the method further includes:
calculating the difference between the position x of a feature point and the position f(x) at which it should appear in the matched key frame image, taking the difference as an error, accumulating the errors with an error function, and evaluating the algorithm.
In this embodiment, optionally, acquiring an image from the dynamic dataset includes:
selecting a video containing human walking from the TUM dynamic object dataset as the experimental subject and acquiring the corresponding images.
In this embodiment, optionally, the method further includes:
when data association is performed, associating the currently extracted image features with previously extracted image features, and identifying a previously visited landmark or place.
According to the method provided by this embodiment, an image is acquired from a dynamic dataset, Mask-RCNN is used to perform obstacle detection on the image and separate the obstacle from the static scene, and localization and mapping are performed on the image after obstacle separation using the feature-matching-based estimation process of ORB-SLAM2. This solves the problem of moving obstacles appearing in vSLAM: the obstacles are filtered out by combining Mask-RCNN with ORB-SLAM2, and introducing Mask-RCNN yields more stable and accurate localization performance.
Fig. 2 is a flowchart of an obstacle filtering method in a neural network based SLAM according to another embodiment of the present application. Referring to fig. 2, the method includes:
201: selecting a video containing human walking from the TUM dynamic object dataset as the experimental subject and acquiring the corresponding images;
In this embodiment, the dataset used is optionally the TUM dynamic object dataset, which contains RGB image sequences, image depth information and ground-truth trajectories.
In this embodiment, a video containing human walking is regarded as a dynamic dataset with motion and can therefore serve as the experimental subject.
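A minimal sketch of this acquisition step is given below, assuming the standard TUM RGB-D sequence layout in which rgb.txt and depth.txt list "timestamp filename" pairs and lines beginning with '#' are comments; the folder name is one of the public TUM "walking" sequences and is only an example, since the patent does not name a specific sequence.

```python
from pathlib import Path
import cv2

def read_file_list(list_path):
    """Parse a TUM-style index file into (timestamp, filename) pairs."""
    pairs = []
    for line in Path(list_path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        timestamp, name = line.split()[:2]
        pairs.append((float(timestamp), name))
    return pairs

seq = Path("rgbd_dataset_freiburg3_walking_xyz")   # example TUM "walking" sequence
rgb_list = read_file_list(seq / "rgb.txt")
depth_list = read_file_list(seq / "depth.txt")

# Load the first RGB frame of the sequence.
first_timestamp, first_name = rgb_list[0]
image = cv2.imread(str(seq / first_name))
```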
202: modifying the Mask-RCNN model trained on the MS COCO dataset so that only the person class, regarded as the obstacle, is detected; detecting the image; and outputting an image with the obstacle removed after the obstacle is detected;
In this embodiment, Mask-RCNN has good localization and classification capability and can filter the obstacle out of the static scene.
203: performing localization and mapping on the image after obstacle separation, using the feature-matching-based estimation process of ORB-SLAM2;
Here, ORB-SLAM2 is a keyframe-based SLAM framework that uses an estimation process based on feature matching.
204: calculating the difference between the position x of a feature point and the position f(x) at which it should appear in the matched key frame image, taking the difference as an error, accumulating the errors with an error function, and evaluating the algorithm;
The quality of the algorithm can be assessed by means of the error function. Mask-RCNN reduces the selection of erroneous feature points and hence the errors that are generated, so the algorithm can improve the performance of the system.
205: when data association is performed, associating the currently extracted image features with previously extracted image features and identifying a previously visited landmark or place.
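ORB-SLAM2 performs this association with a bag-of-words vocabulary (DBoW2); purely as a simplified sketch, the snippet below associates the current frame with stored keyframes by counting cross-checked descriptor matches, where keyframe_db and the threshold of 50 matches are hypothetical.

```python
import cv2

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recognize_place(current_des, keyframe_db, min_matches=50):
    """Return the index of the best-matching stored keyframe, or None if no place is recognized."""
    best_idx, best_count = None, 0
    for idx, stored_des in enumerate(keyframe_db):
        count = len(matcher.match(current_des, stored_des))
        if count > best_count:
            best_idx, best_count = idx, count
    return best_idx if best_count >= min_matches else None
```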
According to the method provided by this embodiment, an image is acquired from a dynamic dataset, Mask-RCNN is used to perform obstacle detection on the image and separate the obstacle from the static scene, and localization and mapping are performed on the image after obstacle separation using the feature-matching-based estimation process of ORB-SLAM2. This solves the problem of moving obstacles appearing in vSLAM: the obstacles are filtered out by combining Mask-RCNN with ORB-SLAM2, and introducing Mask-RCNN yields more stable and accurate localization performance.
Fig. 3 is a structural diagram of an obstacle filtering apparatus in a neural network-based SLAM according to another embodiment of the present application. Referring to fig. 3, the apparatus includes:
an acquisition module 301 configured to acquire an image from a dynamic dataset;
a detection module 302 configured to perform obstacle detection on the image using Mask-RCNN and separate the obstacle from the static scene;
a localization module 303 configured to perform localization and mapping on the image after obstacle separation, using the feature-matching-based estimation process of ORB-SLAM2.
In this embodiment, optionally, the detection module is specifically configured to:
modify the Mask-RCNN model trained on the MS COCO dataset so that only the person class, which is regarded as the obstacle, is detected; detect the image; and, after the obstacle is detected, output the image with the obstacle removed.
In this embodiment, optionally, the apparatus further includes:
an evaluation module configured to calculate the difference between the position x of a feature point and the position f(x) at which it should appear in the matched key frame image, take the difference as an error, accumulate the errors with an error function, and evaluate the algorithm.
In this embodiment, optionally, the acquisition module is specifically configured to:
select a video containing human walking from the TUM dynamic object dataset as the experimental subject and acquire the corresponding images.
In this embodiment, optionally, the apparatus further includes:
an association module configured to associate, when data association is performed, the currently extracted image features with previously extracted image features, and to identify a previously visited landmark or place.
The apparatus provided in this embodiment may perform the method provided in any of the above method embodiments, and details of the process are described in the method embodiments and are not described herein again.
According to the device provided by this embodiment, an image is acquired from a dynamic dataset, Mask-RCNN is used to perform obstacle detection on the image and separate the obstacle from the static scene, and localization and mapping are performed on the image after obstacle separation using the feature-matching-based estimation process of ORB-SLAM2. This solves the problem of moving obstacles appearing in vSLAM: the obstacles are filtered out by combining Mask-RCNN with ORB-SLAM2, and introducing Mask-RCNN yields more stable and accurate localization performance.
An embodiment of the application also provides a computing device. Referring to fig. 4, the computing device comprises a memory 1120, a processor 1110 and a computer program stored in the memory 1120 and executable by the processor 1110. The computer program is stored in a space 1130 for program code in the memory 1120 and, when executed by the processor 1110, implements the method steps 1131 for performing any of the methods described in the application.
The embodiment of the application also provides a computer-readable storage medium. Referring to fig. 5, the computer-readable storage medium comprises a storage unit for program code, which is provided with a program 1131' for performing the method steps described in the application; the program is executed by a processor.
The embodiment of the application also provides a computer program product containing instructions which, when run on a computer, cause the computer to carry out the steps of the methods described in the application.
In the above embodiments, the implementation may be realized wholly or partially in software, hardware, firmware or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed by a computer, the computer instructions cause the computer to perform, in whole or in part, the procedures or functions described in the embodiments of the application. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example from one website, computer, server or data center to another website, computer, server or data center via a wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD) or a semiconductor medium (e.g., solid state disk (SSD)), among others.
Those of skill in the art will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the above embodiments may be implemented by a program, and the program may be stored in a computer-readable storage medium, where the storage medium is a non-transitory medium such as a random access memory, a read-only memory, a flash memory, a hard disk, a solid state disk, a magnetic tape, a floppy disk, an optical disk, or any combination thereof.
The above description is only for the preferred embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for filtering obstacles in SLAM based on a neural network, comprising the following steps:
acquiring an image from a dynamic dataset;
performing obstacle detection on the image by using Mask-RCNN and separating the obstacle from the static scene;
performing localization and mapping on the image after obstacle separation, using the feature-matching-based estimation process of ORB-SLAM2.
2. The method of claim 1, wherein performing obstacle detection on the image using Mask-RCNN to separate the obstacle from the static scene comprises:
modifying the Mask-RCNN model trained on the MS COCO dataset so that only the person class, which is regarded as the obstacle, is detected; detecting the image; and, after the obstacle is detected, outputting the image with the obstacle removed.
3. The method of claim 1, further comprising:
calculating the difference between the position x of a feature point and the position f(x) at which it should appear in the matched key frame image, taking the difference as an error, accumulating the errors with an error function, and evaluating the algorithm.
4. The method of claim 1, wherein acquiring an image from a dynamic dataset comprises:
selecting a video containing human walking from the TUM dynamic object dataset as the experimental subject and acquiring the corresponding images.
5. The method according to any one of claims 1-4, further comprising:
when data association is performed, associating the currently extracted image features with previously extracted image features, and identifying a previously visited landmark or place.
6. An obstacle filtering apparatus in SLAM based on a neural network, comprising:
an acquisition module configured to acquire an image from a dynamic dataset;
a detection module configured to perform obstacle detection on the image by using Mask-RCNN and separate the obstacle from the static scene;
a localization module configured to perform localization and mapping on the image after obstacle separation, using the feature-matching-based estimation process of ORB-SLAM2.
7. The apparatus of claim 6, wherein the detection module is specifically configured to:
modify the Mask-RCNN model trained on the MS COCO dataset so that only the person class, which is regarded as the obstacle, is detected; detect the image; and, after the obstacle is detected, output the image with the obstacle removed.
8. The apparatus of claim 6, further comprising:
an evaluation module configured to calculate the difference between the position x of a feature point and the position f(x) at which it should appear in the matched key frame image, take the difference as an error, accumulate the errors with an error function, and evaluate the algorithm.
9. The apparatus of claim 6, wherein the acquisition module is specifically configured to:
select a video containing human walking from the TUM dynamic object dataset as the experimental subject and acquire the corresponding images.
10. The apparatus according to any one of claims 6-9, further comprising:
an association module configured to associate, when data association is performed, the currently extracted image features with previously extracted image features, and to identify a previously visited landmark or place.
CN201911039051.4A 2019-10-29 2019-10-29 Neural network-based method and device for filtering obstacles in SLAM Pending CN110852211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911039051.4A CN110852211A (en) 2019-10-29 2019-10-29 Neural network-based method and device for filtering obstacles in SLAM

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911039051.4A CN110852211A (en) 2019-10-29 2019-10-29 Neural network-based method and device for filtering obstacles in SLAM

Publications (1)

Publication Number Publication Date
CN110852211A true CN110852211A (en) 2020-02-28

Family

ID=69598948

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911039051.4A Pending CN110852211A (en) 2019-10-29 2019-10-29 Neural network-based method and device for filtering obstacles in SLAM

Country Status (1)

Country Link
CN (1) CN110852211A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104699099A * 2009-08-31 2015-06-10 Neato Robotics, Inc. Method and apparatus for simultaneous localization and mapping of mobile robot environment
CN109145903A (en) * 2018-08-22 2019-01-04 阿里巴巴集团控股有限公司 A kind of image processing method and device
CN110009683A (en) * 2019-03-29 2019-07-12 北京交通大学 Object detecting method on real-time planar based on MaskRCNN

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhongqun Zhang, Jingtao Zhang, Qirong Tang: "Mask R-CNN Based Semantic RGB-D SLAM for Dynamic Scenes", Proceedings of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics. *

Similar Documents

Publication Publication Date Title
CN109544615B (en) Image-based repositioning method, device, terminal and storage medium
Jung et al. Boundary enhancement semantic segmentation for building extraction from remote sensed image
EP2915138B1 (en) Systems and methods of merging multiple maps for computer vision based tracking
WO2020259481A1 (en) Positioning method and apparatus, electronic device, and readable storage medium
Hu et al. A novel object tracking algorithm by fusing color and depth information based on single valued neutrosophic cross-entropy
CN107424171B (en) Block-based anti-occlusion target tracking method
KR101399804B1 (en) Method and apparatus for tracking and recognition with rotation invariant feature descriptors
KR101460313B1 (en) Apparatus and method for robot localization using visual feature and geometric constraints
WO2018207426A1 (en) Information processing device, information processing method, and program
US20220398746A1 (en) Learning method and device for visual odometry based on orb feature of image sequence
Bashar et al. Multiple object tracking in recent times: A literature review
CN115018886B (en) Motion trajectory identification method, device, equipment and medium
CN110751163B (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN113869163B (en) Target tracking method and device, electronic equipment and storage medium
CN110852211A (en) Neural network-based method and device for filtering obstacles in SLAM
KR102426594B1 (en) System and method for estimating the location of object in crowdsourcing environment
CN114359915A (en) Image processing method, device and readable storage medium
CN114359332A (en) Target tracking method, device, equipment and medium based on depth image
JP2015184743A (en) Image processor and object recognition method
CN112614166A (en) Point cloud matching method and device based on CNN-KNN
Micheal et al. Comparative analysis of SIFT and SURF on KLT tracker for UAV applications
CN113570535A (en) Visual positioning method and related device and equipment
CN113763468A (en) Positioning method, device, system and storage medium
JP5626011B2 (en) Program and image processing apparatus
CN111460977B (en) Cross-view personnel re-identification method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200228)