CN110599522A - Method for detecting and removing dynamic target in video sequence - Google Patents

Method for detecting and removing dynamic target in video sequence

Info

Publication number
CN110599522A
CN110599522A (application CN201910879106.6A)
Authority
CN
China
Prior art keywords: dynamic, scene, mask, image, semi
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910879106.6A
Other languages
Chinese (zh)
Other versions
CN110599522B (en)
Inventor
马忠丽
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Information Technology
Original Assignee
Chengdu University of Information Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Information Technology filed Critical Chengdu University of Information Technology
Priority to CN201910879106.6A priority Critical patent/CN110599522B/en
Publication of CN110599522A publication Critical patent/CN110599522A/en
Application granted granted Critical
Publication of CN110599522B publication Critical patent/CN110599522B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G — PHYSICS; G06 — COMPUTING, CALCULATING OR COUNTING; G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/10 — Image analysis: Segmentation; Edge detection
    • G06T7/215 — Analysis of motion: Motion-based segmentation
    • G06T7/248 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
    • G06T2207/10016 — Image acquisition modality: Video; Image sequence
    • G06T2207/10021 — Image acquisition modality: Stereoscopic video; Stereoscopic image sequence
    • G06T2207/20081 — Special algorithmic details: Training; Learning
    • G06T2207/20084 — Special algorithmic details: Artificial neural networks [ANN]

Abstract

The invention discloses a method for detecting and removing dynamic targets in a video sequence. Left and right images are first collected with a binocular camera mounted on a robot, and the images are preprocessed by rectification, denoising and graying. Instance segmentation is then performed on the left image with Mask R-CNN, and ORB features are extracted from the left and right images; from the matched features, the scene flow between two frames is computed, after which dynamic target detection and removal are carried out. The instance segmentation network segments the instances in the scene at pixel precision, and the subsequent removal uses the segmentation result, so the dynamic targets are removed with high precision. The dynamics of each instance in the scene is judged with two methods, instance segmentation and scene flow; this double judgment raises the probability that a dynamic target is detected, so the system miss rate is low.

Description

Method for detecting and removing dynamic target in video sequence
Technical Field
The invention relates to the technical field of target detection, and in particular to a method for detecting and removing dynamic targets in a video sequence.
Background
With the increasing performance of embedded devices, the computational resources required by visual SLAM can now be satisfied on small platforms, and visual SLAM is becoming common in everyday life. For example, major technology and internet companies have invested heavily in applications of visual SLAM: driverless vehicles, service robots in shopping malls, augmented and mixed reality, and everyday products such as sweeping robots and drones.
Although visual SLAM (Simultaneous Localization And Mapping) technology has matured considerably, no visual SLAM solution currently runs stably in all environments, owing to occlusion in the scene, image blur caused by camera motion, and other factors. Most existing visual SLAM systems make the strong assumption that the robot's operating scene is completely static; real-life scenes, however, are generally highly dynamic, so accurate localization and mapping in complex dynamic environments remain unsolved.
In existing visual SLAM systems, one approach detects and removes dynamic objects by clustering the optical flow or scene flow: once a sparse scene flow has been obtained from the image, objects can be segmented with it. Because the scene flow represents the spatial motion of image points, and the surface of a rigid object moves consistently, the scene flow can be partitioned into geometric clusters. Each cluster then represents one rigid body, whose motion is represented by the cluster center. This exploits the spatial motion information of the scene flow to recover spatial structure, but it has two problems:
the first problem is that if the scene flow calculated in the space is too little, that is, the optical flow information of too many pixels cannot be calculated in the absence of feature texture, the accuracy of segmenting the instances by using the scene flow clustering will be poor. If the extracted scene flow is too much, the calculation time for calculating the scene flow and the scene flow cluster can hardly reach the real time; the second problem is that there is a high probability that objects in the scene are non-rigid, and if non-rigid, the motion of the object surface may not be uniform.
The other approach clusters and segments the rigid bodies in the image from the RGB image and depth information, then computes the motion of each segmented object from the scene flow on it. Because the clustering uses spatial and texture information, the number of clusters matters; without prior knowledge it is hard to determine how many cluster centers there should be. Ideally the number of cluster centers equals the number of instances plus the number of background regions in the scene, but in practice too many cluster centers are typically used, which over-segments the image.
Therefore, a scheme with high dynamic-target removal precision and a low system miss rate is urgently needed to solve these problems.
Disclosure of Invention
The invention aims to provide a method for detecting and removing dynamic targets in a video sequence, so as to solve the problems in the prior art; the dynamic objects in a scene are detected and removed with the help of an instance segmentation method.
To achieve this purpose, the invention provides the following scheme: a method for detecting and removing dynamic targets in a video sequence, comprising the following steps:
Step one, image acquisition: acquire a left image and a right image with a binocular camera;
Step two, image preprocessing: preprocess the left and right images, including rectification, denoising and graying;
Step three, instance segmentation with Mask R-CNN: perform instance segmentation on the left image using Mask R-CNN; the mask obtained from the segmentation serves as the first input to dynamic target detection;
Step four, scene flow computation: first extract ORB features from the left and right images; then match the features between the left and right images and between consecutive frames of each; from the matched correspondences, compute the scene flow between the two frames, which serves as the second input to dynamic target detection;
Step five, dynamic target detection: the dynamic object detection program judges the motion of each instance using the instance class information, segmentation masks and scene flow information output by the instance segmentation;
Step six, dynamic target removal: according to the instance class information, compare the scene flow information of each semi-dynamic object with that of the background to judge the object's dynamics; remove the dynamic targets using the dynamic objects' mask image and output an image with the dynamic targets masked out.
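The step-four geometry can be sketched as follows, assuming rectified stereo with known intrinsics (fx, fy, cx, cy) and baseline. In the full pipeline the (u, v, disparity) triples would come from ORB feature matching (e.g. cv2.ORB_create with a brute-force Hamming matcher); here they are passed in directly so the back-projection is self-contained, and the function names are illustrative:

```python
import numpy as np

def backproject(u, v, disparity, fx, fy, cx, cy, baseline):
    """Back-project a pixel with known stereo disparity to 3D camera coordinates."""
    z = fx * baseline / disparity          # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def scene_flow_vector(match_prev, match_cur, fx, fy, cx, cy, baseline):
    """Scene flow of one feature: its 3D displacement between two frames.
    match_* is the (u, v, disparity) of the same ORB feature in each frame."""
    p0 = backproject(*match_prev, fx, fy, cx, cy, baseline)
    p1 = backproject(*match_cur, fx, fy, cx, cy, baseline)
    return p1 - p0
```

Collecting such vectors for every feature matched across the left/right pair and across consecutive frames yields the sparse scene flow that step five consumes.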
Preferably, the instance classes output by the segmentation in step five fall into three categories: static, dynamic and semi-dynamic. Dynamic objects are instances assumed to have a high probability of moving in a scene, such as people, birds and dogs; static objects are assumed to be approximately stationary in the scene, such as traffic lights and beds; semi-dynamic objects remain stationary most of the time but may move when in contact with a dynamic object, such as books, bags and umbrellas.
Preferably, the motion of each instance is judged as follows:
For a dynamic object, its mask is merged directly into the masking mask of the image frame, and that region is masked out in the subsequent steps of the algorithm; a static object is treated as part of the static background without further processing, and localization and mapping are carried out on the static background; for a semi-dynamic object, the intersection ratio of its mask with the dynamic mask is evaluated, and the semi-dynamic object is treated as dynamic when the intersection ratio is greater than 0.3 and its mask area is smaller than that of the overlapping target.
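This overlap rule can be sketched as below, assuming "intersection ratio" means intersection-over-union of boolean masks (the text does not spell out the exact ratio, so that reading and the function names are assumptions):

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    """Intersection-over-union of two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def treat_as_dynamic(semi_mask, dynamic_mask, iou_thresh=0.3):
    """Rule from the text: a semi-dynamic object counts as dynamic when its
    mask overlaps the dynamic mask with IoU > 0.3 and its area is smaller
    than the overlapping target's."""
    return bool(mask_iou(semi_mask, dynamic_mask) > iou_thresh
                and semi_mask.sum() < dynamic_mask.sum())
```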
Preferably, to catch the dynamic objects missed by the first check, a judgment based on the scene flow is applied: the scene flow information of a semi-dynamic object is compared with that of the background to judge the object's dynamics.
assume scene flow vector of background is SFb=(ub,vb,wb) The scene flow vector of the semi-dynamic object is SFhs=(uhs,vhs,whs) The invention sets two thresholds for judging the standard of two vector deviations:
one is the cosine distance of two vectors:
the other is the ratio of the two vector modulo lengths:
when cosine distance Sim (SF)b:SFhs) The ratio of the modulo lengths ratio _ m does not satisfy one of the two criteria:
or
ratio_m<1.2
I.e. the semi-dynamic object is considered dynamic with respect to the background.
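A minimal sketch of this comparison follows. The cosine threshold value is an assumed placeholder (the text only preserves the modulus-ratio bound of 1.2), the max/min form of ratio_m is a reconstruction, and both vectors are assumed non-zero:

```python
import numpy as np

def is_dynamic_wrt_background(sf_b, sf_hs, cos_thresh=0.9, ratio_thresh=1.2):
    """Flag a semi-dynamic object as dynamic when its scene flow disagrees
    with the background's in direction (cosine) or magnitude (modulus ratio)."""
    sf_b = np.asarray(sf_b, dtype=float)
    sf_hs = np.asarray(sf_hs, dtype=float)
    nb, nhs = np.linalg.norm(sf_b), np.linalg.norm(sf_hs)
    cos_sim = float(sf_b @ sf_hs) / (nb * nhs)
    ratio_m = max(nb, nhs) / min(nb, nhs)
    # The object agrees with the background only if BOTH criteria hold;
    # failing either one marks it dynamic.
    return bool(cos_sim < cos_thresh or ratio_m >= ratio_thresh)
```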
The invention discloses the following technical effects:
(1) High dynamic-target removal precision. The instance segmentation network used in the proposed detection and removal method segments the instances in a scene at pixel precision, and the subsequent removal uses the segmentation result, so the dynamic targets are removed with high precision.
(2) Low system miss rate. The proposed method judges the dynamics of each instance in the scene with two methods, instance segmentation and scene flow; the double judgment raises the probability that a dynamic target is detected, so the system miss rate is low.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic view of indoor and outdoor optical flows, wherein FIG. 2a is a schematic view of outdoor optical flows and FIG. 2b is a schematic view of indoor optical flows;
fig. 3 is a schematic diagram of an indoor/outdoor scene flow, where fig. 3a is a schematic diagram of an indoor scene flow and fig. 3b is a schematic diagram of an outdoor scene flow.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1-3, the invention provides a method for detecting and removing dynamic targets in a video sequence, comprising the following steps:
step one, image acquisition: a left image and a right image acquired by using a binocular camera;
step two, image preprocessing: preprocessing the left image and the right image such as correction, denoising, graying and the like;
step three, using Mask R-CNN example to divide: performing example segmentation on the left image by using Mask R-CNN, wherein a Mask obtained after the example segmentation is used as a first input of dynamic target detection;
firstly, extracting ORB characteristics from a left image and a right image, then matching the characteristics of the left image and the right image, the information of the front frame and the rear frame of the left image and the information of the front frame and the rear frame of the right image, calculating a scene flow between the two frames of images according to the matched relation, and taking the scene flow as the second input of the dynamic target detection;
the motion method of each example is judged as follows:
for a dynamic object, directly combining the mask of the dynamic object into a shielding mask of an image frame, and shielding the effect of the part in the subsequent flow of the algorithm; regarding the static object as a static background without processing, and positioning and drawing the static object through the static background; and judging the intersection ratio of the mask and the dynamic mask of the semi-dynamic object, and regarding the semi-dynamic object as the dynamic object when the intersection ratio is larger than 0.3 and the mask area is smaller than the overlapped target.
To catch the dynamic objects missed by the first check, a judgment based on the scene flow is applied: the scene flow information of the semi-dynamic object is compared with that of the background to judge the object's dynamics.
Assume the scene flow vector of the background is SF_b = (u_b, v_b, w_b) and that of the semi-dynamic object is SF_hs = (u_hs, v_hs, w_hs). Two thresholds are set as criteria for the deviation between the two vectors:
one is the cosine distance of two vectors:
the other is the ratio of the two vector modulo lengths:
when cosine distance Sim (SF)b:SFhs) The ratio of the modulo lengths ratio _ m does not satisfy one of the two criteria:
or
ratio_m<1.2
I.e. the semi-dynamic object is considered dynamic with respect to the background.
Step five, dynamic target detection: the dynamic object detection program judges the motion of each instance using the instance class information, segmentation masks and scene flow information output by the instance segmentation;
The instance classes fall into three categories: static, dynamic and semi-dynamic. Dynamic objects are instances assumed to have a high probability of moving in a scene, such as people, birds and dogs; static objects are assumed to be approximately stationary in the scene, such as traffic lights and beds; semi-dynamic objects remain stationary most of the time but may move when in contact with a dynamic object, such as books, bags and umbrellas.
Step six, dynamic target removal: according to the instance class information, compare the scene flow information of each semi-dynamic object with that of the background to judge the object's dynamics; remove the dynamic targets using the dynamic objects' mask image and output an image with the dynamic targets masked out.
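The three-way classification of instance classes used in step five can be sketched as a simple lookup over the class labels that Mask R-CNN predicts (e.g. COCO labels). Only the classes named in the text are grounded; the remaining table entries and the fallback are illustrative assumptions:

```python
# Class label -> motion category, following the examples in the text:
# people/birds/dogs are dynamic, traffic lights/beds static, and
# books/bags/umbrellas semi-dynamic. Other entries are assumptions.
DYNAMIC = {"person", "bird", "dog", "cat", "horse"}
STATIC = {"traffic light", "bed", "couch", "refrigerator"}
SEMI_DYNAMIC = {"book", "handbag", "umbrella", "chair", "laptop"}

def motion_category(label):
    """Map an instance-segmentation class label to its motion category."""
    if label in DYNAMIC:
        return "dynamic"
    if label in STATIC:
        return "static"
    if label in SEMI_DYNAMIC:
        return "semi-dynamic"
    return "static"  # unrecognized classes fall back to the static background
```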
Optical flow and scene flow: optical flow estimates the displacement between corresponding points in two images, as shown in Fig. 2; it is the projection of the motion of scene objects onto the camera plane. When an object's motion produces a motion vector on the image plane, the motion can be detected; but when the object moves perpendicular to the camera plane, optical flow cannot judge its motion state. Scene flow extends optical flow to stereoscopic vision: where optical flow describes the instantaneous motion of points on the image, scene flow describes the instantaneous motion of points in space. Fig. 3 shows a visualization of the scene flow, in which thinner arrow lines indicate smaller parallax and thicker lines larger parallax. This is another important principle underlying the solution of the invention.
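The limitation just described — motion along the optical axis is invisible to optical flow but captured by scene flow — can be checked numerically with a pinhole projection (the focal length and point coordinates below are arbitrary):

```python
import numpy as np

def project(point, f=500.0):
    """Pinhole projection of a 3D camera-frame point onto the image plane."""
    x, y, z = point
    return np.array([f * x / z, f * y / z])

# A point on the optical axis moving straight away from the camera:
p0 = np.array([0.0, 0.0, 2.0])
p1 = np.array([0.0, 0.0, 3.0])

optical_flow = project(p1) - project(p0)  # 2D image-plane displacement
scene_flow = p1 - p0                      # 3D displacement

# The image projection never moves, so the optical flow is zero even though
# the point travels one unit in depth; the scene flow records that motion.
```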
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience of description of the present invention, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.

Claims (4)

1. A method for detecting and removing dynamic targets in a video sequence, characterized by comprising the following steps:
step one, image acquisition: acquiring a left image and a right image with a binocular camera;
step two, image preprocessing: preprocessing the left and right images, including rectification, denoising and graying;
step three, instance segmentation with Mask R-CNN: performing instance segmentation on the left image using Mask R-CNN, the mask obtained from the segmentation serving as the first input to dynamic target detection;
step four, scene flow computation: first extracting ORB features from the left and right images, then matching the features between the left and right images and between consecutive frames of each, and computing the scene flow between the two frames from the matched correspondences, the scene flow serving as the second input to dynamic target detection;
step five, dynamic target detection: judging the motion of each instance using the instance class information, segmentation masks and scene flow information output by the instance segmentation;
step six, dynamic target removal: according to the instance class information, comparing the scene flow information of each semi-dynamic object with that of the background to judge the object's dynamics.
2. The method of claim 1, characterized in that: the instance classes output by the segmentation in step five fall into three categories: static, dynamic and semi-dynamic; dynamic objects are instances assumed to have a high probability of moving in a scene, such as people, birds and dogs; static objects are assumed to be approximately stationary in the scene, such as traffic lights and beds; semi-dynamic objects remain stationary most of the time but may move when in contact with a dynamic object, such as books, bags and umbrellas.
3. The method of claim 2, characterized in that: the motion of each instance is judged as follows:
for a dynamic object, its mask is merged directly into the masking mask of the image frame, and that region is masked out in the subsequent steps of the algorithm; a static object is treated as part of the static background without further processing, and localization and mapping are carried out on the static background; for a semi-dynamic object, the intersection ratio of its mask with the dynamic mask is evaluated, and the semi-dynamic object is treated as dynamic when the intersection ratio is greater than 0.3 and its mask area is smaller than that of the overlapping target.
4. The method according to claim 3, characterized in that: to catch the dynamic objects missed by the first check, a judgment based on the scene flow is applied, i.e. the scene flow information of the semi-dynamic object is compared with that of the background to judge the object's dynamics;
assume the scene flow vector of the background is SF_b = (u_b, v_b, w_b) and that of the semi-dynamic object is SF_hs = (u_hs, v_hs, w_hs); two thresholds are set as criteria for the deviation between the two vectors:
one is the cosine similarity of the two vectors:
Sim(SF_b, SF_hs) = (SF_b · SF_hs) / (|SF_b| · |SF_hs|)
the other is the ratio of the two vectors' modulus lengths:
ratio_m = max(|SF_b|, |SF_hs|) / min(|SF_b|, |SF_hs|)
when the cosine similarity Sim(SF_b, SF_hs) and the modulus-length ratio ratio_m fail to satisfy one of the two criteria — Sim(SF_b, SF_hs) above its preset threshold, or
ratio_m < 1.2
— the semi-dynamic object is considered dynamic with respect to the background;
finally, the dynamic target is removed from the mask image of the dynamic object, and an image with the dynamic target masked out is output.
CN201910879106.6A 2019-09-18 2019-09-18 Method for detecting and removing dynamic target in video sequence Active CN110599522B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910879106.6A CN110599522B (en) 2019-09-18 2019-09-18 Method for detecting and removing dynamic target in video sequence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910879106.6A CN110599522B (en) 2019-09-18 2019-09-18 Method for detecting and removing dynamic target in video sequence

Publications (2)

Publication Number Publication Date
CN110599522A true CN110599522A (en) 2019-12-20
CN110599522B CN110599522B (en) 2023-04-11

Family

ID=68860692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910879106.6A Active CN110599522B (en) 2019-09-18 2019-09-18 Method for detecting and removing dynamic target in video sequence

Country Status (1)

Country Link
CN (1) CN110599522B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797728A (en) * 2020-06-19 2020-10-20 浙江大华技术股份有限公司 Moving object detection method and device, computing device and storage medium
CN111814602A (en) * 2020-06-23 2020-10-23 成都信息工程大学 Intelligent vehicle environment dynamic target detection method based on vision
CN112883836A (en) * 2021-01-29 2021-06-01 中国矿业大学 Video detection method for deformation of underground coal mine roadway
CN112884835A (en) * 2020-09-17 2021-06-01 中国人民解放军陆军工程大学 Visual SLAM method for target detection based on deep learning
CN114170535A (en) * 2022-02-11 2022-03-11 北京卓翼智能科技有限公司 Target detection positioning method, device, controller, storage medium and unmanned aerial vehicle

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8369622B1 (en) * 2009-10-29 2013-02-05 Hsu Shin-Yi Multi-figure system for object feature extraction tracking and recognition
CN103955948A (en) * 2014-04-03 2014-07-30 西北工业大学 Method for detecting space moving object in dynamic environment
CN105868745A (en) * 2016-06-20 2016-08-17 重庆大学 Weather identifying method based on dynamic scene perception
CN108596974A (en) * 2018-04-04 2018-09-28 清华大学 Dynamic scene robot localization builds drawing system and method
CN109816686A (en) * 2019-01-15 2019-05-28 山东大学 Robot semanteme SLAM method, processor and robot based on object example match


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
AISHWARYA A. PANCHPOR et al.: "A Survey of Methods for Mobile Robot Localization and Mapping in Dynamic Indoor Environments", 2018 Conference on Signal Processing and Communication Engineering Systems *
RUNZHI WANG et al.: "A Point-Line Feature based Visual SLAM Method in Dynamic Indoor Scene", 2018 Ubiquitous Positioning, Indoor Navigation and Location-Based Services *
ZENG Xiangfeng: "Dynamic target detection and tracking with vehicle-mounted multi-sensor fusion", China Masters' Theses Full-text Database, Engineering Science and Technology II *
WANG Zemin: "Research on key technologies of vision-based semantic SLAM", China Masters' Theses Full-text Database, Information Science and Technology *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111797728A (en) * 2020-06-19 2020-10-20 浙江大华技术股份有限公司 Moving object detection method and device, computing device and storage medium
CN111814602A (en) * 2020-06-23 2020-10-23 成都信息工程大学 Intelligent vehicle environment dynamic target detection method based on vision
CN111814602B (en) * 2020-06-23 2022-06-17 成都信息工程大学 Intelligent vehicle environment dynamic target detection method based on vision
CN112884835A (en) * 2020-09-17 2021-06-01 中国人民解放军陆军工程大学 Visual SLAM method for target detection based on deep learning
CN112883836A (en) * 2021-01-29 2021-06-01 中国矿业大学 Video detection method for deformation of underground coal mine roadway
CN112883836B (en) * 2021-01-29 2024-04-16 中国矿业大学 Video detection method for deformation of underground coal mine roadway
CN114170535A (en) * 2022-02-11 2022-03-11 北京卓翼智能科技有限公司 Target detection positioning method, device, controller, storage medium and unmanned aerial vehicle

Also Published As

Publication number Publication date
CN110599522B (en) 2023-04-11

Similar Documents

Publication Publication Date Title
CN110599522B (en) Method for detecting and removing dynamic target in video sequence
CN110298884B (en) Pose estimation method suitable for monocular vision camera in dynamic environment
CN110688905B (en) Three-dimensional object detection and tracking method based on key frame
CN107341815B (en) Violent motion detection method based on multi-view stereoscopic vision scene stream
US10803604B1 (en) Layered motion representation and extraction in monocular still camera videos
CN107403451B (en) Self-adaptive binary characteristic monocular vision odometer method, computer and robot
CN110570457A (en) Three-dimensional object detection and tracking method based on stream data
Cherian et al. Accurate 3D ground plane estimation from a single image
CN113744315B (en) Semi-direct vision odometer based on binocular vision
Xu et al. Dynamic obstacle detection based on panoramic vision in the moving state of agricultural machineries
CN112446882A (en) Robust visual SLAM method based on deep learning in dynamic scene
Zhang et al. Image sequence segmentation using 3-D structure tensor and curve evolution
CN103077536B (en) Space-time mutative scale moving target detecting method
Singh et al. Fusing semantics and motion state detection for robust visual SLAM
CN107358624B (en) Monocular dense instant positioning and map reconstruction method
CN111161219B (en) Robust monocular vision SLAM method suitable for shadow environment
CN113592947B (en) Method for realizing visual odometer by semi-direct method
CN108694348B (en) Tracking registration method and device based on natural features
Morar et al. Time-consistent segmentation of indoor depth video frames
JP4201958B2 (en) Moving image object extraction device
Tistarelli Computation of coherent optical flow by using multiple constraints
Davies et al. Stereoscopic human detection in a natural environment
CN111932584A (en) Method and device for determining moving object in image
Matsumoto et al. Real-time enhancement of RGB-D point clouds using piecewise plane fitting
Lv et al. UCED-Detector: An Ultra-fast Corner Event Detector for Event Camera in Complex Scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant