CN110599522B - Method for detecting and removing dynamic target in video sequence - Google Patents
- Publication number: CN110599522B (application CN201910879106.6A)
- Authority: CN (China)
- Prior art keywords: dynamic, scene, mask, image, information
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/10 — Segmentation; Edge detection
- G06T7/215 — Motion-based segmentation
- G06T7/248 — Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving reference images or patches
- G06T2207/10016 — Video; Image sequence
- G06T2207/10021 — Stereoscopic video; Stereoscopic image sequence
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention discloses a method for detecting and eliminating dynamic targets in a video sequence. First, left and right images are acquired by a binocular camera mounted on a robot, and the images are preprocessed by rectification, denoising, and graying. Instance segmentation is then performed on the left image with Mask R-CNN, and ORB features are extracted from both images; the scene flow between two frames is computed, after which dynamic target detection and dynamic target elimination are carried out. Because the instance segmentation network segments the instances in the scene with pixel accuracy and the subsequent elimination uses the segmentation result, the dynamic targets are eliminated with high precision. The dynamics of each instance in the scene is judged both by the instance segmentation method and by the scene-flow method; the two checks together raise the probability that a dynamic target is detected, so the missed-detection rate of the system is low.
Description
Technical Field
The invention relates to the technical field of target detection, in particular to a method for detecting and removing dynamic targets in a video sequence.
Background
With the increasing performance of embedded devices, the computational resources required by visual SLAM can now be met on small platforms, and visual SLAM is becoming common in everyday life. Major technology and internet companies invest heavily in it: examples include autonomous vehicles, service robots in shopping malls, augmented- and mixed-reality applications, and the ubiquitous sweeping robots and drones.
Although visual SLAM (Simultaneous Localization And Mapping) technology has matured considerably, there is still no visual SLAM solution that operates stably in every environment, owing to occlusions in the scene, motion blur caused by camera movement, and other factors. Most existing visual SLAM systems rest on the strong assumption that the robot's operating scene is completely static, whereas real-life scenes are generally highly dynamic; accurate localization and mapping in complex dynamic environments therefore remain open problems.
In existing visual SLAM systems, one approach eliminates dynamic objects by clustering optical flow or scene flow: after a sparse scene flow is obtained from the image, objects can be segmented with it. Because the scene flow represents the spatial motion of image points, and the surface of a rigid object moves consistently, the scene flow can be partitioned into geometric clusters. Each cluster then represents one rigid body, whose motion is summarized by the cluster center. This approach derives spatial structure from the motion information of the scene flow, but it has two problems:
the first problem is that if the scene flow calculated in the space is too little, that is, the optical flow information of too many pixels cannot be calculated in the absence of feature texture, the accuracy of segmenting the instances by using the scene flow clustering will be poor. If the extracted scene flow is too much, the calculation time for calculating the scene flow and the scene flow cluster can hardly reach the real time; the second problem is that there is a high probability that objects in the scene are non-rigid, and if non-rigid, the motion of the object surface may not be uniform.
The other approach clusters and segments the rigid bodies in the image using the RGB image and depth information, then computes the motion of each segmented object from the scene flow on it. Since clustering uses spatial and texture information, the number of clusters matters; without prior knowledge it is difficult to determine how many cluster centers exist. Ideally the number of cluster centers equals the number of instances plus the number of background regions in the scene, but in practice too many centers are usually used, causing over-segmentation of the image.
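The clustering approach criticized above can be sketched, under the rigid-body assumption, as a plain k-means over the per-point motion vectors. This is an illustration of the prior art, not the patent's method; all function names are ours:

```python
import numpy as np

def cluster_scene_flow(flows, k, iters=20):
    """k-means over 3-D scene-flow vectors: each resulting center is
    taken as the motion of one assumed-rigid body.  flows: (N, 3)."""
    # farthest-point initialization so the demo is deterministic
    centers = [flows[0]]
    for _ in range(1, k):
        d = np.min([np.linalg.norm(flows - c, axis=1) for c in centers], axis=0)
        centers.append(flows[d.argmax()])
    centers = np.array(centers, dtype=float)
    labels = np.zeros(len(flows), dtype=int)
    for _ in range(iters):
        # assign every vector to its nearest motion center
        d = np.linalg.norm(flows[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):  # recompute centers of non-empty clusters
            if np.any(labels == j):
                centers[j] = flows[labels == j].mean(axis=0)
    return labels, centers
```

The two failure modes in the text map directly onto this sketch: with too few flow vectors the clusters are unreliable, and with many vectors the per-iteration distance computation is what makes real-time operation hard; moreover, choosing `k` without prior knowledge is exactly the over-segmentation problem described above.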
Therefore, a scheme with high dynamic target rejection precision and low system omission factor is urgently needed at present to solve the problems.
Disclosure of Invention
The invention aims to provide a method for detecting and removing dynamic targets in a video sequence so as to solve the problems in the prior art: all objects, dynamic and static, are segmented with an instance segmentation method, and the dynamic ones are detected and removed.
In order to achieve the purpose, the invention provides the following scheme: the invention provides a method for detecting and removing dynamic targets in a video sequence, which comprises the following steps:
step one, image acquisition: a left image and a right image are acquired with a binocular camera;
step two, image preprocessing: the left and right images are preprocessed by rectification, denoising, graying, and the like;
step three, instance segmentation with Mask R-CNN: instance segmentation is performed on the left image using Mask R-CNN, and the mask obtained from the instance segmentation serves as the first input of dynamic target detection;
step four, scene flow calculation: ORB features are first extracted from the left and right images; the features are then matched between the left and right images, between the front and rear frames of the left image, and between the front and rear frames of the right image; the scene flow between the two frames is computed from the matched correspondences and serves as the second input of dynamic target detection;
step five, dynamic target detection: the dynamic target detection program judges the motion of each instance using the instance class information, the segmentation mask, and the scene-flow information output by instance segmentation;
step six, dynamic target elimination: according to the instance class information, the scene-flow information of each semi-dynamic object is compared with that of the background to judge the object's dynamics; the dynamic targets in the dynamic-object mask image are eliminated, and an image with the dynamic targets occluded is output.
Preferably, the instance segmentation information in step five falls into three categories: static, dynamic, and semi-dynamic. Dynamic objects are instances assumed to move with high probability in the scene, such as people, birds, and dogs; static objects are assumed to be approximately stationary in the scene, such as traffic lights and beds; a semi-dynamic object remains stationary most of the time but may move when in contact with a dynamic object, such as a book, bag, or umbrella.
Preferably, the motion of each instance is judged as follows:
for a dynamic object, its mask is merged directly into the occlusion mask of the image frame, and that region is excluded from the subsequent stages of the algorithm; a static object is left unprocessed and treated as part of the static background, through which localization and mapping are performed; for a semi-dynamic object, the intersection over union (IoU) of its mask with the dynamic mask is computed, and when the IoU is greater than 0.3 and the mask area is smaller than that of the overlapping dynamic target, the semi-dynamic object is regarded as dynamic.
Preferably, for dynamic objects missed by the first check, the judgment is made by computing the scene flow, i.e., the scene-flow information of the semi-dynamic object is compared with that of the background to judge the object's dynamics.
Assume the scene-flow vector of the background is SF_b = (u_b, v_b, w_b) and that of the semi-dynamic object is SF_hs = (u_hs, v_hs, w_hs). Two thresholds are set as criteria for the deviation between the two vectors:
one is the cosine distance of the two vectors:
sim(SF_b, SF_hs) = (SF_b · SF_hs) / (|SF_b| |SF_hs|)
the other is the ratio of the two vectors' module lengths:
ratio_m = max(|SF_b|, |SF_hs|) / min(|SF_b|, |SF_hs|)
When one of the two criteria is not satisfied — the cosine distance sim(SF_b, SF_hs) falls below its threshold, or the condition
ratio_m < 1.2
no longer holds — the semi-dynamic object is considered dynamic with respect to the background.
The invention discloses the following technical effects:
(1) High dynamic-target removal precision. The instance segmentation network used in the proposed dynamic target detection and elimination method segments the instances in the scene with pixel accuracy, and the subsequent elimination uses the segmentation result, so dynamic targets are eliminated with high precision.
(2) Low missed-detection rate. The proposed method judges the dynamics of each instance in the scene with both the instance segmentation method and the scene-flow method; the two checks together raise the probability that a dynamic target is detected, so the system's missed-detection rate is low.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without inventive exercise.
FIG. 1 is a schematic flow diagram of the present invention;
FIG. 2 is a schematic view of indoor and outdoor optical flows, wherein FIG. 2a is a schematic view of outdoor optical flows and FIG. 2b is a schematic view of indoor optical flows;
fig. 3 is a schematic diagram of indoor and outdoor scene flow, where fig. 3a is a schematic diagram of indoor scene flow and fig. 3b is a schematic diagram of outdoor scene flow.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Referring to fig. 1-3, the invention provides a method for detecting and removing dynamic targets in a video sequence, comprising the following steps:
step one, image acquisition: a left image and a right image are acquired with a binocular camera;
step two, image preprocessing: the left and right images are preprocessed by rectification, denoising, graying, and the like;
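A minimal sketch of step two. A real pipeline would use calibrated stereo rectification (e.g. OpenCV's `stereoRectify`/`remap`) and a proper denoiser; the graying and smoothing stages look roughly like this, with all function names being illustrative:

```python
import numpy as np

def to_gray(bgr):
    """Luma grayscale conversion with ITU-R BT.601 weights."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return np.rint(0.114 * b + 0.587 * g + 0.299 * r).astype(np.uint8)

def box_denoise(gray, k=3):
    """Simple k x k mean filter as a stand-in for real denoising."""
    pad = k // 2
    p = np.pad(gray.astype(np.float32), pad, mode="edge")
    out = np.zeros(gray.shape, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            # accumulate the shifted windows, then average
            out += p[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return np.rint(out / (k * k)).astype(np.uint8)
```

Both functions operate per frame on the left and right images before feature extraction.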
step three, instance segmentation with Mask R-CNN: instance segmentation is performed on the left image using Mask R-CNN, and the mask obtained from the instance segmentation serves as the first input of dynamic target detection;
step four, scene flow calculation: ORB features are first extracted from the left and right images; the features are then matched between the left and right images, between the front and rear frames of the left image, and between the front and rear frames of the right image; the scene flow between the two frames is computed from the matched correspondences and serves as the second input of dynamic target detection;
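Step four's matching stage can be illustrated with a brute-force Hamming matcher over binary descriptors (ORB descriptors are 256-bit; shortened to 32 bits here for readability — in practice OpenCV's `BFMatcher` with `NORM_HAMMING` does this job). The ratio-test value and distance cutoff below are assumed, not from the patent:

```python
import numpy as np

def match_hamming(desc_a, desc_b, max_dist=64, ratio=0.8):
    """Pair binary descriptors between two images: nearest neighbour by
    Hamming distance, kept only if clearly better than the second best."""
    a = np.unpackbits(desc_a[:, None, :], axis=2)   # (Na, 1, bits)
    b = np.unpackbits(desc_b[None, :, :], axis=2)   # (1, Nb, bits)
    d = (a != b).sum(axis=2)                        # (Na, Nb) Hamming distances
    matches = []
    for i in range(len(desc_a)):
        order = np.argsort(d[i])
        best = order[0]
        second = order[1] if len(order) > 1 else order[0]
        if d[i, best] <= max_dist and d[i, best] < ratio * d[i, second]:
            matches.append((i, int(best)))
    return matches
```

The same routine serves all four matching directions named in the text: left-right within a stereo pair, and frame-to-frame for each camera.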
the motion method of each example is judged as follows:
for a dynamic object, directly combining the mask of the dynamic object into a shielding mask of an image frame, and shielding the effect of the part in the subsequent flow of the algorithm; regarding the static object as a static background without processing, and positioning and drawing the static object through the static background; and judging the intersection ratio of the mask and the dynamic mask of the semi-dynamic object, and regarding the semi-dynamic object as the dynamic object when the intersection ratio is larger than 0.3 and the mask area is smaller than the overlapped target.
For dynamic objects missed by the first check, the judgment is made by computing the scene flow, i.e., the scene-flow information of the semi-dynamic object is compared with that of the background to judge the object's dynamics.
Assume the scene-flow vector of the background is SF_b = (u_b, v_b, w_b) and that of the semi-dynamic object is SF_hs = (u_hs, v_hs, w_hs). Two thresholds are set as criteria for the deviation between the two vectors:
one is the cosine distance of the two vectors:
sim(SF_b, SF_hs) = (SF_b · SF_hs) / (|SF_b| |SF_hs|)
the other is the ratio of the two vectors' module lengths:
ratio_m = max(|SF_b|, |SF_hs|) / min(|SF_b|, |SF_hs|)
When one of the two criteria is not satisfied — the cosine distance sim(SF_b, SF_hs) falls below its threshold, or the condition
ratio_m < 1.2
no longer holds — the semi-dynamic object is considered dynamic with respect to the background.
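Assuming non-zero scene-flow vectors, the two criteria reduce to a few lines. The module-length bound of 1.2 comes from the text, while the cosine threshold of 0.9 is an assumed placeholder (the patent's actual value did not survive this extraction):

```python
import numpy as np

def is_dynamic_vs_background(sf_b, sf_hs, cos_thresh=0.9, ratio_thresh=1.2):
    """Compare a semi-dynamic object's scene-flow vector against the
    background's; the object is dynamic when either agreement
    criterion fails.  cos_thresh is an assumed value."""
    sf_b, sf_hs = np.asarray(sf_b, float), np.asarray(sf_hs, float)
    nb, nh = np.linalg.norm(sf_b), np.linalg.norm(sf_hs)
    sim = sf_b @ sf_hs / (nb * nh)        # cosine similarity of the vectors
    ratio_m = max(nb, nh) / min(nb, nh)   # module-length ratio >= 1
    return sim < cos_thresh or ratio_m >= ratio_thresh
```

Identical vectors pass both checks (static relative to the background); a reversed or rescaled vector fails one of them and marks the object dynamic.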
Step five, dynamic target detection: the dynamic target detection program judges the motion of each instance using the instance class information, the segmentation mask, and the scene-flow information output by instance segmentation;
the instance segmentation information falls into three categories: static, dynamic, and semi-dynamic. Dynamic objects are instances assumed to move with high probability in the scene, such as people, birds, and dogs; static objects are assumed to be approximately stationary in the scene, such as traffic lights and beds; a semi-dynamic object remains stationary most of the time but may move when in contact with a dynamic object, such as a book, bag, or umbrella.
Step six, dynamic target elimination: according to the instance class information, the scene-flow information of each semi-dynamic object is compared with that of the background to judge the object's dynamics; the dynamic targets in the dynamic-object mask image are eliminated, and an image with the dynamic targets occluded is output.
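Step six's output stage amounts to masking: once the per-frame occlusion mask is assembled, every dynamic pixel is blanked before the image reaches the rest of the pipeline. This is a sketch; a real SLAM front end might instead simply skip features that fall under the mask:

```python
import numpy as np

def mask_out_dynamic(image, dynamic_mask, fill=0):
    """Blank every pixel covered by the combined dynamic mask and
    return the occluded image passed on to later stages."""
    out = image.copy()                     # keep the input frame intact
    out[dynamic_mask.astype(bool)] = fill  # broadcast over color channels
    return out
```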
Optical flow and scene flow: optical flow estimates the displacement between corresponding points in two images, as shown in FIG. 2; it is the projection of the motion of scene objects onto the camera plane. When an object's motion vector changes on the image plane, its motion can be detected, but when the object moves roughly perpendicular to the camera plane (along the optical axis), optical flow cannot judge its motion state. Scene flow is the extension of optical flow to stereoscopic vision: where optical flow describes the instantaneous motion of points on the image, scene flow describes the instantaneous motion of points in space. FIG. 3 visualizes the scene flow, where thinner arrow lines indicate smaller parallax and thicker lines larger parallax. This is another important principle underlying the solution of the invention.
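For rectified stereo, the scene flow of a tracked point is just the difference of its triangulated 3-D positions in consecutive frames. A toy version under an idealized pinhole model (principal point at the image origin and square pixels — both simplifying assumptions):

```python
import numpy as np

def triangulate(u, v, u_right, fx, baseline):
    """Back-project a rectified stereo match to 3-D.
    Disparity d = u - u_right; depth Z = fx * baseline / d."""
    d = u - u_right
    z = fx * baseline / d
    return np.array([u * z / fx, v * z / fx, z])

def scene_flow(match_t0, match_t1, fx, baseline):
    """Scene flow of one tracked point: its 3-D displacement between
    two frames.  Each match is (u_left, v, u_right) in pixels."""
    return triangulate(*match_t1, fx, baseline) - triangulate(*match_t0, fx, baseline)
```

This also shows why scene flow recovers motion along the optical axis that optical flow misses: a change in disparity changes Z even when the image coordinates barely move.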
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience of description of the present invention, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.
Claims (1)
1. A method for detecting and removing dynamic targets in a video sequence, characterized by comprising the following steps:
step one, image acquisition: a left image and a right image are acquired with a binocular camera;
step two, image preprocessing: the left and right images are preprocessed by rectification, denoising, and graying;
step three, instance segmentation with Mask R-CNN: instance segmentation is performed on the left image using Mask R-CNN, and the mask obtained from the instance segmentation serves as the first input of dynamic target detection;
step four, scene flow calculation: ORB features are first extracted from the left and right images; the features are then matched between the left and right images, between the front and rear frames of the left image, and between the front and rear frames of the right image; the scene flow between the two frames is computed from the matched correspondences and serves as the second input of dynamic target detection;
step five, dynamic target detection: the dynamic target detection program judges the motion of each instance using the instance class information, the segmentation mask, and the scene-flow information output by instance segmentation;
step six, dynamic target elimination: according to the instance class information, the scene-flow information of each semi-dynamic object is compared with that of the background to judge the object's dynamics;
the instance segmentation information in step five falls into three categories: static, dynamic, and semi-dynamic; a dynamic object is an instance assumed to move with high probability in the scene, a static object is assumed to be approximately stationary in the scene, and a semi-dynamic object remains stationary most of the time but may move when in contact with a dynamic object;
the motion of each instance is judged as follows:
for a dynamic object, its mask is merged directly into the occlusion mask of the image frame, and that region is excluded from the subsequent stages of the algorithm; a static object is left unprocessed and treated as part of the static background, through which localization and mapping are performed; for a semi-dynamic object, the intersection over union of its mask with the dynamic mask is computed, and when the intersection over union is greater than 0.3 and the mask area is smaller than that of the overlapping dynamic target, the semi-dynamic object is regarded as dynamic;
for dynamic objects missed by the first check, the judgment is made by computing the scene flow, i.e., the scene-flow information of the semi-dynamic object is compared with that of the background to judge the object's dynamics;
assume the scene-flow vector of the background is SF_b = (u_b, v_b, w_b) and that of the semi-dynamic object is SF_hs = (u_hs, v_hs, w_hs); two thresholds are set as criteria for the deviation between the two vectors:
one is the cosine distance of the two vectors:
sim(SF_b, SF_hs) = (SF_b · SF_hs) / (|SF_b| |SF_hs|)
the other is the ratio of the two vectors' module lengths:
ratio_m = max(|SF_b|, |SF_hs|) / min(|SF_b|, |SF_hs|)
when one of the two criteria is not satisfied — the cosine distance sim(SF_b, SF_hs) falls below its threshold, or the condition
ratio_m < 1.2
no longer holds — the semi-dynamic object is considered dynamic with respect to the background;
finally, the dynamic targets in the mask image of the dynamic objects are removed, and an image with the dynamic targets occluded is output.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910879106.6A CN110599522B (en) | 2019-09-18 | 2019-09-18 | Method for detecting and removing dynamic target in video sequence |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910879106.6A CN110599522B (en) | 2019-09-18 | 2019-09-18 | Method for detecting and removing dynamic target in video sequence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110599522A CN110599522A (en) | 2019-12-20 |
CN110599522B true CN110599522B (en) | 2023-04-11 |
Family
ID=68860692
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910879106.6A Active CN110599522B (en) | 2019-09-18 | 2019-09-18 | Method for detecting and removing dynamic target in video sequence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110599522B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111126390A (en) * | 2019-12-23 | 2020-05-08 | 腾讯科技(深圳)有限公司 | Correlation method and device for identifying identification pattern in media content |
CN111797728B (en) * | 2020-06-19 | 2024-06-14 | 浙江大华技术股份有限公司 | Method and device for detecting moving object, computing equipment and storage medium |
CN111814602B (en) * | 2020-06-23 | 2022-06-17 | 成都信息工程大学 | Intelligent vehicle environment dynamic target detection method based on vision |
CN112884835A (en) * | 2020-09-17 | 2021-06-01 | 中国人民解放军陆军工程大学 | Visual SLAM method for target detection based on deep learning |
CN112883836B (en) * | 2021-01-29 | 2024-04-16 | 中国矿业大学 | Video detection method for deformation of underground coal mine roadway |
CN114170535A (en) * | 2022-02-11 | 2022-03-11 | 北京卓翼智能科技有限公司 | Target detection positioning method, device, controller, storage medium and unmanned aerial vehicle |
CN114565675B (en) * | 2022-03-03 | 2024-09-24 | 南京工业大学 | Method for removing dynamic feature points at front end of visual SLAM |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8369622B1 (en) * | 2009-10-29 | 2013-02-05 | Hsu Shin-Yi | Multi-figure system for object feature extraction tracking and recognition |
CN103955948A (en) * | 2014-04-03 | 2014-07-30 | 西北工业大学 | Method for detecting space moving object in dynamic environment |
CN105868745A (en) * | 2016-06-20 | 2016-08-17 | 重庆大学 | Weather identifying method based on dynamic scene perception |
CN108596974A (en) * | 2018-04-04 | 2018-09-28 | 清华大学 | Dynamic scene robot localization builds drawing system and method |
CN109816686A (en) * | 2019-01-15 | 2019-05-28 | 山东大学 | Robot semanteme SLAM method, processor and robot based on object example match |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8369622B1 (en) * | 2009-10-29 | 2013-02-05 | Hsu Shin-Yi | Multi-figure system for object feature extraction tracking and recognition |
CN103955948A (en) * | 2014-04-03 | 2014-07-30 | 西北工业大学 | Method for detecting space moving object in dynamic environment |
CN105868745A (en) * | 2016-06-20 | 2016-08-17 | 重庆大学 | Weather identifying method based on dynamic scene perception |
CN108596974A (en) * | 2018-04-04 | 2018-09-28 | 清华大学 | Dynamic scene robot localization builds drawing system and method |
CN109816686A (en) * | 2019-01-15 | 2019-05-28 | 山东大学 | Robot semanteme SLAM method, processor and robot based on object example match |
Non-Patent Citations (4)
Title |
---|
A Point-Line Feature based Visual SLAM Method in Dynamic Indoor Scene;Runzhi Wang等;《2018 Ubiquitous Positioning, Indoor Navigation and Location-Based Services》;20181206;第1-6页 * |
A Survey of Methods for Mobile Robot Localization and Mapping in Dynamic Indoor Environments;Aishwarya A. Panchpor等;《2018 Conference on Signal Processing And Communication Engineering Systems》;20180315;第138-144页 * |
Research on Key Technologies of Vision-based Semantic SLAM; Wang Zemin; China Master's Theses Full-text Database, Information Science and Technology; 20190515; pp. I138-1503 *
Dynamic Target Detection and Tracking with Vehicle-mounted Multi-sensor Fusion; Zeng Xiangfeng; China Master's Theses Full-text Database, Engineering Science and Technology II; 20180415; pp. C035-97 *
Also Published As
Publication number | Publication date |
---|---|
CN110599522A (en) | 2019-12-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110599522B (en) | Method for detecting and removing dynamic target in video sequence | |
CN103325112B (en) | Moving target method for quick in dynamic scene | |
CN107341815B (en) | Violent motion detection method based on multi-view stereoscopic vision scene stream | |
CN110688905A (en) | Three-dimensional object detection and tracking method based on key frame | |
CN107403451B (en) | Self-adaptive binary characteristic monocular vision odometer method, computer and robot | |
AU2020300067B2 (en) | Layered motion representation and extraction in monocular still camera videos | |
Sun et al. | Fast motion object detection algorithm using complementary depth image on an RGB-D camera | |
Xu et al. | Dynamic obstacle detection based on panoramic vision in the moving state of agricultural machineries | |
Shalaby et al. | Algorithms and applications of structure from motion (SFM): A survey | |
Singh et al. | Fusing semantics and motion state detection for robust visual SLAM | |
Zhang et al. | Image sequence segmentation using 3-D structure tensor and curve evolution | |
EP2989611A1 (en) | Moving object detection | |
Min et al. | Coeb-slam: A robust vslam in dynamic environments combined object detection, epipolar geometry constraint, and blur filtering | |
CN112446885B (en) | SLAM method based on improved semantic optical flow method in dynamic environment | |
CN107358624B (en) | Monocular dense instant positioning and map reconstruction method | |
Liu et al. | A joint optical flow and principal component analysis approach for motion detection | |
Gan et al. | A dynamic detection method to improve SLAM performance | |
CN113592947B (en) | Method for realizing visual odometer by semi-direct method | |
Morar et al. | Time-consistent segmentation of indoor depth video frames | |
JP4201958B2 (en) | Moving image object extraction device | |
Tistarelli | Computation of coherent optical flow by using multiple constraints | |
Akshay | Single moving object detection and tracking using Horn-Schunck optical flow method | |
Davies et al. | Stereoscopic human detection in a natural environment | |
Matsumoto et al. | Real-time enhancement of RGB-D point clouds using piecewise plane fitting | |
CN118314162B (en) | Dynamic visual SLAM method and device for time sequence sparse reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||