CN102622768B - Depth-map gaining method of plane videos - Google Patents

Depth-map gaining method of plane videos

Info

Publication number
CN102622768B
CN102622768B (application CN201210067349.8A)
Authority
CN
China
Prior art keywords
moving object
depth
video
foreground moving
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201210067349.8A
Other languages
Chinese (zh)
Other versions
CN102622768A (en)
Inventor
戴琼海
巨金龙
林靖宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201210067349.8A
Publication of CN102622768A
Application granted
Publication of CN102622768B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a depth map acquisition method for planar (2D) videos. The method comprises the following steps: detecting and extracting the foreground moving objects in an original planar video, computing the occlusion relations of the foreground moving objects, and obtaining a mask image of the foreground moving objects; performing background reconstruction on the original planar video according to the mask image, so as to obtain a background video sequence with the foreground moving objects removed; computing a background depth map sequence from the background video sequence; computing an initial depth map sequence from the original planar video; and obtaining the depth map sequence of the original planar video from the initial depth information of the foreground moving objects in the initial depth map sequence and the geometric information of the foreground moving objects in the background depth map sequence. With the method of the embodiments of the invention, the depth map of every frame of a planar video can be obtained accurately; the resulting depth maps have sharp edges, distinct depth layers, good smoothness, and high temporal stability.

Description

A depth map acquisition method for planar video
Technical field
The present invention relates to the technical field of image processing, and in particular to a depth map acquisition method for planar videos based on background reconstruction and motion detection.
Background technology
In stereoscopic video processing, depth map generation for planar video refers to deriving, from a planar (2D) video, a depth map sequence that describes the stereoscopic information of the scene.
Currently, common depth map generation methods mainly perform a global computation using the geometric information of the planar video, the camera parameters, and the color characteristics of the objects in the video. Alternatively, the depth maps of non-key frames can be computed from key frame images and their corresponding depth maps, mainly by methods such as depth diffusion or contour tracking. Such methods work well when the scene changes little between key frames and non-key frames.
Background reconstruction and motion detection are key techniques in video processing. Background reconstruction refers to recovering a static background from a scene that contains moving objects. Common methods use parametric or non-parametric modeling to build a model for each pixel in the video, judging a large number of samples against the model and updating its parameters; they mainly exploit the color information of the pixels and the correlation between video frames. Motion detection refers to extracting the information of moving objects from a video that contains them. Common methods combine motion information with the color information and geometric information of the objects in order to extract the foreground objects.
Existing depth map generation methods can often only handle planar videos with simple scenes and motion patterns; they lack robustness when processing complex scenes, so the accuracy of the generated depth maps is low.
Summary of the invention
The object of the present invention is to solve at least one of the above technical deficiencies, and in particular to provide a method that can accurately obtain the depth map of every frame of a planar video.
To achieve this object, the invention provides a depth map acquisition method for planar videos, comprising the following steps: S1: detecting and extracting the foreground moving objects in an original planar video, computing the occlusion relations of the foreground moving objects, and obtaining a mask image of the foreground moving objects; S2: performing background reconstruction on the original planar video according to the mask image of the foreground moving objects, so as to obtain a background video sequence with the foreground moving objects removed; S3: computing a background depth map sequence from the background video sequence; S4: computing an initial depth map sequence from the original planar video; S5: obtaining the depth map sequence of the original planar video according to the initial depth information of the foreground moving objects in the initial depth map sequence and the geometric information of the foreground moving objects in the background depth map sequence.
In one embodiment of the invention, step S1 further comprises the following steps: S11: detecting and extracting the foreground moving objects in the original planar video by a motion detection method; S12: computing the area of each foreground moving object and removing the objects with small areas; S13: judging, according to the color information of the foreground moving objects, the occlusion relations that may exist among multiple foreground objects.
In one embodiment of the invention, step S2 further comprises the following steps: S21: obtaining, according to the mask image of the foreground moving objects, the background video sequence with the foreground moving objects removed; S22: performing image inpainting on the background video sequence according to the similarity of neighboring pixels; S23: filling the blank regions in the background video sequence by associating pixels across frames of the background video sequence according to the motion information of the foreground moving objects; S24: performing interpolation and smoothing on the background video sequence according to the similarity of neighboring pixels.
In one embodiment of the invention, the image inpainting algorithms in step S22 include the inpaint algorithm and the patch-match algorithm.
In one embodiment of the invention, step S3 comprises: applying a depth map generation algorithm for static scenes to the background video sequence to obtain the background depth map sequence.
In one embodiment of the invention, the depth map generation algorithm for static scenes includes the Bundle algorithm.
In one embodiment of the invention, step S4 comprises: performing depth estimation on the original planar video with a depth estimation algorithm to obtain the initial depth map sequence.
In one embodiment of the invention, the depth estimation algorithm includes the BP algorithm.
In one embodiment of the invention, step S5 further comprises the following steps: S51: obtaining the initial depth maps of the foreground moving objects from the mask image of the foreground moving objects and the initial depth map sequence; S52: correcting the initial depth map of the foreground moving objects in each frame by combining the information of the background depth map sequence with the spatial position information of the foreground moving objects, so as to obtain a depth map sequence containing the foreground moving objects and the background; S53: filtering the depth map sequence containing the foreground moving objects and the background.
In one embodiment of the invention, step S52 further comprises: computing the camera parameters and estimating, in combination with the spatial position information of the foreground moving objects, the plane of movement on which each foreground moving object lies; and extracting the depth information of that plane of movement from the background depth map sequence and correcting the initial depth map of the foreground moving objects in each frame accordingly.
In one embodiment of the invention, the filtering in step S53 includes bilateral filtering and Gaussian filtering.
In one embodiment of the invention, the method further comprises, after step S5: performing a smoothing post-process on the depth map sequence of the original planar video.
The invention provides a depth map acquisition method for planar videos. A mask image of the foreground moving objects of the video is obtained by motion detection; according to this mask image, background reconstruction is performed on the video to obtain a background video with the foreground moving objects removed; a depth map sequence is then computed for the background video and an initial depth map sequence is estimated for the original video; finally, these are combined with the motion information and geometric information of the foreground moving objects to obtain an optimized depth map sequence of the original video. With the method of the embodiments of the invention, the depth map of every frame of a planar video can be obtained accurately, and the resulting depth maps have sharp edges, distinct depth layers, good smoothness, and high temporal stability. In addition, the depth map acquisition method of the embodiments of the invention can handle complex scenes with multiple moving objects; it is highly adaptable and widely applicable.
Additional aspects and advantages of the invention will be set forth in part in the following description, will in part be obvious from the description, or may be learned by practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flow chart of the depth map acquisition method for planar videos according to an embodiment of the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below, and examples of the embodiments are shown in the drawings, in which the same or similar reference numerals denote the same or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the invention, and are not to be construed as limiting the invention.
Fig. 1 is a flow chart of the depth map acquisition method for planar videos according to an embodiment of the invention. The method comprises the following steps:
Step S1: detect and extract the foreground moving objects in the original planar video, compute the occlusion relations of the foreground moving objects, and obtain a mask image of the foreground moving objects. Specifically, this may comprise the following steps:
Step S11: detect and extract the foreground moving objects in the original planar video by a motion detection method, to obtain an initial foreground mask image. For example, motion information and color information can be used, and motion detection methods such as background modeling or contour tracking can be applied to detect and extract the moving objects in the original planar video. It should be pointed out that the embodiments of the invention can be applied to complex scenes with multiple moving objects, so in this step all foreground moving objects in the original planar video can be detected and extracted.
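As an illustration only (the patent does not mandate a particular detector), step S11 could be realized with an off-the-shelf background-modeling subtractor; the sketch below uses OpenCV's MOG2 model, and the threshold and morphological clean-up are assumptions.

```python
import cv2

# Minimal sketch of step S11: per-frame foreground masks from background
# modeling. MOG2 is only one possible motion-detection method.
def initial_foreground_masks(frames):
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=True)
    masks = []
    for frame in frames:
        fg = subtractor.apply(frame)                 # 255 = foreground, 127 = shadow, 0 = background
        _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)        # suppress isolated noise
        masks.append(fg)
    return masks
```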
Step S12: compute the area of each foreground moving object and remove the objects whose area is small. During detection, jitter, lighting changes, and similar causes may make part of an object in the background move slightly, and this small motion may also be identified as foreground motion, producing false detections. Since such small motions usually cover a small area, testing the area allows the false detections caused by small motions of background objects to be corrected.
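A minimal sketch of the area test in step S12, assuming a connected-component analysis; the area threshold is an illustrative value, not one taken from the patent.

```python
import cv2
import numpy as np

# Sketch of step S12: discard connected foreground regions whose area is
# small, which suppresses false detections caused by jitter or lighting.
def remove_small_regions(mask, min_area=400):
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    cleaned = np.zeros_like(mask)
    for i in range(1, n):                            # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            cleaned[labels == i] = 255
    return cleaned
```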
Step S13: judge, according to the color information of the foreground moving objects, the occlusion relations that may exist among multiple foreground objects. When one object occludes another, the occluded object shows an obvious color jump, so the occlusion relations that may exist among multiple objects in the original planar video can be derived from the color information of the foreground moving objects. Steps S12 and S13 refine the initial foreground mask image obtained in step S11.
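One crude way to detect the color jump described above is to compare the color histogram of each tracked object across consecutive frames: a large histogram distance suggests the object has become occluded. The per-object histogram idea and the distance metric are assumptions made for illustration, not the patent's prescribed procedure.

```python
import cv2

# Illustrative occlusion cue for step S13: an abrupt change in an object's
# color histogram between consecutive frames hints that it is occluded.
def occlusion_score(prev_bgr, curr_bgr, prev_mask, curr_mask):
    h_prev = cv2.calcHist([prev_bgr], [0, 1, 2], prev_mask, [8, 8, 8], [0, 256] * 3)
    h_curr = cv2.calcHist([curr_bgr], [0, 1, 2], curr_mask, [8, 8, 8], [0, 256] * 3)
    cv2.normalize(h_prev, h_prev)
    cv2.normalize(h_curr, h_curr)
    return cv2.compareHist(h_prev, h_curr, cv2.HISTCMP_BHATTACHARYYA)  # large value = likely occluded
```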
Step S2: perform background reconstruction on the original planar video according to the mask image of the foreground moving objects, to obtain a background video sequence with the foreground moving objects removed. Specifically, this may comprise the following steps:
Step S21: according to the mask image of the foreground moving objects obtained in step S1, remove the foreground moving object regions from each frame of the original planar video, obtaining an initial background video sequence;
Step S22: perform image inpainting on each frame of the initial background video sequence according to the similarity of neighboring pixels, for example their color correlation, to obtain a rough background image. In this embodiment, the image inpainting algorithm may be the inpaint algorithm, the patch-match algorithm, or the like.
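The inpaint algorithm mentioned here is available off the shelf; a minimal sketch using OpenCV's Telea inpainting follows (the radius is an assumed parameter).

```python
import cv2

# Sketch of step S22: patch each background frame where a foreground object
# was removed, relying on the similarity of neighboring pixels.
def rough_background(frame, fg_mask, radius=3):
    return cv2.inpaint(frame, fg_mask, radius, cv2.INPAINT_TELEA)
```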
Step S23: on the basis of the background images obtained in step S22, fill the blank regions in the background images by associating pixels across frames of the background video sequence according to the motion information of the foreground moving objects. In this embodiment, the motion information includes the motion direction and displacement of the moving objects in the original planar video; for an object with compound motion, its motion information includes the motion information of each of its parts. The inter-frame pixel association covers the temporal and spatial continuity between frames.
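A simplified sketch of the temporal filling in step S23 is given below. It borrows each still-missing pixel from the nearest frame in which that location is not covered by a foreground object, which assumes a largely static camera; the patent additionally exploits the motion information of the foreground objects when associating pixels across frames.

```python
import numpy as np

# Sketch of step S23: fill the remaining blank regions from neighboring
# frames in which the same pixel shows background rather than foreground.
def temporal_fill(frames, fg_masks, hole_masks, window=15):
    filled = [f.copy() for f in frames]
    for t, hole in enumerate(hole_masks):
        ys, xs = np.nonzero(hole)
        for dt in range(1, window + 1):
            for s in (t - dt, t + dt):
                if 0 <= s < len(frames) and ys.size:
                    ok = fg_masks[s][ys, xs] == 0        # background visible in frame s
                    filled[t][ys[ok], xs[ok]] = frames[s][ys[ok], xs[ok]]
                    ys, xs = ys[~ok], xs[~ok]            # keep only the still-unfilled pixels
    return filled
```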
Step S24: perform interpolation and smoothing on the background images, again according to the similarity of neighboring pixels, to obtain a background video sequence containing only background information. In the embodiments of the invention, separating the foreground information from the background information of the original planar video allows the foreground video and the background video to be processed separately, which reduces the processing difficulty and improves the accuracy of the generated depth maps.
Step S3: compute a background depth map sequence from the background video sequence. In the embodiments of the invention, a depth map generation algorithm for static scenes, such as the Bundle algorithm, can be applied to the background video sequence obtained in step S24 to compute the background depth map sequence.
Step S4: compute an initial depth map sequence from the original planar video. In the embodiments of the invention, a depth estimation algorithm, such as the BP algorithm, can be used to perform depth estimation on the original planar video.
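The BP algorithm named here is usually understood as belief propagation. Purely as an illustration of that family of estimators, the sketch below runs generic min-sum loopy belief propagation over a per-pixel depth-label cost volume; how the cost volume is built (for example from motion parallax between neighboring frames) is left to the caller, and nothing below should be read as the patent's specific implementation.

```python
import numpy as np

# Generic min-sum loopy belief propagation on a 4-connected grid, in the
# spirit of the "BP algorithm" of step S4. data_cost is an (H, W, L) volume
# of matching costs for L candidate depth labels per pixel (keep L small:
# the message update below allocates an L x L table per pixel).
def bp_depth_labels(data_cost, n_iters=10, lam=1.0, trunc=2.0):
    H, W, L = data_cost.shape
    labels = np.arange(L, dtype=np.float64)
    V = lam * np.minimum(np.abs(labels[:, None] - labels[None, :]), trunc)  # truncated-linear smoothness

    # m[d] = message arriving at each pixel from its neighbor in direction d:
    # 0 = from above, 1 = from below, 2 = from left, 3 = from right
    m = np.zeros((4, H, W, L))

    def send(pre):
        # pre holds the partial beliefs of the sending pixels; the outgoing
        # message is the min over the sender's labels of (belief + pairwise cost)
        out = np.min(pre[..., :, None] + V[None, None, :, :], axis=2)
        return out - out.min(axis=2, keepdims=True)      # normalize for stability

    for _ in range(n_iters):
        m_new = np.zeros_like(m)
        m_new[0][1:, :] = send((data_cost + m[0] + m[2] + m[3])[:-1, :])   # downward
        m_new[1][:-1, :] = send((data_cost + m[1] + m[2] + m[3])[1:, :])   # upward
        m_new[2][:, 1:] = send((data_cost + m[0] + m[1] + m[2])[:, :-1])   # rightward
        m_new[3][:, :-1] = send((data_cost + m[0] + m[1] + m[3])[:, 1:])   # leftward
        m = m_new

    belief = data_cost + m.sum(axis=0)
    return np.argmin(belief, axis=2)                     # (H, W) map of depth-label indices
```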
Step S5: obtain the depth map sequence of the original planar video according to the initial depth information of the foreground moving objects in the initial depth map sequence and the geometric information of the foreground moving objects in the background depth map sequence. Specifically, this may comprise the following steps:
Step S51: obtain the initial depth maps of the foreground moving objects by combining the mask image of the foreground moving objects with the initial depth map sequence of the original planar video.
Step S52: correct the initial depth map of the foreground moving objects in each frame by combining the information of the background depth map sequence with the motion information and geometric information of the foreground moving objects, so as to obtain a depth map sequence containing both the foreground moving objects and the background. For example, in this embodiment this step can be realized as follows: first, compute the camera parameters and, in combination with the motion information and geometric information of the foreground moving objects, estimate the plane of movement on which each foreground moving object lies, where the geometric information includes the three-dimensional spatial positions and three-dimensional spatial structure of the foreground and background in the scene; then, extract the depth information of that plane of movement from the background depth map sequence, fuse it with the initial depth information in the initial depth maps, optimize the depth information of the foreground moving objects, and thereby correct the initial depth map of the foreground moving objects in each frame.
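A much-simplified sketch of the correction in step S52 is shown below: the plane of movement is approximated by the background depth sampled along the object's bottom contact row, and the object's initial depth is offset so that it sits on that plane. The patent itself estimates the plane from the camera parameters and the object's motion and geometric information, so this is only an illustration of the fusion idea.

```python
import numpy as np

# Simplified sketch of step S52: anchor a foreground object's depth to the
# background depth of the surface it moves on, using the object's lowest
# mask row as its ground-contact line.
def refine_foreground_depth(init_depth, bg_depth, fg_mask):
    refined = bg_depth.copy()                      # start from the background depth map
    ys, xs = np.nonzero(fg_mask)
    if ys.size == 0:
        return refined
    contact_row = ys.max()                         # bottom row of the object
    contact_cols = xs[ys == contact_row]
    plane_depth = np.median(bg_depth[contact_row, contact_cols])
    offset = plane_depth - np.median(init_depth[contact_row, contact_cols])
    refined[ys, xs] = init_depth[ys, xs] + offset  # shift the object onto the plane
    return refined
```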
Step S53: filter the depth map sequence containing the foreground moving objects and the background, to preserve the temporal and spatial continuity of the depth of the foreground moving objects in the depth map sequence. In this embodiment, the filtering methods include bilateral filtering, Gaussian filtering, and the like.
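A minimal sketch of the filtering in step S53, combining a per-frame bilateral filter with a simple exponential blend across frames; the filter parameters are assumptions.

```python
import cv2

# Sketch of step S53: edge-preserving spatial smoothing plus a light
# temporal blend to keep the foreground depth stable over time.
def filter_depth_sequence(depth_frames, alpha=0.7):
    out, prev = [], None
    for depth in depth_frames:
        smoothed = cv2.bilateralFilter(depth.astype('float32'), 9, 25, 9)
        if prev is not None:
            smoothed = alpha * smoothed + (1 - alpha) * prev   # temporal smoothing
        out.append(smoothed)
        prev = smoothed
    return out
```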
In a preferred embodiment of the invention, the method further comprises, after step S53: performing a smoothing post-process on the depth map sequence of the original planar video to improve the smoothness of the depth maps.
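Tying the steps together, an end-to-end driver might look like the following; every helper name is a hypothetical placeholder for the corresponding step described above (the sketches for S52 and S53 were given earlier), not an API defined by the patent.

```python
# Hypothetical driver for steps S1-S5; the helper functions are placeholders.
def depth_maps_for_planar_video(frames):
    fg_masks = extract_foreground_masks(frames)            # S1: motion detection + occlusion handling
    bg_frames = reconstruct_background(frames, fg_masks)   # S2: background reconstruction
    bg_depth = background_depth_sequence(bg_frames)        # S3: static-scene depth (e.g. Bundle-style)
    init_depth = initial_depth_sequence(frames)            # S4: initial depth (e.g. BP-based)
    fused = [refine_foreground_depth(i, b, m)              # S5: fuse foreground and background depth
             for i, b, m in zip(init_depth, bg_depth, fg_masks)]
    return filter_depth_sequence(fused)                    # S53: spatial and temporal filtering
```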
In summary, the invention provides a depth map acquisition method for planar videos. A mask image of the foreground moving objects of the video is obtained by motion detection; according to this mask image, background reconstruction is performed on the video to obtain a background video with the foreground moving objects removed; a depth map sequence is then computed for the background video and an initial depth map sequence is estimated for the original video; finally, these are combined with the motion information and geometric information of the foreground moving objects to obtain an optimized depth map sequence of the original video. With the method of the embodiments of the invention, the depth map of every frame of a planar video can be obtained accurately, and the resulting depth maps have sharp edges, distinct depth layers, good smoothness, and high temporal stability. In addition, the depth map acquisition method of the embodiments of the invention can handle complex scenes with multiple moving objects and has broad application prospects.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a particular feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic uses of these terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that various changes, modifications, substitutions, and alterations can be made to these embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.

Claims (11)

1. A depth map acquisition method for planar videos, characterized by comprising the following steps:
S1: detecting and extracting the foreground moving objects in an original planar video, computing the occlusion relations of the foreground moving objects, and obtaining a mask image of the foreground moving objects;
S2: performing background reconstruction on the original planar video according to the mask image of the foreground moving objects, so as to obtain a background video sequence with the foreground moving objects removed;
S3: computing a background depth map sequence from the background video sequence;
S4: computing an initial depth map sequence from the original planar video;
S5: obtaining the depth map sequence of the original planar video according to the initial depth information of the foreground moving objects in the initial depth map sequence and the geometric information of the foreground moving objects in the background depth map sequence, wherein step S5 further comprises the following steps:
S51: obtaining the initial depth maps of the foreground moving objects from the mask image of the foreground moving objects and the initial depth map sequence;
S52: correcting the initial depth map of the foreground moving objects in each frame by combining the information of the background depth map sequence with the spatial position information of the foreground moving objects, so as to obtain a depth map sequence containing the foreground moving objects and the background;
S53: filtering the depth map sequence containing the foreground moving objects and the background.
2. The depth map acquisition method for planar videos as claimed in claim 1, characterized in that step S1 further comprises the following steps:
S11: detecting and extracting the foreground moving objects in the original planar video by a motion detection method;
S12: computing the area of each foreground moving object and removing the objects with small areas;
S13: judging, according to the color information of the foreground moving objects, the occlusion relations that may exist among multiple foreground objects.
3. The depth map acquisition method for planar videos as claimed in claim 1, characterized in that step S2 further comprises the following steps:
S21: obtaining, according to the mask image of the foreground moving objects, the background video sequence with the foreground moving objects removed;
S22: performing image inpainting on the background video sequence according to the similarity of neighboring pixels;
S23: filling the blank regions in the background video sequence by associating pixels across frames of the background video sequence according to the motion information of the foreground moving objects;
S24: performing interpolation and smoothing on the background video sequence according to the similarity of neighboring pixels.
4. The depth map acquisition method for planar videos as claimed in claim 3, characterized in that the image inpainting algorithms in step S22 include the inpaint algorithm and the patch-match algorithm.
5. The depth map acquisition method for planar videos as claimed in claim 1, characterized in that step S3 comprises: applying a depth map generation algorithm for static scenes to the background video sequence to obtain the background depth map sequence.
6. The depth map acquisition method for planar videos as claimed in claim 5, characterized in that the depth map generation algorithm for static scenes includes the Bundle algorithm.
7. The depth map acquisition method for planar videos as claimed in claim 1, characterized in that step S4 comprises: performing depth estimation on the original planar video with a depth estimation algorithm to obtain the initial depth map sequence.
8. The depth map acquisition method for planar videos as claimed in claim 7, characterized in that the depth estimation algorithm includes the BP algorithm.
9. The depth map acquisition method for planar videos as claimed in claim 1, characterized in that step S52 further comprises:
computing the camera parameters and estimating, in combination with the spatial position information of the foreground moving objects, the plane of movement on which each foreground moving object lies;
extracting the depth information of the plane of movement from the background depth map sequence, and correcting the initial depth map of the foreground moving objects in each frame.
10. The depth map acquisition method for planar videos as claimed in claim 1, characterized in that the filtering in step S53 includes bilateral filtering and Gaussian filtering.
11. The depth map acquisition method for planar videos as claimed in claim 1, characterized in that the method further comprises, after step S5: performing a smoothing post-process on the depth map sequence of the original planar video.
CN201210067349.8A 2012-03-14 2012-03-14 Depth-map gaining method of plane videos Active CN102622768B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210067349.8A CN102622768B (en) 2012-03-14 2012-03-14 Depth-map gaining method of plane videos

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210067349.8A CN102622768B (en) 2012-03-14 2012-03-14 Depth-map gaining method of plane videos

Publications (2)

Publication Number Publication Date
CN102622768A CN102622768A (en) 2012-08-01
CN102622768B (en) 2014-04-09

Family

ID=46562669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210067349.8A Active CN102622768B (en) 2012-03-14 2012-03-14 Depth-map gaining method of plane videos

Country Status (1)

Country Link
CN (1) CN102622768B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103248906B (en) * 2013-04-17 2015-02-18 清华大学深圳研究生院 Method and system for acquiring depth map of binocular stereo video sequence
CN103268606B (en) * 2013-05-15 2016-03-30 华为技术有限公司 A kind of depth information compensation method of motion blur image and device
CN104715446B (en) * 2015-02-28 2019-08-16 努比亚技术有限公司 A kind of mobile terminal and its by the method and apparatus of the object removal moved in camera shooting
CN107368188B (en) * 2017-07-13 2020-05-26 河北中科恒运软件科技股份有限公司 Foreground extraction method and system based on multiple spatial positioning in mediated reality
GB2576574B (en) * 2018-08-24 2023-01-11 Cmr Surgical Ltd Image correction of a surgical endoscope video stream
CN114007058A (en) * 2020-07-28 2022-02-01 阿里巴巴集团控股有限公司 Depth map correction method, video processing method, video reconstruction method and related devices
CN113808251B (en) * 2021-08-09 2024-04-12 杭州易现先进科技有限公司 Dense reconstruction method, system, device and medium based on semantic segmentation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI420413B (en) * 2010-07-15 2013-12-21 Chunghwa Picture Tubes Ltd Depth map enhancing method and computer-readable medium therefor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101257641A (en) * 2008-03-14 2008-09-03 清华大学 Method for converting plane video into stereoscopic video based on human-machine interaction
CN101873509A (en) * 2010-06-30 2010-10-27 清华大学 Method for eliminating background and edge shake of depth map sequence

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
范小聪: "基于背景重构的视频分割技术及应用" [Video segmentation based on background reconstruction and its applications], 《液晶与显示》 (Chinese Journal of Liquid Crystals and Displays), vol. 26, no. 2, April 2011, pp. 229-233 *
边馥苓: "真正射影像生成中遮蔽区域的补偿" [Compensation of occluded regions in true orthophoto generation], 《测绘科学》 (Science of Surveying and Mapping), vol. 34, no. 3, May 2009, pp. 81-83 *

Also Published As

Publication number Publication date
CN102622768A (en) 2012-08-01

Similar Documents

Publication Publication Date Title
CN102622768B (en) Depth-map gaining method of plane videos
Yang et al. Color-guided depth recovery from RGB-D data using an adaptive autoregressive model
Concha et al. Using superpixels in monocular SLAM
KR101758058B1 (en) Apparatus and method for estimating camera motion using depth information, augmented reality system
US9142011B2 (en) Shadow detection method and device
US20200380711A1 (en) Method and device for joint segmentation and 3d reconstruction of a scene
KR100953076B1 (en) Multi-view matching method and device using foreground/background separation
KR20120066300A (en) 3d motion recognition method and apparatus
CN105404888A (en) Saliency object detection method integrated with color and depth information
US9661307B1 (en) Depth map generation using motion cues for conversion of monoscopic visual content to stereoscopic 3D
KR20170015299A (en) Method and apparatus for object tracking and segmentation via background tracking
Abrams et al. The episolar constraint: Monocular shape from shadow correspondence
Jang et al. Discontinuity preserving disparity estimation with occlusion handling
CN104778673B (en) A kind of improved gauss hybrid models depth image enhancement method
CN105791795B (en) Stereoscopic image processing method, device and Stereoscopic Video Presentation equipment
Azartash et al. An integrated stereo visual odometry for robotic navigation
CN105335934A (en) Disparity map calculating method and apparatus
Wang et al. Improving deep stereo network generalization with geometric priors
El Ansari et al. Temporal consistent fast stereo matching for advanced driver assistance systems (ADAS)
CN111860643A (en) Robustness improving method for visual template matching based on frequency modulation model
CN105825161B (en) The skin color detection method and its system of image
Fan et al. Collaborative three-dimensional completion of color and depth in a specified area with superpixels
Plakas et al. Uncalibrated vision for 3-D underwater applications
CN109360174B (en) Three-dimensional scene reconstruction method and system based on camera pose
KR101717381B1 (en) Apparatus and method for stereo camera-based free space estimation, recording medium thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant