CN103945211A - Method for generating depth map sequence through single-visual-angle color image sequence
Publication number: CN103945211A
Application number: CN201410092424.5A
Authority: CN (China)
Prior art keywords: sequence, image, depth map, color image, pixel
Description
Technical field
The present invention relates to a method for generating a corresponding depth map sequence from a single-view two-dimensional color image sequence.
Background technology
A depth map sequence carries the essential information for reconstructing three-dimensional video. Current methods for obtaining depth map sequences fall into two categories: active and passive. Active acquisition mainly refers to using a depth camera to measure range information in the three-dimensional scene and representing the measurements in the form of a map. Passive acquisition mainly computes depth from two-dimensional color image sequences. Active acquisition can only be performed during the video capture stage; for two-dimensional color image sequences that have already been collected, the range information of the three-dimensional scene is lost, and a passive method must be used to compute the depth map sequence.
In passive acquisition, disparity information is conventionally computed by matching two-dimensional color image sequences captured from multiple viewpoints; the disparity is then converted into depth through the geometric relationship between the views and stored as a depth map sequence. However, this approach requires multi-view two-dimensional color image sequences together with the registration parameters between the views in order to obtain accurate depth information. In real life, a large number of video sources are already-collected two-dimensional video sequences that contain only single-view information, and no additional viewpoints can be captured after the fact. How to obtain a corresponding depth map sequence from an already-acquired single-view two-dimensional color image sequence has therefore become a problem in urgent need of a solution.
When generating a depth map from a single-view color image, the depth values of the scene are usually estimated from visual priors, that is, spatial cues such as geometric structure and occlusion relationships, and a corresponding depth map is generated. Such techniques have achieved some success. For image sequences, however, previous methods generally generate the depth map frame by frame and do not take full account of the temporal characteristics between images in the sequence. This not only degrades the quality of the generated depth maps, but can even cause erroneous temporal jitter in the depth map sequence, harming the final result. The present invention targets the single-view color image sequence case and jointly exploits image spatial information and sequence temporal information to effectively improve the quality of the generated depth map sequence.
Summary of the invention
The object of the present invention is to improve the quality of depth map sequences generated from single-view two-dimensional color image sequences, to increase the accuracy of the depth map calculation, and to remedy defects such as depth map temporal jitter, spatial errors, and poor scene reproduction. To this end, a method for generating a depth map sequence from a single-view color image sequence is provided. The present invention computes the depth map sequence jointly in the temporal and spatial domains: it extracts smoothness features of each pixel in both the spatial domain and the temporal domain, and performs the depth map calculation during the reading and scanning process. Such a method helps produce depth map sequences with fewer defects such as temporal jitter, spatial instability, and poor scene reproduction.
The technical solution of the present invention is as follows:
A method for generating a depth map sequence from a single-view color image sequence, characterized by comprising the following steps:
(A1) inputting a color image sequence;
(A2) reading, in temporal order, a frame of the color image sequence, and converting that color frame to a grayscale image;
(A3) performing, on the grayscale image, the joint temporal-spatial depth calculation in ZigZag scan order, to obtain the depth map sequence;
(A4) repeating steps A2 to A3 until all color images in the sequence have been processed;
(A5) outputting the depth map sequence obtained.
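The overall flow (A1) to (A5) can be sketched in Python. This is an illustrative skeleton only; the function names and arguments are placeholders, not part of the claimed method:

```python
def generate_depth_sequence(color_frames, to_gray, depth_from_gray):
    """Skeleton of steps (A1)-(A5). `to_gray` and `depth_from_gray` are
    placeholders for the grayscale conversion of step (A2) and the joint
    temporal-spatial depth calculation of step (A3)."""
    depth_maps = []
    prev_gray = None
    for frame in color_frames:                           # (A2) temporal order
        gray = to_gray(frame)                            # (A2) to grayscale
        depth_maps.append(depth_from_gray(gray, prev_gray))  # (A3)
        prev_gray = gray                                 # (A4) next frame
    return depth_maps                                    # (A5) output
```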
The grayscale image in step (A2) is obtained in one of the following ways:
(B1) taking any single color component of the color image;
or, (B2) taking a weighted sum of the color components of the color image;
or, (B3) converting to another color space and taking any single component;
or, (B4) converting to another color space and taking a weighted sum of the new components.
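As an illustration of modes (B1) and (B2), the following Python sketch forms a grayscale image as a weighted sum of color components. The BT.601 luma weights used by default are an assumed choice, since the patent leaves the weighting open:

```python
import numpy as np

def to_gray(img, weights=(0.299, 0.587, 0.114)):
    """Mode (B2): weighted sum of the color components of an H x W x 3 image.
    The BT.601 luma weights used by default are an assumption; the patent
    leaves the weighting open. Mode (B1) is simply img[:, :, c]."""
    w = np.asarray(weights, dtype=np.float64)
    return np.asarray(img, dtype=np.float64) @ w
```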
The ZigZag scan of the grayscale image in step (A3) refers to one of the following modes:
(C1) scanning the image from top to bottom, one row of pixels at a time, first from left to right, then from right to left, and so on alternately, until all pixels of the image have been scanned; or, (C2) scanning the image from top to bottom, one row of pixels at a time, first from right to left, then from left to right, and so on alternately, until all pixels of the image have been scanned.
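A minimal Python sketch of the serpentine scan order, covering mode (C1) with start_left=True and mode (C2) with start_left=False:

```python
def zigzag_coords(height, width, start_left=True):
    """Yield (x, y) pixel coordinates in the "ZigZag" order of steps (C1)/(C2):
    rows from top to bottom, the horizontal direction alternating per row."""
    left_to_right = start_left
    for y in range(height):
        xs = range(width) if left_to_right else range(width - 1, -1, -1)
        for x in xs:
            yield x, y
        left_to_right = not left_to_right
```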
The joint temporal-spatial depth calculation in step (A3) comprises the following steps:
(D1) using a temporal matching technique, including but not limited to optical flow, matching the current image against the adjacent image to obtain the temporal feature of each pixel;
(D2) obtaining the spatial feature of each pixel of the current image;
(D3) scanning the grayscale image in ZigZag order; when the scan direction is left to right, comparing the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels P(x-1, y), P(x-1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3; or, (D4) scanning the grayscale image in ZigZag order; when the scan direction is right to left, comparing the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels P(x+1, y), P(x+1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3;
(D5) taking the minimum of d1, d2, d3; if this minimum exceeds the threshold range, the depth value of pixel P(x, y) is a preset initial value; otherwise the depth value of pixel P(x, y) is the minimum of d1, d2, d3;
(D6) repeating (D2) to (D5) until the depth values of all pixels have been computed, yielding the depth map sequence.
The depth map sequence generation method according to the embodiments of the present invention is easy to use, efficient, and of stable quality. In particular, it offers the following advantages:
(1) Good temporal stability of the generated depth map sequence: temporal features are introduced into the generation process, so the generated depth map sequence is temporally stable.
(2) Fewer spatial errors in the generated depth map sequence: conventional methods generate depth values from spatial features alone, and when the spatial-feature trend and the depth trend disagree, depth errors easily arise. The present method also considers temporal features, which greatly reduces such errors.
Description of the drawings
Fig. 1 shows the positions of the pixels used in the joint temporal-spatial depth calculation of the present invention.
Embodiments
The present invention is further described below by way of an embodiment.
(A1) inputting a color image sequence;
(A2) reading, in temporal order, a frame of the color image sequence, and converting that color frame to a grayscale image. Here the grayscale image is formed from a single color component of the color image; it may of course also be obtained in one of the following ways: as a weighted sum of the color components of the color image; or by converting to another color space and taking a single component; or by converting to another color space and taking a weighted sum of the new components;
(A3) performing, on the grayscale image, the joint temporal-spatial depth calculation in ZigZag scan order, to obtain the depth map sequence. The ZigZag scan refers to one of the following modes: (C1) scanning the image from top to bottom, one row of pixels at a time, first from left to right, then from right to left, and so on alternately, until all pixels have been scanned; or, (C2) scanning the image from top to bottom, one row of pixels at a time, first from right to left, then from left to right, and so on alternately, until all pixels have been scanned;
(A4) repeating steps A2 to A3 until all color images in the sequence have been processed;
(A5) outputting the depth map sequence obtained.
The joint temporal-spatial depth calculation in step (A3) comprises the following steps:
(D1) Using a temporal matching technique, here optical flow, the current image is matched against the adjacent image, the motion vector of each pixel is computed, and the temporal feature value of each pixel is computed from its motion vector. Let P be any pixel in the image, T_P the temporal feature value of pixel P, and P' the matching point of P in the adjacent image. T_P is computed according to:
T_P = f(MV_x, MV_y)
where MV_x = x_P - x_P' and MV_y = y_P - y_P', with x_P and y_P the image abscissa and ordinate of pixel P, and x_P' and y_P' the image abscissa and ordinate of pixel P'. The choice of f(x, y) is open; any reasonable function may be chosen as f(x, y).
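As a sketch of the temporal feature computation, assuming the matched point P' is already known (for example from optical flow) and choosing the Euclidean magnitude as one admissible f, which the patent does not prescribe:

```python
import math

def temporal_feature(p, p_matched, f=None):
    """T_P = f(MV_x, MV_y) with MV_x = x_P - x_P', MV_y = y_P - y_P'.
    `p` and `p_matched` are the (x, y) coordinates of P and of its match P'
    in the adjacent image. The patent leaves f open; the Euclidean magnitude
    used by default here is just one admissible choice."""
    if f is None:
        f = lambda mvx, mvy: math.hypot(mvx, mvy)
    mv_x = p[0] - p_matched[0]
    mv_y = p[1] - p_matched[1]
    return f(mv_x, mv_y)
```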
(D2) The spatial feature of each pixel of the current image is obtained. Let P be any pixel in the image and S_P the spatial feature value of pixel P; S_P is computed according to:
S_P = g(P), where the function g(P) denotes the pixel value of pixel P.
(D3) The grayscale image is scanned in ZigZag order; when the scan direction is left to right, the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels A, B, C are compared, where the coordinates of A, B, C are (x-1, y), (x-1, y-1), and (x, y-1) respectively, and the corresponding candidate depth values d(A), d(B), d(C) are then computed.
Or, (D4) the grayscale image is scanned in ZigZag order; when the scan direction is right to left, the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels C, D, E are compared, where the coordinates of C, D, E are (x, y-1), (x+1, y), and (x+1, y-1) respectively, and the corresponding candidate depth values d(C), d(D), d(E) are computed.
The positions of P, A, B, C, D, E are shown in Fig. 1.
The time-space feature difference of two pixels is computed according to:
TS_{P,q} = α·h(T_P - T_q) + β·h(S_P - S_q),
where h(x) = |x|, and α and β are weight factors, real numbers between 0 and 1. In the present embodiment, α = β = 0.5.
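The time-space feature difference can be written directly in Python, with α = β = 0.5 and h(x) = |x| as in the embodiment:

```python
def ts_difference(t_p, s_p, t_q, s_q, alpha=0.5, beta=0.5):
    """TS_{P,q} = alpha * h(T_P - T_q) + beta * h(S_P - S_q), where
    h(x) = |x| and alpha = beta = 0.5 as in the present embodiment."""
    return alpha * abs(t_p - t_q) + beta * abs(s_p - s_q)
```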
Then, for pixel P, the candidate depth value d(q) is computed from the time-space feature difference TS_{P,q} and the depth value already assigned to the previously processed pixel q. In the case of (D3), q may be any of the three points A, B, C; in the case of (D4), q may be any of the three points C, D, E.
(D5) The minimum of the three candidate depth values is taken; if this minimum exceeds the threshold range, the depth value of pixel P(x, y) is the preset initial value; otherwise the depth value of pixel P(x, y) is the minimum candidate depth value. In the present embodiment, the threshold range is 0 to 255.
(D6) Steps (D2) to (D5) are repeated until the depth values of all pixels have been computed, yielding the depth map sequence.
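Steps (D2) to (D6) can be sketched as follows. Since the exact candidate-depth formula is not reproduced in the available text, this sketch makes the hypothetical assumption d(q) = depth(q) + TS_{P,q}; the neighbor sets and the threshold/initial-value rule follow (D3) to (D5):

```python
import numpy as np

def depth_map(gray, T, init_depth=128.0, threshold=255.0, alpha=0.5, beta=0.5):
    """Sketch of steps (D2)-(D6): joint temporal-spatial depth calculation
    over one ZigZag pass. `gray` holds the spatial features S (pixel values)
    and `T` the temporal features; both are H x W arrays.

    HYPOTHETICAL candidate rule: d(q) = depth(q) + TS_{P,q}; the source text
    does not reproduce the exact formula."""
    h, w = gray.shape
    depth = np.full((h, w), float(init_depth))
    left_to_right = True                       # scan mode (C1)
    for y in range(h):
        xs = range(w) if left_to_right else range(w - 1, -1, -1)
        for x in xs:
            # Neighbors A, B, C (left-to-right) or C, D, E (right-to-left),
            # per Fig. 1; in-bounds neighbors are already assigned by scan order.
            if left_to_right:
                nbrs = [(x - 1, y), (x - 1, y - 1), (x, y - 1)]
            else:
                nbrs = [(x, y - 1), (x + 1, y), (x + 1, y - 1)]
            cands = []
            for qx, qy in nbrs:
                if 0 <= qx < w and 0 <= qy < h:
                    # time-space feature difference TS_{P,q}
                    ts = (alpha * abs(T[y, x] - T[qy, qx])
                          + beta * abs(gray[y, x] - gray[qy, qx]))
                    cands.append(depth[qy, qx] + ts)
            # (D5): keep the minimum candidate unless it exceeds the threshold
            if cands and min(cands) <= threshold:
                depth[y, x] = min(cands)
        left_to_right = not left_to_right
    return depth
```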
Claims (5)
Priority Applications (1)
CN201410092424.5A | Priority date: 2014-03-13 | Filing date: 2014-03-13 | Title: Method for generating depth map sequence through single-visual-angle color image sequence
Publications (1)
CN103945211A (A) | Publication date: 2014-07-23
Family ID: 51192659
Country Status (1)
CN | CN103945211A (en)
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title
CN101640809A * | 2009-08-17 | 2010-02-03 | Zhejiang University | Depth extraction method merging motion information and geometric information
CN101765022A * | 2010-01-22 | 2010-06-30 | Zhejiang University | Depth representation method based on optical flow and image segmentation
CN101945295A * | 2009-07-06 | 2011-01-12 | Samsung Electronics Co., Ltd. | Method and device for generating depth maps
CN102026012A * | 2010-11-26 | 2011-04-20 | Tsinghua University | Method and device for generating a depth map in 2D-to-3D conversion of planar video
CN102223554A * | 2011-06-09 | 2011-10-19 | Tsinghua University | Depth image sequence generating method and device for a planar image sequence
CN102263979A * | 2011-08-05 | 2011-11-30 | Tsinghua University | Depth map generation method and device for 2D-to-3D conversion of planar video
CN102368826A * | 2011-11-07 | 2012-03-07 | Tianjin University | Real-time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN102769746A * | 2012-06-27 | 2012-11-07 | Ningbo University | Method for processing multi-viewpoint depth video

2014
2014-03-13 | CN application CN201410092424.5A filed; patent CN103945211A/en, status: not active (application discontinuation)
Legal Events
C06 / PB01 | Publication
C10 / SE01 | Entry into substantive examination (request for substantive examination in force)
C02 / WD01 | Invention patent application deemed withdrawn after publication (Patent Law 2001); application publication date: 2014-07-23