CN103945211A - Method for generating depth map sequence through single-visual-angle color image sequence


Info

Publication number: CN103945211A (application CN201410092424.5A)
Authority: CN (China)
Prior art keywords: sequence, image, depth map, color image, pixel
Application number: CN201410092424.5A
Other languages: Chinese (zh)
Inventors: 杨铀, 于国星, 喻莉, 陈小平
Original assignees: 华中科技大学 (Huazhong University of Science and Technology), 深圳深讯和科技有限公司
Application filed by 华中科技大学 (Huazhong University of Science and Technology) and 深圳深讯和科技有限公司
Priority to CN201410092424.5A
Publication of CN103945211A


Abstract

The invention relates to a method for generating a depth map sequence from a single-view color image sequence. The method includes: a first step of inputting the color image sequence; a second step of reading one frame of the sequence in temporal order and converting that color frame to a gray image; a third step of performing a joint temporal/spatial depth computation on the gray image in a zig-zag scan order to obtain the depth map sequence; a fourth step of repeating the second and third steps until every color image in the sequence has been processed; and a fifth step of outputting the resulting depth map sequence. The method yields depth map sequences with good temporal stability and fewer spatial errors.

Description

Method for generating a depth map sequence from a single-view color image sequence

Technical field

The present invention relates to a method for generating a corresponding depth map sequence from a single-view two-dimensional color image sequence.

Background art

A depth map sequence is essential information for reconstructing 3D video. Current methods for obtaining depth map sequences fall into two classes, active and passive. Active acquisition mainly uses a depth camera to measure range information in the 3D scene and represents the measurements as depth maps. Passive acquisition computes depth from two-dimensional color image sequences. Active acquisition can only be performed at capture time; for two-dimensional color image sequences that have already been collected, the range information of the 3D scene is lost, and a passive method must be used to compute the depth map sequence.

Passive methods usually match two-dimensional color image sequences captured from multiple viewpoints to compute disparity information, then convert the disparity into depth via the spatial geometry of the scene and store the result as a depth map sequence. Such methods, however, require multi-view color image sequences together with the registration parameters between viewpoints in order to obtain accurate depth. In practice, a large body of video sources consists of already-captured two-dimensional sequences that contain only a single viewpoint, and the other viewpoints can no longer be recorded. How to obtain a corresponding depth map sequence from an existing single-view two-dimensional color image sequence therefore becomes a problem in urgent need of a solution.

When generating a depth map from a single-view color image, the depth of the scene is usually estimated from visual priors, such as spatial information about geometric structure and occlusion relations, and a corresponding depth map is generated. These techniques achieve a certain degree of success. For image sequences, however, previous methods generally operate frame by frame and do not fully exploit the temporal characteristics between images in the sequence. This not only degrades the quality of the generated depth maps but can even introduce erroneous temporal jitter into the depth map sequence, harming the final result. The present invention targets the single-view color image sequence case and jointly exploits the spatial information of each image and the temporal information of the sequence, effectively improving the quality of the generated depth map sequence.

Summary of the invention

The object of the present invention is to improve the quality of depth map sequences generated from single-view two-dimensional color image sequences: to increase the accuracy of the depth computation and to reduce defects such as temporal jitter, spatial errors, and poor scene reproduction in the depth maps. To this end, a method for generating a depth map sequence from a single-view color image sequence is provided. The method computes the depth map sequence jointly in the temporal and spatial domains: it extracts per-pixel smoothness features in both domains and computes the depth values during the reading and scanning process. Such a method helps produce depth map sequences with less temporal jitter, fewer spatial instabilities, and better scene reproduction.

The technical solution of the present invention is as follows:

A method for generating a depth map sequence from a single-view color image sequence, characterized by comprising the following steps:

(A1) inputting the color image sequence;

(A2) reading one frame of the color image sequence in temporal order and converting this color frame to a gray image;

(A3) performing a joint temporal/spatial depth computation on said gray image in zig-zag scan order to obtain the depth map sequence;

(A4) repeating steps A2 to A3 until all color images in the sequence have been processed;

(A5) outputting the resulting depth map sequence.
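Steps (A1) through (A5) amount to a per-frame loop. The sketch below is a minimal driver, not the patented implementation: the grayscale conversion is a simple channel average, and the per-frame depth step is an injectable stand-in (the name `depth_fn` is introduced here for illustration only).

```python
import numpy as np

def depth_sequence(color_frames, depth_fn=None):
    """Steps (A1)-(A5): read each color frame in temporal order (A2),
    convert it to gray, run the per-frame depth computation (A3), and
    collect the results until all frames are processed (A4, A5).

    depth_fn stands in for the joint temporal/spatial zig-zag depth
    computation of step (A3); by default it returns an all-zero map so
    the driver is runnable on its own."""
    if depth_fn is None:
        depth_fn = lambda gray, prev_gray: np.zeros_like(gray)
    depth_maps, prev_gray = [], None
    for frame in color_frames:                            # (A2): temporal order
        gray = np.asarray(frame, dtype=float).mean(axis=2)  # simple gray stand-in
        depth_maps.append(depth_fn(gray, prev_gray))      # (A3)
        prev_gray = gray                                  # kept for temporal matching
    return depth_maps                                     # (A5)
```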

The gray image in step (A2) is obtained in one of the following ways:

(B1) from any single color component of the color image;

or, (B2) as a weighted sum of the color components of the color image;

or, (B3) after conversion to another color space, from any single component;

or, (B4) after conversion to another color space, as a weighted sum of the new components.
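The conversion options above can be sketched as follows; only (B1) and (B2) are shown, since (B3)/(B4) are the same operations applied after a color-space conversion. The default weights are the BT.601 luma coefficients, an illustrative choice that the patent does not prescribe.

```python
import numpy as np

def to_gray(rgb, mode="weighted", weights=(0.299, 0.587, 0.114)):
    """Convert an H x W x 3 color image to a gray image.

    mode="single"   -> option (B1): take one color component (here the first).
    mode="weighted" -> option (B2): weighted sum of the components.
    The default weights are the BT.601 luma coefficients (an assumption;
    the patent leaves the weighting open)."""
    rgb = np.asarray(rgb, dtype=np.float64)
    if mode == "single":
        return rgb[..., 0]
    return rgb @ np.asarray(weights, dtype=np.float64)
```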

The zig-zag scan of the gray image in step (A3) is one of the following:

(C1) scanning the image row by row from top to bottom, the first row from left to right, the next row from right to left, and so on, until every pixel of the image has been scanned; or, (C2) scanning the image row by row from top to bottom, the first row from right to left, the next row from left to right, and so on, until every pixel of the image has been scanned.
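The two scan orders (C1) and (C2) can be expressed as a coordinate generator; a minimal sketch:

```python
def zigzag_scan(height, width, start_left=True):
    """Yield (x, y) pixel coordinates row by row, top to bottom,
    alternating the horizontal direction on each row.
    start_left=True  -> option (C1): first row left to right.
    start_left=False -> option (C2): first row right to left."""
    for y in range(height):
        xs = range(width)
        # Even rows keep the starting direction; odd rows reverse it.
        if (y % 2 == 0) != start_left:
            xs = reversed(xs)
        for x in xs:
            yield (x, y)
```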

The joint temporal/spatial depth computation in step (A3) comprises the following steps:

(D1) using a temporal matching technique, including but not limited to optical flow, to match the current image against adjacent images and obtain the temporal feature of each pixel;

(D2) obtaining the spatial feature of each pixel of the current image;

(D3) scanning the gray image in zig-zag order; when the scan direction is left to right, comparing the spatio-temporal feature differences between the current pixel P(x, y) and its three neighbors P(x-1, y), P(x-1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3; or, (D4) when the scan direction is right to left, comparing the spatio-temporal feature differences between P(x, y) and its three neighbors P(x+1, y), P(x+1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3;

(D5) taking the minimum of d1, d2, d3; if this minimum falls outside the threshold range, the depth value of pixel P(x, y) is the preset initial value; otherwise the depth value of P(x, y) is that minimum;

(D6) repeating (D2) to (D5) until the depth values of all pixels have been computed, yielding the depth map sequence.

The depth map sequence generation method of the embodiments of the present invention is easy to use, efficient, and stable in quality. In particular, it offers the following advantages:

(1) Good temporal stability of the generated depth map sequence: temporal features are incorporated into the generation process, so the resulting depth map sequence is temporally stable.

(2) Fewer spatial errors in the generated depth map sequence: conventional methods generate depth values from spatial features alone, which easily produces erroneous depth values when the spatial feature trend and the depth trend disagree. By also taking temporal features into account, the present method greatly reduces such errors.

Brief description of the drawings

Fig. 1 shows the positions of the pixels involved in the joint temporal/spatial depth computation of the present invention.

Embodiment

The present invention is further described below through an embodiment.

(A1) input the color image sequence;

(A2) read one frame of the color image sequence in temporal order and convert this color frame to a gray image. In this embodiment the gray image is formed from a single color component of the color image; it could equally be obtained in one of the other ways: as a weighted sum of the color components of the color image; or, after conversion to another color space, from any single component; or, after conversion to another color space, as a weighted sum of the new components;

(A3) perform the joint temporal/spatial depth computation on said gray image in zig-zag scan order to obtain the depth map sequence. The zig-zag scan is one of the following: (C1) scan the image row by row from top to bottom, the first row from left to right, the next row from right to left, and so on, until every pixel of the image has been scanned; or, (C2) scan the image row by row from top to bottom, the first row from right to left, the next row from left to right, and so on, until every pixel of the image has been scanned;

(A4) repeat steps A2 to A3 until all color images in the sequence have been processed;

(A5) output the resulting depth map sequence.

The joint temporal/spatial depth computation in step (A3) comprises the following steps:

(D1) use a temporal matching technique, here optical flow, to match the current image against the adjacent image, compute the motion vector of each pixel, and derive each pixel's temporal feature value from its motion vector. Let P be any pixel in the image, T_p the temporal feature value of P, and P' the matched point of P in the adjacent image. T_p is computed as:

T_p = f(MV_x, MV_y)

where MV_x = |x_p - x_p'| and MV_y = |y_p - y_p'|, with x_p, y_p the image coordinates of pixel P and x_p', y_p' those of pixel P'. The choice of f(x, y) is left open; any reasonable function may be chosen as f(x, y).
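Step (D1) can be sketched as follows. Two choices here are assumptions, not taken from the patent: a toy exhaustive block-matching search stands in for optical flow, and f is taken as the Euclidean motion magnitude sqrt(MV_x^2 + MV_y^2).

```python
import numpy as np

def temporal_feature(prev, curr, x, y, block=1, search=2):
    """Temporal feature T_p = f(MV_x, MV_y) for the pixel at (x, y).

    The motion vector is found by exhaustive block matching between the
    current and previous gray frames (a toy stand-in for optical flow),
    and f is taken as the Euclidean magnitude, an assumed choice since
    the patent leaves f open.  (x, y) must lie at least `block` pixels
    from the image border."""
    h, w = curr.shape
    r = block
    ref = curr[y - r:y + r + 1, x - r:x + r + 1]  # block around the pixel
    best, mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if not (r <= yy < h - r and r <= xx < w - r):
                continue  # candidate block would leave the image
            cand = prev[yy - r:yy + r + 1, xx - r:xx + r + 1]
            cost = float(np.abs(ref - cand).sum())  # SAD matching cost
            if best is None or cost < best:
                best, mv = cost, (abs(dx), abs(dy))  # (MV_x, MV_y)
    return float(np.hypot(mv[0], mv[1]))
```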

(D2) obtain the spatial feature of each pixel of the current image. Let P be any pixel in the image and S_p its spatial feature value; S_p is computed as:

S_p = g(P), where the function g(P) denotes the pixel value of pixel P.

(D3) scan the gray image in zig-zag order; when the scan direction is left to right, compare the spatio-temporal feature differences between the current pixel P(x, y) and its three neighbors A, B, C, whose coordinates are (x-1, y), (x-1, y-1), and (x, y-1) respectively, and then compute the corresponding candidate depth values d(A), d(B), d(C);

or, (D4) when the scan direction is right to left, compare the spatio-temporal feature differences between P(x, y) and its three neighbors C, D, E, whose coordinates are (x, y-1), (x+1, y), and (x+1, y-1) respectively, and compute the corresponding candidate depth values d(C), d(D), d(E).

The positions of P, A, B, C, D, and E are shown in Fig. 1.

The spatio-temporal feature difference between two pixels p and q is computed as:

TS_{p,q} = α·h(T_p - T_q) + β·h(S_p - S_q)

where h(x) = |x|, and α and β are weight factors, real numbers between 0 and 1. In this embodiment, α = β = 0.5.
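With h(x) = |x| and the embodiment's weights, the spatio-temporal difference is a one-liner:

```python
def st_difference(t_p, t_q, s_p, s_q, alpha=0.5, beta=0.5):
    """Spatio-temporal feature difference between pixels p and q:
    TS_{p,q} = alpha*|T_p - T_q| + beta*|S_p - S_q|,
    with the embodiment's weights alpha = beta = 0.5."""
    return alpha * abs(t_p - t_q) + beta * abs(s_p - s_q)
```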

Then, for pixel P, the candidate depth value d(q) is computed for each neighbor q from the depth value already assigned to q. For the case in (D3), q is one of A, B, C; for the case in (D4), q is one of C, D, E.

(D5) take the minimum of the three candidate depth values; if this minimum falls outside the threshold range, the depth value of pixel P(x, y) is the preset initial value; otherwise the depth value of P(x, y) is the minimum candidate. In this embodiment the threshold range is 0 to 255.

(D6) repeat (D2) to (D5) until the depth values of all pixels have been computed, yielding the depth map sequence.
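Steps (D2) through (D6) combine into one zig-zag pass. The sketch below assumes precomputed temporal (T) and spatial (S) feature maps, and, because the exact candidate-depth formula is elided in this text, it assumes d(q) = depth(q) + TS_{p,q}, i.e. a neighbor's already-assigned depth plus the spatio-temporal difference; candidates outside the threshold range fall back to the initial value, as in (D5).

```python
import numpy as np

def joint_depth(T, S, alpha=0.5, beta=0.5, init_depth=0.0, thresh=255.0):
    """Joint temporal/spatial depth computation over a zig-zag scan.

    T and S are per-pixel temporal and spatial feature maps of equal
    shape.  The candidate depth for each causal neighbor q is assumed
    to be depth(q) + alpha*|T_p - T_q| + beta*|S_p - S_q| (the patent
    text elides the exact formula); the minimum candidate is kept if it
    lies within [0, thresh], otherwise init_depth is used (step D5)."""
    h, w = T.shape
    depth = np.full((h, w), init_depth, dtype=float)
    for y in range(h):
        left_to_right = (y % 2 == 0)           # zig-zag row direction
        xs = range(w) if left_to_right else reversed(range(w))
        for x in xs:
            if left_to_right:                  # neighbors A, B, C of (D3)
                nbrs = [(x - 1, y), (x - 1, y - 1), (x, y - 1)]
            else:                              # neighbors C, D, E of (D4)
                nbrs = [(x, y - 1), (x + 1, y), (x + 1, y - 1)]
            cands = []
            for qx, qy in nbrs:
                if 0 <= qx < w and 0 <= qy < h:  # neighbor inside the image
                    ts = (alpha * abs(T[y, x] - T[qy, qx])
                          + beta * abs(S[y, x] - S[qy, qx]))
                    cands.append(depth[qy, qx] + ts)
            if cands:                          # (D5): threshold the minimum
                d = min(cands)
                depth[y, x] = d if 0.0 <= d <= thresh else init_depth
    return depth
```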

Claims (5)

1. A method for generating a depth map sequence from a single-view color image sequence, characterized by comprising the following steps:
(A1) inputting the color image sequence;
(A2) reading one frame of the color image sequence in temporal order and converting this color frame to a gray image;
(A3) performing a joint temporal/spatial depth computation on said gray image in zig-zag scan order to obtain the depth map sequence;
(A4) repeating steps A2 to A3 until all color images in the sequence have been processed;
(A5) outputting the resulting depth map sequence.
2. The method for generating a depth map sequence from a single-view color image sequence according to claim 1, characterized in that the gray image in step (A2) is obtained in one of the following ways:
(B1) from any single color component of the color image;
or, (B2) as a weighted sum of the color components of the color image;
or, (B3) after conversion to another color space, from any single component;
or, (B4) after conversion to another color space, as a weighted sum of the new components.
3. The method for generating a depth map sequence from a single-view color image sequence according to claim 1, characterized in that the zig-zag scan of the gray image in step (A3) is one of the following:
(C1) scanning the image row by row from top to bottom, the first row from left to right, the next row from right to left, and so on, until every pixel of the image has been scanned; or, (C2) scanning the image row by row from top to bottom, the first row from right to left, the next row from left to right, and so on, until every pixel of the image has been scanned.
4. The method for generating a depth map sequence from a single-view color image sequence according to claim 1, characterized in that the joint temporal/spatial depth computation in step (A3) comprises the following steps:
(D1) using a temporal matching technique to match the current image against adjacent images and obtain the temporal feature of each pixel;
(D2) obtaining the spatial feature of each pixel of the current image;
(D3) scanning the gray image in zig-zag order; when the scan direction is left to right, comparing the spatio-temporal feature differences between the current pixel P(x, y) and its three neighbors P(x-1, y), P(x-1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3; or, (D4) when the scan direction is right to left, comparing the spatio-temporal feature differences between P(x, y) and its three neighbors P(x+1, y), P(x+1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3;
(D5) taking the minimum of d1, d2, d3; if this minimum falls outside the threshold range, setting the depth value of pixel P(x, y) to the preset initial value; otherwise setting the depth value of P(x, y) to that minimum;
(D6) repeating (D2) to (D5) until the depth values of all pixels have been computed, yielding the depth map sequence.
5. The method for generating a depth map sequence from a single-view color image sequence according to claim 4, characterized in that said temporal matching technique is optical flow.
CN201410092424.5A · Priority 2014-03-13 · Filed 2014-03-13 · Method for generating depth map sequence through single-visual-angle color image sequence

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201410092424.5A | 2014-03-13 | 2014-03-13 | Method for generating depth map sequence through single-visual-angle color image sequence


Publications (1)

Publication Number | Publication Date
CN103945211A | 2014-07-23

Family ID: 51192659


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101640809A (en) * 2009-08-17 2010-02-03 浙江大学 Depth extraction method of merging motion information and geometric information
CN101765022A (en) * 2010-01-22 2010-06-30 浙江大学 Depth representing method based on light stream and image segmentation
CN101945295A (en) * 2009-07-06 2011-01-12 三星电子株式会社 Method and device for generating depth maps
CN102026012A (en) * 2010-11-26 2011-04-20 清华大学 Generation method and device of depth map through three-dimensional conversion to planar video
CN102223554A (en) * 2011-06-09 2011-10-19 清华大学 Depth image sequence generating method and device of plane image sequence
CN102263979A (en) * 2011-08-05 2011-11-30 清华大学 Depth map generation method and device for plane video three-dimensional conversion
CN102368826A (en) * 2011-11-07 2012-03-07 天津大学 Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN102769746A (en) * 2012-06-27 2012-11-07 宁波大学 Method for processing multi-viewpoint depth video



Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 2014-07-23