CN103945211A - Method for generating depth map sequence through single-visual-angle color image sequence - Google Patents


Info

Publication number
CN103945211A
CN103945211A, CN201410092424.5A, CN201410092424A
Authority
CN
China
Prior art keywords
sequence
image
depth map
pixel
color image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410092424.5A
Other languages
Chinese (zh)
Inventor
杨铀
于国星
喻莉
陈小平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SXMOBI TECHNOLOGY (SHENZHEN) Co Ltd
Huazhong University of Science and Technology
Original Assignee
SXMOBI TECHNOLOGY (SHENZHEN) Co Ltd
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SXMOBI TECHNOLOGY (SHENZHEN) Co Ltd, Huazhong University of Science and Technology filed Critical SXMOBI TECHNOLOGY (SHENZHEN) Co Ltd
Priority to CN201410092424.5A priority Critical patent/CN103945211A/en
Publication of CN103945211A publication Critical patent/CN103945211A/en
Pending legal-status Critical Current


Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to a method for generating a depth map sequence from a single-view color image sequence. The method comprises: first, inputting the color image sequence; second, reading one frame of the color image sequence in temporal order and converting that color frame to a grayscale image; third, performing joint temporal-spatial depth calculation on the grayscale image in a Zig-Zag scanning mode to obtain its depth map; fourth, repeating the second and third steps until all color images in the sequence have been processed; and fifth, outputting the resulting depth map sequence. The method yields a depth map sequence with good temporal stability and reduced spatial-domain errors.

Description

Method for generating a depth map sequence from a single-view color image sequence
Technical field
The present invention relates to a method for generating a corresponding depth map sequence from a single-view two-dimensional color image sequence.
Background technology
A depth map sequence is key information for reconstructing three-dimensional video. Current methods for obtaining a depth map sequence fall into two classes: active and passive. Active acquisition mainly uses a depth camera to measure range information in the three-dimensional scene and records the measurements in map form. Passive acquisition mainly computes depth from a two-dimensional color image sequence. Active acquisition can only be performed at the video-capture stage; for a two-dimensional color image sequence that has already been captured, the range information of the scene has been lost, and a passive method must be used to compute the depth map sequence.
In passive acquisition, two-dimensional color image sequences from multiple viewpoints are usually matched to compute disparity information, which is then converted to depth via spatial geometric relations and stored as a depth map sequence. However, such a method requires multi-view two-dimensional color image sequences together with inter-view registration parameters to obtain accurate depth. In practice, a large number of video sources are already-captured two-dimensional video sequences with only single-view information, and no further viewpoints can be captured after the fact. How to obtain the corresponding depth map sequence from an already-acquired single-view two-dimensional color image sequence has therefore become an urgent problem.
When generating a depth map from a single-view color image, visual prior information, such as spatial cues like geometric structure and occlusion relations, is usually used to estimate the depth values of the scene and generate the corresponding depth map. Such techniques achieve a certain effect. For image sequences, however, previous methods generally generate depth frame by frame and do not fully exploit the temporal characteristics between images in the sequence. This not only degrades the quality of the generated depth maps but can even cause erroneous temporal jitter in the depth map sequence, harming the final result. Aiming at the case of single-view color image sequences, the present invention jointly exploits the spatial information of each image and the temporal information of the sequence, effectively improving the quality of the generated depth map sequence.
Summary of the invention
The object of the present invention is to improve the quality of depth map sequences generated from single-view two-dimensional color image sequences: to raise the accuracy of the depth calculation and to remedy defects such as temporal jitter, spatial-domain errors, and poor scene restoration in the depth maps. To this end, a method for generating a depth map sequence from a single-view color image sequence is provided. The invention computes the depth map sequence jointly in the temporal and spatial domains: it extracts the smoothness features of each pixel in both the spatial and the temporal domain, and performs the depth calculation during the reading and scanning process. Such a method helps to produce depth map sequences with less temporal jitter, less spatial-domain instability, and better scene restoration.
The technical solution of the present invention is as follows:
A method for generating a depth map sequence from a single-view color image sequence, characterized by comprising the following steps:
(A1) inputting the color image sequence;
(A2) reading one frame of the color image sequence in temporal order and converting that color frame to a grayscale image;
(A3) performing joint temporal-spatial depth calculation on the grayscale image according to a Zig-Zag scanning mode to obtain its depth map;
(A4) repeating steps A2 to A3 until all color images in the sequence have been processed;
(A5) outputting the resulting depth map sequence.
The grayscale image in step (A2) is obtained in one of the following ways:
(B1) taking any single color component of the color image;
or (B2) taking a weighted sum of the color components of the color image;
or (B3) converting to another color space and taking any single component;
or (B4) converting to another color space and taking a weighted sum of the new components.
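Options (B1)-(B4) can be illustrated with a small sketch. The patent does not fix any particular weights; the BT.601 luma coefficients below are a common choice used here only as an example of option (B2), and option (B1) is the special case where one weight is 1 and the others are 0.

```python
def to_gray_weighted(rgb_image, weights=(0.299, 0.587, 0.114)):
    """Grayscale by weighted sum of color components, option (B2).

    rgb_image is a nested list of (r, g, b) tuples. The default weights
    are the common BT.601 luma coefficients, shown as one example; the
    patent leaves the weighting open.
    """
    return [[sum(w * c for w, c in zip(weights, px)) for px in row]
            for row in rgb_image]

def to_gray_component(rgb_image, channel=0):
    """Grayscale from a single color component, option (B1)."""
    return [[px[channel] for px in row] for row in rgb_image]

img = [[(255, 0, 0), (0, 255, 0)]]
gray_b2 = to_gray_weighted(img)    # weighted sum per pixel
gray_b1 = to_gray_component(img)   # red channel only: [[255, 0]]
```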
The Zig-Zag scanning mode of the grayscale image in step (A3) is one of the following:
(C1) scanning the image from top to bottom, one row of pixels at a time, first from left to right, then from right to left, and so on alternately, until all pixels of the image have been scanned; or (C2) scanning the image from top to bottom, one row of pixels at a time, first from right to left, then from left to right, and so on alternately, until all pixels of the image have been scanned.
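The two scanning modes can be sketched as a coordinate generator. This is a minimal illustration of the boustrophedon ("Zig-Zag") order described in (C1) and (C2); the function and parameter names are illustrative, not from the patent.

```python
def zigzag_scan(height, width, start_left=True):
    """Yield (x, y) pixel coordinates row by row, alternating direction.

    start_left=True corresponds to mode (C1): the first row is scanned
    left to right, the next right to left, and so on. start_left=False
    gives mode (C2), which starts right to left.
    """
    for y in range(height):
        left_to_right = (y % 2 == 0) == start_left
        xs = range(width) if left_to_right else range(width - 1, -1, -1)
        for x in xs:
            yield (x, y)

order = list(zigzag_scan(2, 3))  # mode (C1) on a 2x3 image
# row 0 runs left to right, row 1 runs right to left
```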
The joint temporal-spatial depth calculation in step (A3) comprises the following steps:
(D1) using a temporal matching technique, including but not limited to optical flow, to match the current image against the adjacent image and obtain the temporal feature of each pixel;
(D2) obtaining the spatial feature of each pixel of the current image;
(D3) scanning the grayscale image in the Zig-Zag scanning mode; when the scan direction is left to right, comparing the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels P(x-1, y), P(x-1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3; or (D4) scanning the grayscale image in the Zig-Zag scanning mode; when the scan direction is right to left, comparing the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels P(x+1, y), P(x+1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3;
(D5) taking the minimum of d1, d2, d3; if this minimum exceeds the threshold range, the depth value of pixel P(x, y) is the preset initial value; otherwise the depth value of pixel P(x, y) is the minimum of d1, d2, d3;
(D6) repeating (D2) to (D5) until the depth values of all pixels have been computed, yielding the depth map sequence.
The depth map sequence generation method according to embodiments of the present invention is easy to use, efficient, and of stable quality. In particular, it offers the following advantages:
(1) Good temporal stability of the generated depth map sequence: temporal features are introduced into the generation process, so the generated depth map sequence is temporally stable.
(2) Reduced spatial-domain errors in the generated depth map sequence: conventional methods generate depth values from spatial features alone, so when the spatial-feature trend and the depth trend disagree, erroneous depth values are easily produced. By also considering temporal features, the present method greatly reduces such errors.
Brief description of the drawings
Fig. 1 is a map of the pixel positions used in the joint temporal-spatial depth calculation of the present invention.
Embodiment
The present invention is further described below by way of an embodiment.
(A1) Input the color image sequence.
(A2) Read one frame of the color image sequence in temporal order and convert that color frame to a grayscale image. In this embodiment the grayscale image is formed from a single color component of the color image; it may equally be obtained in one of the following ways: by a weighted sum of the color components of the color image; or by converting to another color space and taking any single component; or by converting to another color space and taking a weighted sum of the new components.
(A3) Perform joint temporal-spatial depth calculation on the grayscale image according to the Zig-Zag scanning mode to obtain its depth map. The Zig-Zag scanning mode is one of the following: (C1) scan the image from top to bottom, one row of pixels at a time, first from left to right, then from right to left, alternating until all pixels of the image have been scanned; or (C2) scan the image from top to bottom, one row of pixels at a time, first from right to left, then from left to right, alternating until all pixels of the image have been scanned.
(A4) Repeat steps A2 to A3 until all color images in the sequence have been processed.
(A5) Output the resulting depth map sequence.
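The outer loop of steps (A1)-(A5) can be sketched as a simple driver. The per-frame depth calculation is passed in as a placeholder callable (`depth_for_frame` is an illustrative name, not from the patent), and grayscale conversion here uses option (B1), the first color component.

```python
def generate_depth_sequence(frames, depth_for_frame):
    """Driver for steps (A1)-(A5): iterate over color frames in temporal
    order, convert each to grayscale via option (B1), run the per-frame
    depth calculation, and collect the depth maps.

    frames: list of frames, each a nested list of (r, g, b) tuples.
    depth_for_frame(gray, prev_gray): stand-in for the joint
    temporal-spatial calculation of step (A3); prev_gray is the previous
    grayscale frame (None for the first frame), needed for temporal
    matching.
    """
    depth_maps = []
    prev_gray = None
    for frame in frames:                                   # (A2) temporal order
        gray = [[px[0] for px in row] for row in frame]    # option (B1)
        depth_maps.append(depth_for_frame(gray, prev_gray))  # (A3)
        prev_gray = gray
    return depth_maps                                      # (A5)
```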
The joint temporal-spatial depth calculation in step (A3) comprises the following steps:
(D1) Use a temporal matching technique, here optical flow, to match the current image against the adjacent image, compute the motion vector of each pixel, and compute the temporal feature value of each pixel from its motion vector. Let P be any pixel in the image, T_P the temporal feature value of pixel P, and P' the matching point of P in the adjacent image. T_P is computed according to the following formula:
T_P = f(MV_x, MV_y)
where MV_x = |x_P - x_P'| and MV_y = |y_P - y_P'|, with x_P and y_P the image abscissa and ordinate of pixel P, and x_P' and y_P' the image abscissa and ordinate of pixel P'. The choice of f(x, y) is open; any reasonable function may be chosen as f(x, y).
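Step (D1) can be sketched as follows. The patent deliberately leaves f open; the Euclidean magnitude used as the default below is one reasonable choice for illustration, not the patented one, and the matched point is assumed to come from some external temporal matcher such as optical flow.

```python
def temporal_feature(p, p_matched, f=None):
    """Temporal feature T_P of step (D1).

    p = (x, y) is a pixel in the current frame; p_matched is its match
    P' in the adjacent frame (e.g. from optical flow). Forms the motion
    vector magnitudes MV_x = |x_P - x_P'|, MV_y = |y_P - y_P'| and
    applies f. The default f (Euclidean magnitude of the motion vector)
    is an illustrative assumption; the patent allows any reasonable f.
    """
    if f is None:
        f = lambda mvx, mvy: (mvx ** 2 + mvy ** 2) ** 0.5
    mv_x = abs(p[0] - p_matched[0])
    mv_y = abs(p[1] - p_matched[1])
    return f(mv_x, mv_y)

t = temporal_feature((10, 10), (13, 14))  # MV = (3, 4), so T_P = 5.0
```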
(D2) Obtain the spatial feature of each pixel of the current image. Let P be any pixel in the image and S_P the spatial feature value of pixel P; S_P is computed according to the following formula:
S_P = g(P), where the function g(P) denotes the pixel value of pixel P.
(D3) Scan the grayscale image in the Zig-Zag scanning mode. When the scan direction is left to right, compare the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels A, B, C, whose coordinates are (x-1, y), (x-1, y-1), and (x, y-1) respectively, and then compute the corresponding candidate depth values d(A), d(B), d(C):
Or, (D4) scan the grayscale image in the Zig-Zag scanning mode. When the scan direction is right to left, compare the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels C, D, E, whose coordinates are (x, y-1), (x+1, y), and (x+1, y-1) respectively, and compute the corresponding candidate depth values d(C), d(D), d(E).
The positions of P, A, B, C, D, and E are shown in Fig. 1.
The time-space feature difference of two pixels p and q is computed according to the following formula:
TS_{p,q} = α·h(T_p - T_q) + β·h(S_p - S_q),
where h(x) = |x|, and α and β are weighting factors, real numbers between 0 and 1. In this embodiment, α = β = 0.5.
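The time-space feature difference formula translates directly into code. The sketch below uses the embodiment's values h(x) = |x| and α = β = 0.5 as defaults; the function name is illustrative.

```python
def ts_difference(t_p, s_p, t_q, s_q, alpha=0.5, beta=0.5):
    """Time-space feature difference of two pixels p and q:
    TS_{p,q} = alpha * |T_p - T_q| + beta * |S_p - S_q|,
    with h(x) = |x| and alpha = beta = 0.5 as in the embodiment.
    """
    return alpha * abs(t_p - t_q) + beta * abs(s_p - s_q)

# temporal features 5 vs 3, spatial features 100 vs 90:
ts = ts_difference(5, 100, 3, 90)  # 0.5*2 + 0.5*10 = 6.0
```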
Then, for pixel P, the candidate depth value d(q) of each neighboring pixel q is computed from the depth value previously assigned to q. For the case in (D3), q is one of the three points A, B, C. For the case in (D4), q is one of the three points C, D, E.
(D5) Take the minimum of the three candidate depth values. If this minimum exceeds the threshold range, the depth value of pixel P(x, y) is the preset initial value; otherwise the depth value of pixel P(x, y) is the minimum of the candidate depth values. In this embodiment, the threshold range is 0 to 255.
(D6) Repeat (D2) to (D5) until the depth values of all pixels have been computed, yielding the depth map sequence.
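The per-frame scan of steps (D2)-(D5) can be sketched end to end. Note a loud caveat: the candidate-depth formula appears only as an image in the original publication and is missing from this text, so the rule d(q) = depth(q) + TS_{P,q} used below (neighbor's assigned depth plus the time-space feature difference) is an assumed reading, not the patented formula. Only the left-to-right case of (D3) is shown, and all names are illustrative.

```python
def depth_map_sketch(temporal, spatial, init_depth=128, threshold=255):
    """Sketch of steps (D2)-(D5) for one frame, scanning rows left to
    right (the (D3) branch only, for brevity).

    temporal, spatial: 2-D lists of per-pixel feature values T_P and S_P.
    init_depth: preset initial depth value (assumed, not from the patent).
    threshold: upper end of the threshold range, 0..255 per the embodiment.

    ASSUMPTION: candidate depth d(q) = depth(q) + TS_{P,q}; the actual
    formula is an image in the patent and is not recoverable here.
    """
    h, w = len(temporal), len(temporal[0])
    depth = [[init_depth] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # neighbors A(x-1, y), B(x-1, y-1), C(x, y-1) that are in-bounds
            candidates = []
            for qx, qy in ((x - 1, y), (x - 1, y - 1), (x, y - 1)):
                if 0 <= qx < w and 0 <= qy < h:
                    ts = (0.5 * abs(temporal[y][x] - temporal[qy][qx])
                          + 0.5 * abs(spatial[y][x] - spatial[qy][qx]))
                    candidates.append(depth[qy][qx] + ts)
            # step (D5): keep the minimum candidate unless it exceeds the range
            if candidates and min(candidates) <= threshold:
                depth[y][x] = min(candidates)
    return depth
```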

Claims (5)

1. A method for generating a depth map sequence from a single-view color image sequence, characterized by comprising the following steps:
(A1) inputting the color image sequence;
(A2) reading one frame of the color image sequence in temporal order and converting that color frame to a grayscale image;
(A3) performing joint temporal-spatial depth calculation on the grayscale image according to a Zig-Zag scanning mode to obtain its depth map;
(A4) repeating steps A2 to A3 until all color images in the sequence have been processed;
(A5) outputting the resulting depth map sequence.
2. The method for generating a depth map sequence from a single-view color image sequence according to claim 1, characterized in that the grayscale image in step (A2) is obtained in one of the following ways:
(B1) taking any single color component of the color image;
or (B2) taking a weighted sum of the color components of the color image;
or (B3) converting to another color space and taking any single component;
or (B4) converting to another color space and taking a weighted sum of the new components.
3. The method for generating a depth map sequence from a single-view color image sequence according to claim 1, characterized in that the Zig-Zag scanning mode of the grayscale image in step (A3) is one of the following:
(C1) scanning the image from top to bottom, one row of pixels at a time, first from left to right, then from right to left, and so on alternately, until all pixels of the image have been scanned; or (C2) scanning the image from top to bottom, one row of pixels at a time, first from right to left, then from left to right, and so on alternately, until all pixels of the image have been scanned.
4. The method for generating a depth map sequence from a single-view color image sequence according to claim 1, characterized in that the joint temporal-spatial depth calculation in step (A3) comprises the following steps:
(D1) using a temporal matching technique to match the current image against the adjacent image and obtain the temporal feature of each pixel;
(D2) obtaining the spatial feature of each pixel of the current image;
(D3) scanning the grayscale image in the Zig-Zag scanning mode; when the scan direction is left to right, comparing the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels P(x-1, y), P(x-1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3; or (D4) scanning the grayscale image in the Zig-Zag scanning mode; when the scan direction is right to left, comparing the time-space feature differences between the current pixel P(x, y) and its three neighboring pixels P(x+1, y), P(x+1, y-1), and P(x, y-1), and computing the corresponding candidate depth values d1, d2, d3;
(D5) taking the minimum of d1, d2, d3; if this minimum exceeds the threshold range, the depth value of pixel P(x, y) is the preset initial value; otherwise the depth value of pixel P(x, y) is the minimum of d1, d2, d3;
(D6) repeating (D2) to (D5) until the depth values of all pixels have been computed, yielding the depth map sequence.
5. The method for generating a depth map sequence from a single-view color image sequence according to claim 4, characterized in that the temporal matching technique is optical flow.
CN201410092424.5A 2014-03-13 2014-03-13 Method for generating depth map sequence through single-visual-angle color image sequence Pending CN103945211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410092424.5A CN103945211A (en) 2014-03-13 2014-03-13 Method for generating depth map sequence through single-visual-angle color image sequence


Publications (1)

Publication Number Publication Date
CN103945211A true CN103945211A (en) 2014-07-23

Family

ID=51192659

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410092424.5A Pending CN103945211A (en) 2014-03-13 2014-03-13 Method for generating depth map sequence through single-visual-angle color image sequence

Country Status (1)

Country Link
CN (1) CN103945211A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101640809A (en) * 2009-08-17 2010-02-03 浙江大学 Depth extraction method of merging motion information and geometric information
CN101765022A (en) * 2010-01-22 2010-06-30 浙江大学 Depth representing method based on light stream and image segmentation
CN101945295A (en) * 2009-07-06 2011-01-12 三星电子株式会社 Method and device for generating depth maps
CN102026012A (en) * 2010-11-26 2011-04-20 清华大学 Generation method and device of depth map through three-dimensional conversion to planar video
CN102223554A (en) * 2011-06-09 2011-10-19 清华大学 Depth image sequence generating method and device of plane image sequence
CN102263979A (en) * 2011-08-05 2011-11-30 清华大学 Depth map generation method and device for plane video three-dimensional conversion
CN102368826A (en) * 2011-11-07 2012-03-07 天津大学 Real time adaptive generation method from double-viewpoint video to multi-viewpoint video
CN102769746A (en) * 2012-06-27 2012-11-07 宁波大学 Method for processing multi-viewpoint depth video


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111058829A (en) * 2019-12-05 2020-04-24 中国矿业大学 Rock stratum analysis method based on image processing
CN111058829B (en) * 2019-12-05 2021-06-25 中国矿业大学 Rock stratum analysis method based on image processing


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140723