CN105635741A - Quick depth generating method for non-key frames
- Publication number: CN105635741A
- Application number: CN201410593691.0A
- Authority: CN (China)
- Prior art keywords: point, perform, SAD value, frame, template
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention relates to the field of video conversion and provides a fast depth-generation method for non-key frames. Starting from the key frames of an ordinary 2D video and their corresponding depth information, the method applies a successive-elimination diamond search, which reduces the number of traversals required for depth generation, increases the speed of 2D-to-3D conversion, and preserves a strong stereoscopic effect in the converted video.
Description
Technical field
The invention belongs to the field of video conversion, and in particular relates to a fast depth-generation method for non-key frames.
Background art
At present, with the development of stereoscopic displays, 3D projection technology, and 3D cinemas, and especially with the launch of China's first stereoscopic TV channel at the beginning of 2012, demand for stereoscopic content (programming) has grown. In recent years, attempts have been made both at home and abroad in stereoscopic films, stereoscopic live broadcasts, stereoscopic TV dramas, and the like.
However, almost all current high-quality stereoscopic content, especially stereoscopic films, originates directly or indirectly from abroad. Although some domestic enterprises have participated in 3D content production, they mostly rely on manual methods, producing depth information by hand-drawn image matting, and profit mainly from China's low-cost labor. In fact, China already has abundant 2D video content; solving the efficiency and quality problems of converting 2D video into 3D video would significantly advance China's production technology and output of 3D video content.
Summary of the invention
The purpose of the embodiments of the present invention is to provide a fast depth-generation method for non-key frames that, starting from the key frames of an ordinary 2D video and their corresponding depth information, applies a successive-elimination diamond search to reduce the number of traversals required for depth generation, thereby increasing the speed of 3D conversion while preserving the stereoscopic effect of the converted video.
The embodiments of the present invention are realized as a fast depth-generation method for non-key frames, the method comprising the following steps:
S1, read the depth map of a key frame and the planar video sequence corresponding to the key frame;
S2, obtain the luminance components of the two-dimensional images of the current frame and the next frame respectively, and divide the luminance component of the next frame into macroblocks of a preset pixel size;
S3, when the next frame contains a macroblock whose motion vector has not yet been obtained, scan the current frame within a preset range according to the large diamond search pattern: place the center of the LDSP at the origin of the search window, compute the SAD value of the center point, and then apply the SEA test to the remaining 8 points; if no point passes, perform S5; if at least one point passes, compute the SAD value of each passing point and take the point with the minimum SAD value as the center of the LDSP; if that point is the current center, perform S5, otherwise perform S4;
S4, form a new LDSP centered on the point with the minimum SAD value, and apply the SEA test to the points around the new center whose SAD values have not yet been computed; if no point passes, perform S5; if at least one point passes, compute the SAD value of each passing point; when the point with the minimum SAD value is the center of the LDSP, perform S5, otherwise repeat S4;
S5, with the point with the minimum SAD value as the center, switch to the small diamond search pattern and apply the SEA test to the 4 surrounding points; if no point passes, perform S7; if at least one point passes, compute the SAD value of each passing point; when the point with the minimum SAD value is the center of the SDSP, perform S7, otherwise perform S6;
S6, form a new SDSP centered on the point with the minimum SAD value, and apply the SEA test to the points whose SAD values have not yet been computed; if no point passes, perform S7; if at least one point passes, compute the SAD value of each passing point; when the point with the minimum SAD value is the center of the SDSP, perform S7, otherwise repeat S6;
S7, taking the center of the SDSP as the position of the matching block, compute and save the motion vector between the current macroblock and the matching block, and judge whether motion vectors have been obtained for all macroblocks of the next frame's two-dimensional image; if so, perform S8, otherwise perform S3;
S8, according to the motion vectors and the depth map corresponding to the current frame, generate the depth of each macroblock of the next frame's image and thus the depth map of the next frame's image; judge whether the depth map of the frame after next needs to be generated; if so, perform S2, otherwise end.
Through this fast depth-generation method for non-key frames in 2D-to-3D video conversion, the embodiments of the present invention start from the key frames of an ordinary 2D video and their corresponding depth information and apply a successive-elimination diamond search, reducing the number of traversals required for depth generation, generating the depth information of non-key frames quickly and accurately, and increasing the speed of 3D conversion while preserving the stereoscopic effect.
Brief description of the drawings
Fig. 1 is a flowchart of the fast depth-generation method for non-key frames provided by the first embodiment of the present invention.
Detailed description of the invention
To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it.
The implementation of the present invention is described in detail below with reference to a specific embodiment:
Embodiment one:
Fig. 1 illustrates the flow of the fast depth-generation method for non-key frames provided by the first embodiment of the present invention; the details are as follows.
In step S1, the depth map of a key frame and the planar video sequence corresponding to the key frame are read.
In a specific implementation, when non-key-frame depth must be generated for 2D-to-3D conversion, the 2D video is first preprocessed to obtain its video frame sequence. The video may then be manually segmented into subsequences, and the frame images are acquired by automatic decoding. Key frames and non-key frames can be identified manually or by an automated computer process. After the key frames are identified, the depth map and depth information of each key frame are read, together with the video frame sequence of the 2D video; the depth values of the depth information range from 0 to 255.
In step S2, the luminance components of the two-dimensional images of the current frame and the next frame are obtained respectively, and the luminance component of the next frame is divided into macroblocks of a preset pixel size.
In this embodiment, after acquiring the depth map of the key frame and the corresponding video sequence, the luminance components Yn and Yn+1 of the two-dimensional images of the current key frame N and the next non-key frame N+1 are obtained, and the luminance component Yn+1 of the next frame is divided into macroblocks of a preset size, preferably 16×16 pixels. A non-key frame is a video frame containing motion-vector information.
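The partition in step S2 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the helper name and the edge-replication padding for planes whose dimensions are not multiples of 16 are assumptions.

```python
import numpy as np

def split_into_macroblocks(luma, block=16):
    """Split a luminance plane into non-overlapping block x block macroblocks.

    Returns a list of ((row, col), block_array) pairs. Planes whose size is
    not a multiple of `block` are padded by edge replication (an assumption;
    the patent does not specify border handling).
    """
    h, w = luma.shape
    pad_h = (block - h % block) % block
    pad_w = (block - w % block) % block
    padded = np.pad(luma, ((0, pad_h), (0, pad_w)), mode="edge")
    blocks = []
    for r in range(0, padded.shape[0], block):
        for c in range(0, padded.shape[1], block):
            blocks.append(((r, c), padded[r:r + block, c:c + block]))
    return blocks
```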
In step S3, when the next frame contains a macroblock whose motion vector has not yet been obtained, the current frame is scanned within a preset range according to the large diamond search pattern: the center of the LDSP is placed at the origin of the search window, the SAD value of the center point is computed, and the SEA test is then applied to the remaining 8 points. If no point passes, S5 is performed; if at least one point passes, the SAD value of each passing point is computed and the point with the minimum SAD value is taken as the center of the LDSP; if that point is the current center, S5 is performed, otherwise S4 is performed. The SEA test is:

P − MINSAD(x, y) ≤ M(i, j) ≤ P + MINSAD(x, y);

where P is the pixel sum of the current macroblock over the template, M(i, j) is the pixel sum of the candidate block at position (i, j), and MINSAD(x, y) is the minimum SAD value found so far; fk(m, n) denotes the gray value at point (m, n) of the k-th non-key frame, and M, N are the dimensions of the currently selected template, M being the template width and N the template height.
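The two quantities in this test can be sketched as follows (function names are illustrative assumptions). The point of the SEA bound is that SAD(i, j) ≥ |P − M(i, j)|, so a candidate whose pixel sum falls outside the band P ± MINSAD cannot beat the current best match and its full SAD need not be computed:

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of Absolute Differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

def sea_passes(p, m, min_sad):
    """SEA test: a candidate with pixel sum `m` can only improve on the
    current minimum SAD if `m` lies within `min_sad` of the current
    block's pixel sum `p` (since SAD >= |p - m|)."""
    return p - min_sad <= m <= p + min_sad
```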
In this embodiment, after the luminance component Yn+1 of the next frame has been divided into macroblocks, whenever a macroblock has no motion vector yet, the current frame is scanned within a predetermined range according to the LDSP (Large Diamond Search Pattern): the center of the large diamond pattern is placed at the origin of the search window, the SAD (Sum of Absolute Differences) value of the center point is computed first, and the remaining 8 points are then subjected to the SEA (Successive Elimination Algorithm) test. If no point passes, S5 is performed; if at least one point passes, the SAD value of each passing point is computed and the point with the minimum SAD value is taken as the center of the LDSP; if that point is the current center, S5 is performed, otherwise S4 is performed.
In step S4, a new LDSP is formed centered on the point with the minimum SAD value, and the SEA test is applied to the points around the new center whose SAD values have not yet been computed. If no point passes, S5 is performed; if at least one point passes, the SAD value of each passing point is computed; when the point with the minimum SAD value is the center of the LDSP, S5 is performed, otherwise S4 is repeated.

In this embodiment, the search therefore keeps re-centering the large diamond pattern on the current best point and evaluates only the newly exposed points, until the minimum-SAD point coincides with the pattern center.
In step S5, with the point with the minimum SAD value as the center, the small diamond search pattern is used and the SEA test is applied to the 4 surrounding points. If no point passes, S7 is performed; if at least one point passes, the SAD value of each passing point is computed; when the point with the minimum SAD value is the center of the SDSP, S7 is performed, otherwise S6 is performed.

In this embodiment, the point with the minimum SAD value serves as the center of the SDSP (Small Diamond Search Pattern), and the 4 points around the pattern center are screened with the SEA test before any of their SAD values are computed.
In step S6, a new SDSP is formed centered on the point with the minimum SAD value, and the SEA test is applied to the points whose SAD values have not yet been computed. If no point passes, S7 is performed; if at least one point passes, the SAD value of each passing point is computed; when the point with the minimum SAD value is the center of the SDSP, S7 is performed, otherwise S6 is repeated.

In this embodiment, whenever the minimum-SAD point is not the center of the SDSP, the small diamond pattern is re-centered on it and the newly exposed points are screened and evaluated, until the minimum-SAD point coincides with the pattern center.
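Taken together, steps S3 to S6 describe a diamond search (LDSP first, then SDSP) in which every candidate point is screened by the SEA bound before its SAD is computed. The following compact sketch is one way to implement that loop; the pattern offsets, names, and border handling are illustrative assumptions, and the patent's separate bookkeeping of already-evaluated points is folded into the SEA screen:

```python
import numpy as np

# Offsets of the Large and Small Diamond Search Patterns (center first).
LDSP = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2), (-1, -1), (-1, 1), (1, -1), (1, 1)]
SDSP = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

def block_at(frame, r, c, size):
    """Return the size x size block at (r, c), or None if it leaves the frame."""
    h, w = frame.shape
    if 0 <= r and r + size <= h and 0 <= c and c + size <= w:
        return frame[r:r + size, c:c + size]
    return None

def diamond_search(cur_block, ref, r0, c0, size=16):
    """Diamond search (LDSP, then SDSP) with SEA pruning, following S3-S6.

    Returns the motion vector (dr, dc) of the best match found in `ref` for
    `cur_block`, whose top-left corner in the current frame is (r0, c0).
    Assumes the starting block lies inside `ref`."""
    cur = cur_block.astype(np.int64)
    p = int(cur.sum())                        # pixel sum P of the current block

    def sad_at(r, c):
        cand = block_at(ref, r, c, size)
        return int(np.abs(cur - cand.astype(np.int64)).sum())

    best_r, best_c = r0, c0
    best = sad_at(r0, c0)
    for pattern in (LDSP, SDSP):
        while True:
            cr, cc = best_r, best_c
            for dr, dc in pattern[1:]:        # screen the surrounding points
                cand = block_at(ref, cr + dr, cc + dc, size)
                if cand is None:
                    continue
                m = int(cand.sum())
                if not (p - best <= m <= p + best):
                    continue                  # SEA: cannot beat current best
                s = sad_at(cr + dr, cc + dc)
                if s < best:
                    best, best_r, best_c = s, cr + dr, cc + dc
            if (best_r, best_c) == (cr, cc):  # minimum is at the center: stop
                break
    return best_r - r0, best_c - c0
```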
In step S7, with the center of the SDSP as the position of the matching block, the motion vector between the current macroblock and the matching block is computed and saved, and it is judged whether motion vectors have been obtained for all macroblocks of the next frame's two-dimensional image; if so, S8 is performed, otherwise S3 is performed.

In this embodiment, once the point with the minimum SAD value is the center of the SDSP, that center is taken as the matching-block position, the motion vector of the current macroblock is computed and saved, and the procedure checks whether any macroblock of the next frame's two-dimensional image still lacks a motion vector.
In step S8, according to the motion vectors and the depth map corresponding to the current frame, the depth of each macroblock of the next frame's image is generated, and from these the depth map of the next frame's image is assembled; it is then judged whether the depth map of the frame after next needs to be generated, in which case S2 is performed, otherwise the procedure ends. The next frame and the frame after next are video frames lying between key frames, i.e. non-key frames.

In this embodiment, the depth map of each non-key frame is thus produced macroblock by macroblock from the motion vectors and the depth map of the current frame, and the procedure loops back to S2 for as long as further non-key frames remain.
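Step S8 can be sketched as a block-wise copy from the current frame's depth map, displaced by each macroblock's motion vector. The function name and the clamping of out-of-range source blocks are assumptions, not taken from the patent:

```python
import numpy as np

def propagate_depth(cur_depth, motion_vectors, block=16):
    """Generate the next frame's depth map (step S8): each macroblock of the
    next frame copies the depth block that its motion vector points at in
    the current frame's depth map.

    `motion_vectors` maps macroblock top-left corners (r, c) to (dr, dc)."""
    h, w = cur_depth.shape
    next_depth = np.zeros_like(cur_depth)
    for (r, c), (dr, dc) in motion_vectors.items():
        sr = min(max(r + dr, 0), h - block)  # clamp the source block so it
        sc = min(max(c + dc, 0), w - block)  # stays inside the depth map
        next_depth[r:r + block, c:c + block] = cur_depth[sr:sr + block, sc:sc + block]
    return next_depth
```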
After the depth maps of the non-key frames have been obtained, the original video frames and the frames whose depth maps have been generated can be automatically synthesized into stereoscopic (3D) video frames; finally the stereoscopic video frames are combined into a stereoscopic video sequence, completing the conversion of the 2D video into a 3D video.
In this way, the embodiments of the present invention provide a fast depth-generation method for non-key frames. When converting a 2D video into a 3D video, by analyzing the motion-vector information between frames of the 2D video and applying the processing steps described above, the depth information of non-key frames can be generated automatically, quickly, and accurately, improving the efficiency of 2D-to-3D conversion. When traversing candidate points during motion-vector estimation, the method effectively accelerates the convergence of the algorithm and thus generates depth information faster. The method can be applied effectively in the advertising, stereoscopic film, and television industries to generate stereoscopic video quickly; that is, it increases the speed of 3D conversion while preserving the stereoscopic effect of the converted video.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be completed by hardware instructed by a program, and that the program may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disc.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its scope of protection.
Claims (7)
1. A fast depth-generation method for non-key frames, characterized in that the method comprises the following steps:
S1, read the depth map of a key frame and the planar video sequence corresponding to the key frame;
S2, obtain the luminance components of the two-dimensional images of the current frame and the next frame respectively, and divide the luminance component of the next frame into macroblocks of a preset pixel size;
S3, when the next frame contains a macroblock whose motion vector has not yet been obtained, scan the current frame within a preset range according to the large diamond search pattern: place the center of the LDSP at the origin of the search window, compute the SAD value of the center point, and then apply the SEA test to the remaining 8 points; if no point passes, perform S5; if at least one point passes, compute the SAD value of each passing point and take the point with the minimum SAD value as the center of the LDSP; if that point is the current center, perform S5, otherwise perform S4;
S4, form a new LDSP centered on the point with the minimum SAD value, and apply the SEA test to the points around the new center whose SAD values have not yet been computed; if no point passes, perform S5; if at least one point passes, compute the SAD value of each passing point; when the point with the minimum SAD value is the center of the LDSP, perform S5, otherwise repeat S4;
S5, with the point with the minimum SAD value as the center, switch to the small diamond search pattern and apply the SEA test to the 4 surrounding points; if no point passes, perform S7; if at least one point passes, compute the SAD value of each passing point; when the point with the minimum SAD value is the center of the SDSP, perform S7, otherwise perform S6;
S6, form a new SDSP centered on the point with the minimum SAD value, and apply the SEA test to the points whose SAD values have not yet been computed; if no point passes, perform S7; if at least one point passes, compute the SAD value of each passing point; when the point with the minimum SAD value is the center of the SDSP, perform S7, otherwise repeat S6;
S7, taking the center of the SDSP as the position of the matching block, compute and save the motion vector between the current macroblock and the matching block, and judge whether motion vectors have been obtained for all macroblocks of the next frame's two-dimensional image; if so, perform S8, otherwise perform S3;
S8, according to the motion vectors and the depth map corresponding to the current frame, generate the depth of each macroblock of the next frame's image and thus the depth map of the next frame's image; judge whether the depth map of the frame after next needs to be generated; if so, perform S2, otherwise end.
2. the method for claim 1, it is characterised in that described SEA algorithm is:
P-MINSAD (x, y)��M (i, j)��P+MINSAD (x, y);
Wherein: fk (m, n) for kth non-key frame, (m, n) puts gray value;
Wherein M, N are the size of the current template selected, and M is template width, and N is form height.
3. the method for claim 1, it is characterised in that described method also includes before the depth map reading key frame and the planar video sequence step that described key frame is corresponding:
Obtain the key frame of 2D video and the depth information of key frame.
4. the method for claim 1, it is characterised in that described next frame and again next frame are non-key frame.
5. the method for claim 1, it is characterised in that described non-key frame is the frame of video between key frame.
6. the method for claim 1, it is characterised in that described non-key frame is the frame of video including motion vector information.
7. the method for claim 1, it is characterised in that the depth value of described depth information range for 0 to 255.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201410593691.0A | 2014-10-29 | 2014-10-29 | Quick depth generating method for non-key frames |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN105635741A | 2016-06-01 |
Family ID: 56050171
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201410593691.0A (pending) | Quick depth generating method for non-key frames (CN105635741A) | 2014-10-29 | 2014-10-29 |
Country Status (1)

| Country | Link |
|---|---|
| CN | CN105635741A (en) |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106447718A | 2016-08-31 | 2017-02-22 | 天津大学 (Tianjin University) | 2D-to-3D depth estimation method |
Citations (4)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2006140618A | 2004-11-10 | 2006-06-01 | Victor Co. of Japan Ltd. | Three-dimensional video information recording device and program |
| EP1981289A2 | 2005-07-02 | 2008-10-15 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding/decoding video data to implement local three-dimensional video |
| CN101400001A | 2008-11-03 | 2009-04-01 | 清华大学 (Tsinghua University) | Generation method and system for video frame depth chart |
| CN101635859A | 2009-08-21 | 2010-01-27 | 清华大学 (Tsinghua University) | Method and device for converting plane video to three-dimensional video |
- 2014-10-29: application CN201410593691.0A filed in China (CN); published as CN105635741A; status Pending
Non-Patent Citations (1)

| Title |
|---|
| 王帅 (Wang Shuai): "基于运动估计的深度信息生成技术研究" (Research on depth-information generation technology based on motion estimation), 《信息科技辑》 (Information Science and Technology Series) |
Cited By (2)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106447718A | 2016-08-31 | 2017-02-22 | 天津大学 (Tianjin University) | 2D-to-3D depth estimation method |
| CN106447718B | 2016-08-31 | 2019-06-04 | 天津大学 (Tianjin University) | 2D-to-3D depth estimation method |
Similar Documents

| Publication | Title |
|---|---|
| CN104662910B | Method and apparatus for virtual depth values in 3D video coding |
| CN111325693B | Large-scale panoramic viewpoint synthesis method based on a single-viewpoint RGB-D image |
| CN103024421B | Method for synthesizing virtual viewpoints in free-viewpoint television |
| TWI469088B | Depth-map generation module for foreground objects and method thereof |
| CN102523464A | Depth-image estimation method for binocular stereoscopic video |
| US20170064279A1 | Multi-view 3D video method and system |
| CN102881018B | Method for generating depth maps of images |
| CN102724531B | Method and system for converting two-dimensional video into three-dimensional video |
| CN104378619B | Fast and efficient hole-filling algorithm based on foreground/background gradient transition |
| CN111047709A | Binocular-vision naked-eye 3D image generation method |
| CN106028020B | Virtual-view image hole-filling method based on multi-directional prediction |
| CN104768019A | Neighboring disparity vector derivation method for multi-texture multi-depth video |
| CN104270624A | Region-partitioned 3D video mapping method |
| CN104333758B | Depth-map prediction method, pixel detection method, and related apparatus |
| CN107592538B | Method for reducing the complexity of stereoscopic-video depth-map encoding |
| CN105635741A | Quick depth generating method for non-key frames |
| US20230306563A1 | Image filling method and apparatus, decoding method and apparatus, electronic device, and medium |
| CN105809717B | Depth estimation method, system, and electronic device |
| CN103747248B | Detection and processing method for depth/color video boundary inconsistencies |
| CN107135393A | Light-field image compression method |
| CN111105484B | Paperless 2D serial-frame optimization method |
| CN103813149B | Image and video reconstruction method for a coding/decoding system |
| CN102685530B | Intra-frame depth-image prediction based on half-pixel-accuracy edges and image inpainting |
| CN105681776A | Parallax-image extraction method and device |
| CN105187813B | Temporally consistent depth estimation method for video based on adaptive weights |
Legal Events

| Code | Title | Description |
|---|---|---|
| C06 / PB01 | Publication | |
| C10 / SE01 | Entry into substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2016-06-01 |