CN104639932A - Free stereoscopic display content generating method based on self-adaptive blocking - Google Patents

Free stereoscopic display content generating method based on self-adaptive blocking

Info

Publication number
CN104639932A
Authority
CN
China
Prior art keywords: view, virtual, block, display content, generating method
Prior art date
Legal status: Pending
Application number
CN201410765445.9A
Other languages
Chinese (zh)
Inventor
陶秋琰
王梁昊
李东晓
张明
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN201410765445.9A priority Critical patent/CN104639932A/en
Publication of CN104639932A publication Critical patent/CN104639932A/en


Abstract

The invention discloses a method for generating autostereoscopic display content based on adaptive blocking. The method comprises: (1) acquiring a left view and a right view; (2) extracting corresponding feature points of the left and right views; (3) virtually blocking the left and right views, computing an attention map for each, and adaptively blocking the views according to the attention values; (4) quantizing constraints such as the structural and disparity constraints of the images into energy terms, and superposing them with weights into a total energy term; (5) solving the mapping function between the virtual viewpoint image and the original viewpoint by minimizing the total energy term; (6) performing inverse mapping according to the inter-viewpoint mapping function to generate the required virtual viewpoints. By adaptively blocking the views according to attention, different objects are assigned to different blocks, so the mapping relations of different objects within the same block do not interfere with one another, and the virtual viewpoint image obtained by nonlinear image warping is more natural in local detail and has fewer artifacts.

Description

An autostereoscopic display content generation method based on adaptive blocking
Technical field
The invention belongs to the field of multi-view autostereoscopic display technology, and specifically relates to an autostereoscopic display content generation method based on adaptive blocking.
Background technology
Because multi-view autostereoscopic display requires images of multiple viewpoints at the display end, the traditional approach is to shoot with multiple identical cameras (usually 8 to 16) working in concert, thereby capturing multi-view video. In film-source acquisition, the collaboration of multiple cameras increases the difficulty of camera synchronization and subsequent preprocessing; in addition, the images of multiple viewpoints inevitably increase the data volume proportionally, adding to the burden of storage and transmission. Considering the large number of existing traditional binocular 3D film sources, viewpoint rendering is the most suitable solution: it uses existing binocular 3D video to render the required virtual viewpoint images for playback on a multi-view autostereoscopic display system.
The mainstream multi-view rendering algorithm in academia is image-based virtual viewpoint rendering, in particular depth-image-based rendering (DIBR). DIBR requires the depth information of a viewpoint as a precondition, which is generally obtained by stereo matching. DIBR is in essence a double mapping of pixels through space between viewpoints: first, guided by the depth map and the camera parameters of the original viewpoint, the pixels of the original viewpoint are projected into true three-dimensional space; then, using the camera parameters of the virtual viewpoint, the points in three-dimensional space are projected back onto the virtual camera plane to obtain the virtual viewpoint image. Because of occlusions between viewpoints, holes appear in the generated virtual viewpoint image, and additional hole filling is needed. The technical difficulties of this method are obtaining an accurate depth map by stereo matching, the subsequent hole filling, and the absence of camera parameters in practical applications.
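Under a rectified parallel-camera model, the two spatial projections that DIBR performs collapse to a horizontal shift by a fraction of the pixel disparity. The following minimal sketch illustrates this; all names and numbers are illustrative, not from the patent:

```python
def dibr_shift(x_left, depth, focal, baseline, alpha):
    """Map a pixel column from the original (left) view to a virtual view.

    Under a rectified parallel-camera model, the two 3D projections of DIBR
    reduce to a horizontal shift by a fraction of the disparity.
    All parameter names here are illustrative assumptions.
    """
    disparity = focal * baseline / depth   # pixel disparity between L and R views
    return x_left - alpha * disparity      # virtual camera at fraction alpha

# A point at depth 1000 with focal length 500 px and baseline 10 has disparity 5;
# the midpoint virtual view (alpha = 0.5) shifts it by 2.5 px.
x_v = dibr_shift(x_left=100.0, depth=1000.0, focal=500.0, baseline=10.0, alpha=0.5)
```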
Viewpoint rendering based on sparse features avoids the two difficult problems of DIBR, depth acquisition and hole filling, by using the attention map of the image and sparse reliable feature points to compute the mapping relations between viewpoints. However, this method still has a problem: because the pixels within the same block share one linear mapping, an oversized block may contain objects of two or more depth levels at the same time, so the mapping of one object in the block can be affected by another object, causing obvious distortion between two adjacent blocks. Simply reducing the block size, however, sharply increases the time needed to solve the energy equation and cannot meet higher real-time requirements.
Summary of the invention
In view of the above technical problems in the prior art, the invention provides an autostereoscopic display content generation method based on adaptive blocking, which prevents the mapping relations of different objects within the same block from interfering with one another, so that the virtual viewpoint image obtained by nonlinear image warping is more natural in local detail and has fewer artifacts.
An autostereoscopic display content generation method based on adaptive blocking comprises the following steps:
(1) acquiring the current frame image, comprising left and right views, of a stereoscopic video;
(2) judging whether the current frame image is continuous in the time domain by computing the similarity between the current frame image and the previous frame image;
(3) extracting corresponding feature points of the left and right views of the current frame;
(4) virtually blocking the left view or the right view, computing the attention of each unit block, and merging the unit blocks into several adaptive blocks according to said attention;
(5) quantizing the structural constraint, disparity constraint and temporal constraint of the left view or the right view into energy terms, and superposing them with weights into a total energy term E;
(6) minimizing said total energy term E to obtain the actual abscissa, in the virtual view, of each feature point and each adaptive-block vertex of the left view or the right view;
(7) according to the actual abscissas of the feature points and the adaptive-block vertices in the virtual view, geometrically computing the actual abscissa, in the virtual view, of each unit-block vertex within each adaptive block, then obtaining the actual abscissa, in the virtual view, of each pixel within each unit block; and finally generating the virtual view by pixel-coordinate mapping.
In said step (2), when the camera is fixed and objects in the scene move, the method of computing the peak signal-to-noise ratio (PSNR) of the three-channel image is adopted to judge whether the current frame image is continuous in the time domain; when the camera moves relative to the whole scene, the method of computing the structural similarity (SSIM) is adopted.
In said step (3), corresponding feature points of the left and right views of the current frame are extracted by a method combining sparse feature points and dense feature points; the sparse feature points are detected, described and matched by the ORB (Oriented FAST and Rotated BRIEF) algorithm, and the dense feature points are found by uniformly sampled stereo matching, with a threshold set on the matching confidence.
In said step (4), the attention values of two adjacent unit blocks are compared, and if their difference is smaller than a preset threshold, the two unit blocks are merged.
The expression of said total energy term E is as follows:

E = λ_D·E_D + λ_H·E_H + λ_V·E_V + λ_T·E_T

wherein E_D, E_H, E_V and E_T are the parallax, horizontal, vertical and temporal energy terms respectively, and λ_D, λ_H, λ_V and λ_T are their corresponding weighting factors.
The expression of said parallax energy term E_D is as follows:

E_D = Σ S_feature·(x̃_feature − x_feature)²

wherein x̃_feature is the actual abscissa, in the virtual view, of a feature point of the left view or the right view, x_feature is the theoretical abscissa of that feature point in the virtual view, S_feature is the attention of the unit block to which the feature point belongs, and the sum runs over all feature points.
The expression of said horizontal energy term E_H is as follows:

E_H = Σ S_upperleft·(x̃_upperright − x̃_upperleft − w)²

wherein x̃_upperright is the actual abscissa, in the virtual view, of the upper-right vertex of an adaptive block of the left view or the right view, x̃_upperleft is the actual abscissa of the upper-left vertex of that adaptive block in the virtual view, w is the length of the upper edge of that adaptive block, and S_upperleft is the attention of the unit block to which the upper-left vertex belongs.
The expression of said vertical energy term E_V is as follows:

E_V = Σ S_lowerleft·(x̃_lowerleft − x̃_upperleft)²

wherein x̃_lowerleft is the actual abscissa, in the virtual view, of the lower-left vertex of an adaptive block of the left view or the right view, x̃_upperleft is the actual abscissa of the upper-left vertex of that adaptive block in the virtual view, and S_lowerleft is the attention of the unit block to which the lower-left vertex belongs.
The expression of said temporal energy term E_T is as follows:

E_T = Σ S·(x̃ − x̃′)²

wherein x̃ is the actual abscissa, in the virtual view, of an adaptive-block vertex of the left view or the right view, x̃′ is the actual abscissa of that vertex in the virtual view of the previous frame, and S is the attention of the unit block to which the vertex belongs.
The invention proposes a complete framework of feature-point-based viewpoint rendering for stereoscopic video. By adaptively blocking the image according to attention, different objects are assigned to different blocks, so the mapping relations of different objects within the same block do not interfere with one another, and the virtual viewpoint image obtained by nonlinear image warping is more natural in local detail and has fewer artifacts.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the invention.
Fig. 2 is a diagram of the positional relation between the virtual viewpoint and the original viewpoint.
Fig. 3 is a diagram of the relation between virtual vertices and true vertices.
Fig. 4(a) shows the mapping relation of pixels in corresponding blocks of the original viewpoints.
Fig. 4(b) shows the mapping relation of pixels in corresponding blocks of the virtual viewpoints.
Fig. 5(a) is a diagram of fixed blocking.
Fig. 5(b) is a diagram of adaptive blocking.
Fig. 6(a) is an enlarged view of the HHI sign region in the virtual view corresponding to Fig. 5(a).
Fig. 6(b) is an enlarged view of the HHI sign region in the virtual view corresponding to Fig. 5(b).
In order to describe the present invention more specifically, the technical scheme of the invention is described in detail below with reference to the drawings and specific embodiments.
As shown in Fig. 1, the autostereoscopic display content generation method based on adaptive blocking of the invention comprises the following steps.
Step 1: acquire the current frame image of the stereoscopic video, and from it obtain the left-view and right-view images.
Step 2: compute the similarity between the current frame image and the previous frame image to judge whether the frame is continuous in the time domain. When the shooting camera is fixed and objects in the scene move, the peak signal-to-noise ratio (PSNR) of the three-channel image is used; when the camera moves relative to the whole scene, the structural similarity (SSIM) is used.
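The two similarity measures named in this step can be sketched as follows; this is a minimal illustration assuming 8-bit frames, and the single-window global SSIM and the 30 dB continuity threshold are simplifications, not the patent's exact procedure:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    # Peak signal-to-noise ratio between two frames (averaged over all channels).
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def global_ssim(a, b, peak=255.0):
    # Simplified single-window SSIM (no sliding window), for illustration only.
    a = a.astype(np.float64); b = b.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32, 3))
curr = prev.copy()                       # identical frames: maximal similarity
continuous = psnr(prev, curr) > 30.0     # the 30 dB threshold is illustrative
```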
Step 3: extract corresponding feature points of the left and right images by a method combining sparse feature points and uniformly sampled dense feature points. The sparse feature points are detected, described and matched by the ORB algorithm; the dense feature points are found by uniformly sampled stereo matching, with a threshold set on the matching confidence.
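The uniformly sampled dense matching with a confidence threshold might be sketched as below; the SAD cost, the best/second-best ratio test, and all parameter values are illustrative assumptions, not the patent's exact procedure:

```python
import numpy as np

def match_uniform_points(left, right, step=8, win=3, max_disp=16, ratio=0.8):
    """Find dense correspondences on a uniform grid by 1-D block matching.

    Matches with an ambiguous cost ratio (best / second-best SAD not clearly
    below `ratio`) are discarded, mirroring the confidence threshold in the
    text. Parameter names and the SAD cost are illustrative choices.
    """
    h, w = left.shape
    points = []
    for y in range(win, h - win, step):
        for x in range(win + max_disp, w - win, step):
            patch = left[y - win:y + win + 1, x - win:x + win + 1]
            costs = [np.abs(patch - right[y - win:y + win + 1,
                                          x - d - win:x - d + win + 1]).sum()
                     for d in range(max_disp)]
            order = np.argsort(costs)
            best, second = costs[order[0]], costs[order[1]]
            if second > 0 and best / second < ratio:   # keep confident matches only
                points.append((x, y, int(order[0])))   # (x, y, disparity)
    return points

# Synthetic rectified pair: the right view is the left view shifted by 4 px.
rng = np.random.default_rng(1)
base = rng.integers(0, 256, (40, 70)).astype(np.float64)
left, right = base[:, :60], base[:, 4:64]   # true disparity is 4 everywhere
pts = match_uniform_points(left, right)
```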
Step 4: virtually block the image, compute the attention of the left and right images respectively, and perform adaptive blocking according to the attention. Attention is extracted in a center-surround manner across different image scales: feature maps are computed for the two features of image brightness and color, and fused into one attention map. Adaptive blocking uses a decision method based on block attention: if the difference between the attention of two adjacent virtual blocks is smaller than a certain threshold, they are judged to be similar; according to the direction of comparison, the similarity is further divided into horizontal and vertical.
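The merge-similar-neighbours rule above can be sketched with a union-find labelling; the union-find structure is an implementation choice not specified in the patent, and the threshold value is illustrative:

```python
import numpy as np

def merge_blocks(attention, thresh):
    """Greedily merge neighbouring unit blocks whose attention values differ
    by less than `thresh`, labelling each merged region with one root id."""
    h, w = attention.shape
    parent = list(range(h * w))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri

    for y in range(h):
        for x in range(w):
            i = y * w + x
            if x + 1 < w and abs(attention[y, x] - attention[y, x + 1]) < thresh:
                union(i, i + 1)   # horizontally similar
            if y + 1 < h and abs(attention[y, x] - attention[y + 1, x]) < thresh:
                union(i, i + w)   # vertically similar
    return np.array([find(i) for i in range(h * w)]).reshape(h, w)

# Two flat attention regions with a large step between them stay separate.
att = np.array([[0.1, 0.1, 0.9],
                [0.1, 0.1, 0.9]])
labels = merge_blocks(att, thresh=0.2)
```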
Step 5: quantize the image structural constraint, the disparity constraint and the temporal constraint into energy terms, and superpose them with weights into a total energy term. The parallax term is a constraint on the ideal position of a feature point in the virtual viewpoint image under a parallel camera model. As shown in Fig. 2, the coordinates (x_V, y_V) of a spatial point in the virtual view at position α are obtained by:

x_V = (1 − α)·x_L + α·x_R
y_V = y

That is, each feature point (x, y) produces one energy component:

e_D = S_{i,j}·(x̃ − x_V)²

wherein x̃ is the actual abscissa, in the virtual view, of the feature point of the original viewpoint, S_{i,j} is the attention of the corresponding sub-block, and i and j are obtained from the following formula, h and w being the height and width of a virtual block:

i = ⌊y/h⌋,  j = ⌊x/w⌋
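The parallel-camera position constraint and the attention-weighted energy a single feature point contributes can be illustrated numerically; all values below are illustrative:

```python
# Theoretical abscissa of a feature point in the virtual view at position
# alpha between the left (alpha = 0) and right (alpha = 1) cameras, and the
# attention-weighted squared deviation it contributes to the parallax energy.
def virtual_abscissa(x_left, x_right, alpha):
    return (1.0 - alpha) * x_left + alpha * x_right

def parallax_energy(x_actual, x_theoretical, attention):
    return attention * (x_actual - x_theoretical) ** 2

x_v = virtual_abscissa(x_left=120.0, x_right=100.0, alpha=0.25)     # -> 115.0
e = parallax_energy(x_actual=116.0, x_theoretical=x_v, attention=0.5)  # -> 0.5
```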
The structure term constrains the structure of important image regions not to be destroyed by the mapping between viewpoints; it can be further split into energies in the horizontal and vertical directions. For the horizontal term, each energy component is produced by a transverse block edge; for the vertical term, each energy component is produced by a longitudinal block edge.
The temporal term is a constraint that reduces jitter of the virtual viewpoint image in the time domain. For the temporal term, each energy component is produced by the gap between consecutive frames at a block vertex; with x̃ the abscissa of the block vertex in the current frame's virtual view and x̃′ its abscissa in the previous frame's virtual view:

e_T = S·(x̃ − x̃′)²
The energies of the parts are superposed with weights; the total energy, as a function of the block vertex coordinates of the virtual view, is

E = λ_D·E_D + λ_H·E_H + λ_V·E_V + λ_T·E_T

wherein λ_D, λ_H, λ_V and λ_T are the weighting factors of the parallax, horizontal, vertical and temporal terms respectively.
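Because every energy component is a weight times a squared linear expression in the unknown vertex abscissas, minimizing the total energy is a linear least-squares problem: stack square-root-weighted rows into A·x = b and solve. The tiny two-vertex system below is purely illustrative, not the patent's system:

```python
import numpy as np

lam_d, lam_h = 1.0, 0.5   # illustrative weighting factors
rows, rhs = [], []
# Parallax-style terms: vertex 0 should sit at abscissa 10, vertex 1 at 30.
rows.append(np.sqrt(lam_d) * np.array([1.0, 0.0])); rhs.append(np.sqrt(lam_d) * 10.0)
rows.append(np.sqrt(lam_d) * np.array([0.0, 1.0])); rhs.append(np.sqrt(lam_d) * 30.0)
# Structure-style term: the two vertices should stay w = 18 apart.
w = 18.0
rows.append(np.sqrt(lam_h) * np.array([-1.0, 1.0])); rhs.append(np.sqrt(lam_h) * w)
A, b = np.vstack(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(A, b, rcond=None)
# The minimizer compromises between the target positions and the edge length:
# x = [10.5, 29.5], so the solved edge length is 19 instead of 18.
```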
Step 6: by minimizing the total energy term, solve the mapping function between the block vertices of the virtual viewpoint image and those of the original viewpoint image.
Step 7: perform inverse mapping according to the inter-viewpoint mapping function to generate the required virtual viewpoints. On the basis of the block vertices between the virtual viewpoint image and the original viewpoint image obtained in the previous step, the mapping relations of the pixels within a block are solved in two parts:
Part I: determine the mapping relations of the vertices of all virtual blocks within a true block between the original viewpoint and the virtual view;
Part II: determine the mapping relations of the pixels within a virtual block between the original viewpoint and the virtual view.
In Part I, in order to obtain the mapping relations of the virtual-block vertices within a true block, the vertex relations of the internal virtual blocks should be jointly determined by all the true vertices on that block. As shown in Fig. 3, for each vertex P_{i,j}, the true vertices relevant to it are found as follows. First, starting from this vertex, the nearest edge vertex or true vertex in lateral distance is found on the left and on the right, labelled the left vertex P_{i−L,j} and the right vertex P_{i+R,j}, where L and R are lateral distances and are non-negative integers. Next, for the left vertex and the right vertex, the two true vertices nearest in longitudinal distance are found upward and downward respectively, namely the upper-left vertex P_{i−L,j−LU}, the lower-left vertex P_{i−L,j+LD}, the upper-right vertex P_{i+R,j−RU} and the lower-right vertex P_{i+R,j+RD}, where LU, LD, RU and RD are longitudinal distances and are non-negative integers.
Because the blocking information and vertex information remain unchanged between the original viewpoint and the virtual view, according to the idea of inverse distance weighting, the abscissa x̃_{i,j} of the point corresponding to P_{i,j} in the virtual view is related to the abscissas of the corresponding true vertices in the virtual view by:

x̃_{i,j} = (R/(L+R))·x̃_{i−L,j} + (L/(L+R))·x̃_{i+R,j}
        = (R/(L+R))·[(LD/(LD+LU))·x̃_{i−L,j−LU} + (LU/(LD+LU))·x̃_{i−L,j+LD}]
        + (L/(L+R))·[(RD/(RD+RU))·x̃_{i+R,j−RU} + (RU/(RD+RU))·x̃_{i+R,j+RD}]
        (R > 0, LD > 0, RD > 0)
The problem of Part II, given the result of Part I, can be described as follows: with the corresponding block vertex coordinates of the original viewpoint and the virtual view known, solve the correspondence of the block interiors. Since there is no vertical parallax between viewpoints, the ordinates of corresponding pixels (including fractional pixels) in the original viewpoint and the virtual view are equal. On this basis, a linear mapping in the horizontal direction is adopted: within a block, each pair of corresponding pixels has equal distance ratios to the left and right sides of the block. The concrete solution is shown in Fig. 4, where (a) is a block in the original viewpoint image and (b) is the corresponding block in the virtual viewpoint image; (x, y) and (x̃, ỹ) denote corresponding pixels of the original viewpoint and the virtual view respectively, and i and j denote the row and column of the block. By default, the ordinates of corresponding points in the virtual view and the original viewpoint are equal, that is, ỹ = y. The abscissa x̃, in the virtual view, of the pixel corresponding to the pixel (x, y) in the original viewpoint is:

x̃ = x̃_{i,j}·((x_{i,j+1} − x)/w)·((y_{i+1,j} − y)/h) + x̃_{i,j+1}·((x − x_{i,j})/w)·((y_{i+1,j} − y)/h)
  + x̃_{i+1,j}·((x_{i,j+1} − x)/w)·((y − y_{i,j})/h) + x̃_{i+1,j+1}·((x − x_{i,j})/w)·((y − y_{i,j})/h)
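The per-pixel mapping above is a bilinear interpolation of the four corner abscissas of the block; a sketch with illustrative names and values:

```python
def map_pixel(x, y, corner_orig, corners_virt, w, h):
    """Bilinear interpolation of a pixel's virtual-view abscissa from the
    four block-corner abscissas. `corner_orig` is the (x, y) of the block's
    top-left corner in the original view; `corners_virt` holds the virtual-view
    abscissas (top-left, top-right, bottom-left, bottom-right). Names are
    illustrative, not from the patent."""
    x0, y0 = corner_orig
    u, v = (x - x0) / w, (y - y0) / h     # normalised position inside the block
    tl, tr, bl, br = corners_virt
    return (tl * (1 - u) * (1 - v) + tr * u * (1 - v)
            + bl * (1 - u) * v + br * u * v)

# The centre of a 10x10 block whose corners map to 100/110/100/110 lands at
# the average of the four corner abscissas.
x_virt = map_pixel(55.0, 25.0, corner_orig=(50.0, 20.0),
                   corners_virt=(100.0, 110.0, 100.0, 110.0), w=10.0, h=10.0)
```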
Fig. 5 shows the blocking result for the right-view image of the first frame of the Book_Arrival sequence, where (a) is the result of fixed blocking with enlarged views of the HHI sign region, the stone lion region and the junction between the right-hand panel and the wall, and (b) is the corresponding result of adaptive blocking. The adaptive blocking method distinguishes different objects better, adaptively choosing blocks of suitable size near object edges so that different objects are divided into different blocks. As can be seen from the figure, the white panel edge on the right side of the image is completely separated from the white wall region of the background; the four borders of the HHI sign in the middle of the picture are all completely separated from the grey desk behind it; and the lion statue at the bottom of the picture is essentially separated from the desk behind it and from the ground.
Fig. 6 shows images of virtual view No. 8 generated from original viewpoint No. 10 by the fixed blocking and adaptive blocking methods, where (a) is an enlarged view of the HHI sign region based on fixed blocking and (b) is an enlarged view of the HHI sign region based on adaptive blocking. Comparing the experimental results, the adaptive blocking method performs better in local regions, mainly because with adaptive blocking different objects lie in different blocks, so an object is not affected by the mapping relations of another nearby object, in particular two nearby objects whose disparities differ. As can be seen from the enlarged views, in the result of the adaptive blocking method the bottom of the HHI sign in the middle of the image keeps a good shape, unaffected by the lion below it. This shows that, compared with the original fixed-blocking algorithm, the adaptive blocking method improves the image quality of the generated virtual view and preserves the edge shapes of objects in the image.

Claims (9)

1. An autostereoscopic display content generation method based on adaptive blocking, comprising the following steps:
(1) acquiring the current frame image, comprising left and right views, of a stereoscopic video;
(2) judging whether the current frame image is continuous in the time domain by computing the similarity between the current frame image and the previous frame image;
(3) extracting corresponding feature points of the left and right views of the current frame;
(4) virtually blocking the left view or the right view, computing the attention of each unit block, and merging the unit blocks into several adaptive blocks according to said attention;
(5) quantizing the structural constraint, disparity constraint and temporal constraint of the left view or the right view into energy terms, and superposing them with weights into a total energy term E;
(6) minimizing said total energy term E to obtain the actual abscissa, in the virtual view, of each feature point and each adaptive-block vertex of the left view or the right view;
(7) according to the actual abscissas of the feature points and the adaptive-block vertices in the virtual view, geometrically computing the actual abscissa, in the virtual view, of each unit-block vertex within each adaptive block, then obtaining the actual abscissa, in the virtual view, of each pixel within each unit block; and finally generating the virtual view by pixel-coordinate mapping.
2. The autostereoscopic display content generation method according to claim 1, characterized in that, in said step (2), when the camera is fixed and objects in the scene move, the method of computing the peak signal-to-noise ratio of the three-channel image is adopted to judge whether the current frame image is continuous in the time domain; when the camera moves relative to the whole scene, the method of computing the structural similarity is adopted.
3. The autostereoscopic display content generation method according to claim 1, characterized in that, in said step (3), corresponding feature points of the left and right views of the current frame are extracted by a method combining sparse feature points and dense feature points, wherein the sparse feature points are detected, described and matched by the ORB algorithm, and the dense feature points are found by uniformly sampled stereo matching, with a threshold set on the matching confidence.
4. The autostereoscopic display content generation method according to claim 1, characterized in that, in said step (4), the attention values of two adjacent unit blocks are compared, and if their difference is smaller than a preset threshold, the two unit blocks are merged.
5. The autostereoscopic display content generation method according to claim 1, characterized in that the expression of said total energy term E is as follows:

E = λ_D·E_D + λ_H·E_H + λ_V·E_V + λ_T·E_T

wherein E_D, E_H, E_V and E_T are the parallax, horizontal, vertical and temporal energy terms respectively, and λ_D, λ_H, λ_V and λ_T are their corresponding weighting factors.
6. The autostereoscopic display content generation method according to claim 5, characterized in that the expression of said parallax energy term E_D is as follows:

E_D = Σ S_feature·(x̃_feature − x_feature)²

wherein x̃_feature is the actual abscissa, in the virtual view, of a feature point of the left view or the right view, x_feature is the theoretical abscissa of that feature point in the virtual view, and S_feature is the attention of the unit block to which the feature point belongs.
7. The autostereoscopic display content generation method according to claim 5, characterized in that the expression of said horizontal energy term E_H is as follows:

E_H = Σ S_upperleft·(x̃_upperright − x̃_upperleft − w)²

wherein x̃_upperright is the actual abscissa, in the virtual view, of the upper-right vertex of an adaptive block of the left view or the right view, x̃_upperleft is the actual abscissa of the upper-left vertex of that adaptive block in the virtual view, w is the length of the upper edge of that adaptive block, and S_upperleft is the attention of the unit block to which the upper-left vertex belongs.
8. The autostereoscopic display content generation method according to claim 5, characterized in that the expression of said vertical energy term E_V is as follows:

E_V = Σ S_lowerleft·(x̃_lowerleft − x̃_upperleft)²

wherein x̃_lowerleft is the actual abscissa, in the virtual view, of the lower-left vertex of an adaptive block of the left view or the right view, x̃_upperleft is the actual abscissa of the upper-left vertex of that adaptive block in the virtual view, and S_lowerleft is the attention of the unit block to which the lower-left vertex belongs.
9. The autostereoscopic display content generation method according to claim 5, characterized in that the expression of said temporal energy term E_T is as follows:

E_T = Σ S·(x̃ − x̃′)²

wherein x̃ is the actual abscissa, in the virtual view, of an adaptive-block vertex of the left view or the right view, x̃′ is the actual abscissa of that vertex in the virtual view of the previous frame, and S is the attention of the unit block to which the vertex belongs.
CN201410765445.9A 2014-12-12 2014-12-12 Free stereoscopic display content generating method based on self-adaptive blocking Pending CN104639932A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410765445.9A CN104639932A (en) 2014-12-12 2014-12-12 Free stereoscopic display content generating method based on self-adaptive blocking


Publications (1)

Publication Number Publication Date
CN104639932A (en) 2015-05-20

Family

ID=53218173

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410765445.9A Pending CN104639932A (en) 2014-12-12 2014-12-12 Free stereoscopic display content generating method based on self-adaptive blocking

Country Status (1)

Country Link
CN (1) CN104639932A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101398886A (en) * 2008-03-17 2009-04-01 杭州大清智能技术开发有限公司 Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN101625768A (en) * 2009-07-23 2010-01-13 东南大学 Three-dimensional human face reconstruction method based on stereoscopic vision
CN103700099A (en) * 2013-12-18 2014-04-02 同济大学 Rotation and dimension unchanged wide baseline stereo matching method
CN104200517A (en) * 2014-08-06 2014-12-10 北京工业大学 Three-dimensional reconstruction method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIN Chengliang: "Research on Viewpoint Rendering Based on Sparse Features", China Master's Theses Full-text Database, Information Science and Technology Section *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018014324A1 (en) * 2016-07-22 2018-01-25 北京大学深圳研究生院 Method and device for synthesizing virtual viewpoints in real time
CN106909877A (en) * 2016-12-13 2017-06-30 浙江大学 A kind of vision based on dotted line comprehensive characteristics builds figure and localization method simultaneously
CN106909877B (en) * 2016-12-13 2020-04-14 浙江大学 Visual simultaneous mapping and positioning method based on dotted line comprehensive characteristics


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20150520