CN103581650A - Method for converting binocular 3D video into multi-view 3D video - Google Patents

Method for converting binocular 3D video into multi-view 3D video

Info

Publication number
CN103581650A
Authority
CN
China
Prior art keywords
video
image
algorithm
parallax
binocular
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310495786.4A
Other languages
Chinese (zh)
Other versions
CN103581650B (en)
Inventor
Ma Jie (马杰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Changhong Electric Co Ltd
Original Assignee
Sichuan Changhong Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Changhong Electric Co Ltd
Priority to CN201310495786.4A
Publication of CN103581650A
Application granted
Publication of CN103581650B
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention relates to a method for converting binocular 3D video into multi-view 3D video. The method comprises: step a, converting the video into a sequence of frames; step b, setting the rendering parameters and splitting the current frame into left and right images, which are then scaled; step c, setting the filtering and image-segmentation parameters and applying filtering and segmentation to the images; step d, computing initial disparity values, setting the message-iteration parameters and performing plane fitting; step e, taking, for each segmented region, the disparity-plane parameters that minimize the region's energy function as the optimal disparity-plane parameters of that region; step f, re-estimating the disparity values of each segmented region and applying Gaussian filtering; step g, repeating steps c to f n times, computing the motion vectors of the n frames and performing inter-frame smoothing; and step h, rendering the virtual viewpoints, stitching them into tiled multi-grid images and compressing those images into a video. The method effectively alleviates the shortage of naked-eye 3D content, and in converting ordinary 3D-format video into multi-view 3D video it effectively lowers 3D production cost and improves the 3D display effect.

Description

Method for converting binocular 3D video into multi-view 3D video
Technical field
The present invention relates to video processing, and in particular to a method for converting binocular 3D video into multi-view 3D video.
Background technology
With the development of autostereoscopic display technology, naked-eye 3D display has gradually become a popular research topic. Naked-eye 3D display attracts growing attention with its distinctive visual impact, and most current naked-eye 3D display solutions are based on multiple viewpoints, which places special demands on program content. On the one hand, content can be produced with modeling software such as 3DMAX and MAYA, but this falls far short of the demand; on the other hand, ordinary 3D-format video can be converted into multi-view 3D video to meet the content needs of multi-view 3D solutions. Binocular-to-multi-view conversion greatly enriches the range of available programs, reduces production cost, and achieves a naked-eye 3D display effect comparable to custom-made content.
Summary of the invention
The invention provides a method for converting binocular 3D video into multi-view 3D video, in order to address the severe shortage of existing naked-eye 3D content, reduce 3D production cost and improve the 3D display effect.
The method of the invention for converting binocular 3D video into multi-view 3D video comprises:
a. Extract the video frames of the video to be converted and save them in the order in which they appear in the video, for example by using an AVS script to convert the video;
b. Set the number of virtual viewpoints to be rendered and the resolution of the output multi-view video frames. Because the input frame is generally in side-by-side (left-right) format, split the current frame into separate left and right images, and scale the two images using cubic interpolation;
c. Set the parameters of the fast bilateral filtering algorithm and apply fast bilateral filtering to the left and right images respectively; then set the image-segmentation parameters and segment the filtered left and right images;
d. Set the disparity range and the stereo-matching reference window size of the left and right images; the disparity range is [-disp, disp], where disp is a positive integer chosen empirically. Within the disparity range, apply the SAD algorithm (Sum of Absolute Differences, an image-matching algorithm) to the left and right images and compute the initial disparity values with the WTA (Winner Take All) strategy. Then set the message-iteration parameters of the belief-propagation algorithm and perform plane fitting using the image-segmentation information; each segmented region yields one set of plane parameters;
e. For each segmented region, use the belief-propagation algorithm to compute the sum of the energy functions of the region and its adjacent regions, and take the disparity-plane parameters at which the region's energy function is minimal as the optimal disparity-plane parameters of that region. The detailed procedure is described in the Master's thesis of Nanjing University of Aeronautics and Astronautics, "Research on Belief Propagation Stereo Matching Algorithm Based on Image Segmentation", by Li Binbin;
f. For each segmented region, re-estimate all disparity values in the region using the optimal disparity-plane parameters of step e, and apply Gaussian filtering to the re-estimated disparity map;
g. Repeat steps c to f n times to obtain the original left and right video frames corresponding to n consecutive disparity maps, and compute the motion vectors of the n frames using feature-point detection and feature matching; using the computed motion vectors, apply inter-frame smoothing to the n computed disparity maps and save the result, where n is a natural number, typically 6 to 10;
h. Using the original left and right images together with the corresponding disparity maps, render the virtual viewpoints according to the multi-view rendering formula and the number of virtual viewpoints set in step b; then stitch the rendered views into a tiled multi-grid image according to the rule corresponding to the number of viewpoints, for example 4 viewpoints into a four-cell grid and 8 viewpoints into a nine-cell grid; finally compress the sequence of tiled frames into a video that can be played on a naked-eye 3D player.
Further, in step b the number of virtual viewpoints to be rendered is 4 or 8, the resolution of the output multi-view frames is 1920 x 1080, and the left and right images are scaled to 960 x 540.
Specifically, the fast bilateral filtering parameters set in step c comprise the filter window size, the color-component value and the spatial-component value; the image-segmentation parameters comprise the color threshold, the radius threshold and the minimum number of pixels per region.
Preferably, before fast bilateral filtering is applied to the left and right images, their color space is converted from RGB to the Lab color model; after filtering, the result is converted back from the Lab color model to RGB and saved. The color-space conversion yields a better filtering result.
Specifically, in step d the message-iteration parameters of the belief-propagation algorithm comprise the discontinuity penalty factor, the scale factor applied after message iteration, and the number of message iterations.
Further, in step d, before the belief-propagation message-iteration parameters are set, a left-right consistency check is applied to the initial disparity values; disparities that satisfy the rule are marked as reliable and those that do not are marked as unreliable. During plane fitting, the image-segmentation information is combined with all disparity values of the pixels marked as reliable within each segmented region.
Further, in step h, after the virtual viewpoints are rendered, holes lying within the left-right disparity range are filled by mutually filling from the left and right views, while holes lying outside the disparity range are filled with an image-inpainting algorithm; the tiled multi-grid image is then assembled.
The method of the invention for converting binocular 3D video into multi-view 3D video effectively addresses the shortage of existing naked-eye 3D content; by converting ordinary binocular 3D-format video into multi-view 3D video, it effectively reduces 3D production cost and improves the 3D display effect.
The above content of the invention is described in further detail below with reference to specific embodiments, but this should not be taken to limit the scope of the above subject matter of the invention to the following examples. Any substitutions or modifications made on the basis of common technical knowledge and customary means in the art, without departing from the idea of the invention described above, shall fall within the scope of the invention.
Embodiments
The method of the invention for converting binocular 3D video into multi-view 3D video comprises:
a. Use an AVS script to convert the video to be converted into a sequence of frames, and save the frames in the order in which they appear in the video;
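For illustration only, the frames could also be extracted with OpenCV instead of an AVS script; the sketch below makes that substitution, and the file names and output directory are assumptions rather than part of the described method.

```python
import os
import cv2  # pip install opencv-python

def extract_frames(video_path, out_dir):
    """Save every frame of the input video as a numbered PNG, in display order."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{idx:06d}.png"), frame)
        idx += 1
    cap.release()
    return idx

# extract_frames("input_sbs_3d.mp4", "frames")  # hypothetical input file
```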
b. Set the number of virtual viewpoints to be rendered, generally 4 or 8, and set the resolution of the output multi-view frames to 1920 x 1080. Because the input frame is generally in side-by-side (left-right) format, split the current frame into separate left and right images and scale each to 960 x 540 using cubic interpolation;
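A minimal sketch of step b, assuming a side-by-side input frame and using OpenCV's cubic interpolation; the helper name and default target size are illustrative.

```python
import cv2

def split_and_scale(frame, target=(960, 540)):
    """Split a side-by-side frame into left/right views and scale them with cubic interpolation."""
    h, w = frame.shape[:2]
    left = frame[:, : w // 2]
    right = frame[:, w // 2 :]
    left = cv2.resize(left, target, interpolation=cv2.INTER_CUBIC)
    right = cv2.resize(right, target, interpolation=cv2.INTER_CUBIC)
    return left, right
```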
c. Set the parameters of the fast bilateral filtering algorithm: the filter window size, the sigma value of the color component and the sigma value of the spatial component (sigma is a standard parameter of bilateral filtering). Convert the color space of the left and right images from RGB to the Lab color model, apply fast bilateral filtering to each image, then convert the filtered result back from the Lab color model to RGB and save it; the color-space conversion yields a better filtering result. Next set the image-segmentation parameters, namely the color threshold, the radius threshold and the minimum number of pixels per region, and segment the filtered left and right images;
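An illustrative sketch of step c; OpenCV's bilateralFilter and pyrMeanShiftFiltering are used here as stand-ins for the fast bilateral filter and the segmentation algorithm of the description, and the parameter values are assumptions.

```python
import cv2

def filter_and_segment(img_bgr, d=9, sigma_color=30, sigma_space=9, sp=10, sr=20):
    """Bilateral-filter in Lab space, then run a mean-shift pass as a simple segmentation proxy."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2Lab)       # RGB/BGR -> Lab before filtering
    lab = cv2.bilateralFilter(lab, d, sigma_color, sigma_space)
    filtered = cv2.cvtColor(lab, cv2.COLOR_Lab2BGR)       # back to RGB/BGR after filtering
    # sp = spatial radius, sr = color radius; regions smaller than a minimum
    # pixel count would be merged in a post-processing step.
    segmented = cv2.pyrMeanShiftFiltering(filtered, sp, sr)
    return filtered, segmented
```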
d. Set the disparity range and the stereo-matching reference window size of the left and right images; the disparity range is [-disp, disp], where disp is a positive integer chosen empirically. Within the disparity range, apply the SAD algorithm (Sum of Absolute Differences, an image-matching algorithm) to the left and right images and compute the initial disparity values with the WTA algorithm. The SAD cost is SAD(x, y, d) = Σ_{i<|r|} Σ_{j<|r|} |left(x + i, y + j) - right(x + i + d, y + j)|, where left is the left image, right is the right image, r is the window size and d is the current disparity. The WTA (Winner Take All) strategy selects, among all SAD(x, y, d), the value of d with the minimum cost as the disparity of the current pixel.
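A small sketch of the SAD/WTA step using NumPy and OpenCV; the brute-force search, the border handling and the window radius are simplifying assumptions (a production implementation would typically use an integral image or box filter for speed).

```python
import cv2
import numpy as np

def sad_wta_disparity(left_gray, right_gray, disp_max, r=4):
    """Brute-force SAD matching with Winner-Take-All (WTA) disparity selection."""
    left = left_gray.astype(np.float32)
    right = right_gray.astype(np.float32)
    best_cost = np.full(left.shape, np.inf, dtype=np.float32)
    disparity = np.zeros(left.shape, dtype=np.int32)
    window = np.ones((2 * r + 1, 2 * r + 1), dtype=np.float32)
    for d in range(-disp_max, disp_max + 1):
        shifted = np.roll(right, -d, axis=1)        # right(x + d, y); wraps at the image border
        sad = cv2.filter2D(np.abs(left - shifted), -1, window,
                           borderType=cv2.BORDER_REPLICATE)  # sum of |differences| over the window
        better = sad < best_cost
        best_cost[better] = sad[better]
        disparity[better] = d                        # WTA: keep the lowest-cost disparity
    return disparity
```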
Apply the left-right consistency rule to the initial disparity values: for the disparity dLeft(x, y) = d of the current point, if dRight(x + d, y) = dLeft(x, y) = d, the disparity consistency constraint is satisfied and the value is marked as a reliable disparity; otherwise it is marked as unreliable. Then set the message-iteration parameters of the belief-propagation algorithm: the discontinuity penalty factor, the scale factor applied after message iteration and the number of message iterations. Perform plane fitting using the image-segmentation information together with all disparity values of the pixels marked as reliable within each segmented region; each region yields one set of plane parameters. Let the plane equation be d(x, y) = ax + by + c, where a, b and c are the disparity-plane parameters and x and y are coordinates. From the way the disparity changes in the x and y directions, a voting algorithm fits the most likely values of a and b; then, using the existing disparity values together with the fitted a and b, a value of c is computed for each of the n pixels in the region (n being the number of pixels), and the voting algorithm selects the most likely c value.
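A sketch of the per-region plane fit by voting, assuming a boolean reliability mask and an integer-labelled segment mask; the finite-difference slope estimates and the histogram bin count are assumptions about how the vote is taken.

```python
import numpy as np

def fit_plane_by_voting(disp, reliable, region_mask, bins=200):
    """Fit d(x, y) = a*x + b*y + c for one segment by voting on the slopes and the intercept."""
    ys, xs = np.nonzero(region_mask & reliable)
    d = disp[ys, xs].astype(np.float32)
    if len(d) < 3:
        c0 = float(np.median(d)) if len(d) else 0.0
        return 0.0, 0.0, c0

    def vote(values):
        hist, edges = np.histogram(values, bins=bins)
        k = int(np.argmax(hist))
        return 0.5 * (edges[k] + edges[k + 1])   # centre of the most-voted bin

    # slope a: vote over finite differences of d along the x direction
    ox = np.argsort(xs)
    dx, dd = np.diff(xs[ox]), np.diff(d[ox])
    a = vote(dd[dx > 0] / dx[dx > 0]) if np.any(dx > 0) else 0.0
    # slope b: vote over finite differences of d along the y direction
    oy = np.argsort(ys)
    dy, dd = np.diff(ys[oy]), np.diff(d[oy])
    b = vote(dd[dy > 0] / dy[dy > 0]) if np.any(dy > 0) else 0.0
    # intercept c: one candidate per reliable pixel, then vote
    c = vote(d - a * xs - b * ys)
    return a, b, c
```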
e. For each segmented region, use the belief-propagation algorithm to compute the sum of the energy functions of the region and its adjacent regions, and take the disparity-plane parameters at which the region's energy function is minimal as the optimal disparity-plane parameters of that region. The global energy function is defined as E(d) = Σ_{p∈P} D_p(d_p) + Σ_{(p,q)∈N} V(d_p, d_q), where d denotes the disparity assignment of the whole image, N is the set of neighbouring pixel pairs in the image, d_p is the disparity assigned to point p, the smoothness term V(d_p, d_q) is the disparity-discontinuity penalty for assigning disparities d_p and d_q to neighbouring pixels p and q, P is the set of pixels in the image, and the data term D_p(d_p) is the dissimilarity measure when point p has disparity d_p. The disparity assignment that minimizes the global energy is the final disparity of the image. For details see the Master's thesis of Nanjing University of Aeronautics and Astronautics, "Research on Belief Propagation Stereo Matching Algorithm Based on Image Segmentation", by Li Binbin;
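As an illustration of the energy in step e, the sketch below evaluates E(d) for a candidate disparity map on a 4-connected grid; the truncated-linear smoothness cost and the array layout of the data term are assumptions, not the exact costs used by the cited thesis.

```python
import numpy as np

def global_energy(disp, data_cost, penalty=1.0, trunc=4.0):
    """E(d) = sum_p D_p(d_p) + sum_{(p,q) in N} V(d_p, d_q) on a 4-connected pixel grid.

    disp:      HxW integer disparity map (a candidate assignment d)
    data_cost: HxWxL array where data_cost[y, x, l] = D_p(l) for disparity label l
    V:         truncated linear penalty, penalty * min(|d_p - d_q|, trunc)
    """
    h, w = disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    data_term = data_cost[ys, xs, disp].sum()
    diff_h = np.abs(disp[:, 1:] - disp[:, :-1])   # horizontal neighbour pairs
    diff_v = np.abs(disp[1:, :] - disp[:-1, :])   # vertical neighbour pairs
    smooth_term = penalty * (np.minimum(diff_h, trunc).sum() +
                             np.minimum(diff_v, trunc).sum())
    return data_term + smooth_term
```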
f. For each segmented region, re-estimate all disparity values in the region using the optimal disparity-plane parameters: after step e each region has its optimized plane parameters, so the disparity of every pixel in the region is recomputed from the formula d(x, y) = ax + by + c, where x and y are the pixel coordinates. The re-estimated disparity map is then smoothed with Gaussian filtering, the Gaussian kernel being computed as G(x, y) = (1 / (2πδ²)) · exp(-(x² + y²) / (2δ²)), where δ is the standard deviation and x and y are the distances of the current point from the kernel centre in the x and y directions;
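A short sketch of step f: re-evaluate the fitted plane for every pixel of each segment, then Gaussian-filter the disparity map. The dictionary layout of the plane parameters is an assumption, and OpenCV derives the kernel size from the standard deviation when ksize is (0, 0).

```python
import cv2
import numpy as np

def reestimate_and_smooth(disp, plane_params, labels, sigma=1.5):
    """Replace each segment's disparities with its fitted plane d = a*x + b*y + c, then Gaussian-filter."""
    h, w = disp.shape
    ys, xs = np.mgrid[0:h, 0:w]
    refined = disp.astype(np.float32).copy()
    for label, (a, b, c) in plane_params.items():   # plane_params: {segment label: (a, b, c)}
        mask = labels == label
        refined[mask] = a * xs[mask] + b * ys[mask] + c
    return cv2.GaussianBlur(refined, (0, 0), sigma)  # kernel size chosen from sigma
```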
g. Repeat steps c to f n times to obtain the original left and right video frames corresponding to n consecutive disparity maps, and compute the motion vectors of the n frames using feature-point detection and feature matching; n is generally 6 to 10. Concretely: detect the Harris corners of adjacent frames, compute a feature descriptor for each corner, match the feature points by their descriptors, and compute the motion vector between adjacent frames from the matches. Using the computed motion vector, apply the corresponding translation, scaling and similar operations to the image, then compute the residual between adjacent frames; when the residual is below a threshold thresh, apply weighted smoothing to the n computed disparity maps as inter-frame smoothing and save the result;
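A sketch of the inter-frame step using OpenCV: Harris-style corners from goodFeaturesToTrack, Lucas-Kanade tracking as the matching step, and a simple weighted blend of consecutive disparity maps. The tracker, the median aggregation and the blend weight are assumptions standing in for the descriptor matching and weighted smoothing of the description.

```python
import cv2
import numpy as np

def frame_motion_vector(prev_gray, curr_gray):
    """Estimate a global motion vector between two frames from tracked Harris corners."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500, qualityLevel=0.01,
                                  minDistance=7, useHarrisDetector=True)
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    flow = (nxt[good] - pts[good]).reshape(-1, 2)
    return np.median(flow, axis=0)                 # robust average displacement (dx, dy)

def smooth_disparity_sequence(disps, weight=0.5):
    """Weighted temporal smoothing of consecutive disparity maps."""
    out = [disps[0].astype(np.float32)]
    for d in disps[1:]:
        out.append(weight * out[-1] + (1.0 - weight) * d.astype(np.float32))
    return out
```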
h. Using the original left and right images together with the corresponding disparity maps, render the virtual viewpoints according to the multi-view rendering formula and the number of virtual viewpoints set in step b. Multi-view rendering is essentially a per-pixel translation, and the translation vector is computed as Shift(x, y) = dScale * d(x, y), where x and y are the pixel coordinates and dScale is a shift factor related to the resolution of the depth map and the rendering mode. If a hole in the rendered image lies within the left-right disparity range, it is filled by mutually filling from the left and right views: the hole is located first, and the information on either side of it is used to find, in the other view, the region that best matches the hole region of the current view. If the hole lies outside the disparity range, an image-inpainting algorithm is used, which repairs the hole using only the information already present in the current view. The rendered views are then stitched into a tiled multi-grid image according to the rule corresponding to the number of viewpoints, for example 4 viewpoints into a four-cell grid and 8 viewpoints into a nine-cell grid, arranged from left to right and top to bottom. Finally the sequence of tiled frames is compressed into a video that can be played on a naked-eye 3D player.
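Finally, an illustrative sketch of the per-pixel shift and the grid stitching of step h; hole filling and inpainting are omitted, and the example shift factors are hypothetical values, not those prescribed by the method.

```python
import cv2
import numpy as np

def render_view(src_bgr, disp, d_scale):
    """Warp one source view horizontally by Shift(x, y) = d_scale * d(x, y)."""
    grid = np.mgrid[0:disp.shape[0], 0:disp.shape[1]].astype(np.float32)
    ys, xs = grid[0], grid[1]
    map_x = xs - d_scale * disp.astype(np.float32)          # backward mapping into the source
    return cv2.remap(src_bgr, map_x, ys, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)         # unmapped pixels remain as holes

def stitch_grid(views, cols):
    """Stitch rendered views left-to-right, top-to-bottom into a tiled grid image."""
    rows = [np.hstack(views[i:i + cols]) for i in range(0, len(views), cols)]
    return np.vstack(rows)

# Example: 4 viewpoints stitched into a 2x2 grid, with hypothetical shift factors
# views = [render_view(left_img, disp, s) for s in (-0.5, 0.0, 0.5, 1.0)]
# grid = stitch_grid(views, cols=2)
```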

Claims (7)

1. A method for converting binocular 3D video into multi-view 3D video, comprising:
a. extracting the video frames of the video to be converted and saving them in the order in which they appear in the video;
b. setting the number of virtual viewpoints to be rendered and the resolution of the output multi-view frames, splitting the current frame into separate left and right images, and scaling the left and right images with cubic interpolation;
c. setting the parameters of the fast bilateral filtering algorithm and applying fast bilateral filtering to the left and right images respectively, then setting the image-segmentation parameters and segmenting the filtered left and right images;
d. setting the disparity range and the stereo-matching reference window size of the left and right images, applying the SAD algorithm to the left and right images within the disparity range and computing the initial disparity values with the WTA strategy, then setting the message-iteration parameters of the belief-propagation algorithm and performing plane fitting using the image-segmentation information, each segmented region yielding one set of plane parameters;
e. for each segmented region, using the belief-propagation algorithm to compute the sum of the energy functions of the region and its adjacent regions, and taking the disparity-plane parameters at which the region's energy function is minimal as the optimal disparity-plane parameters of that region;
f. re-estimating all disparity values in each segmented region with the optimal disparity-plane parameters of step e, and applying Gaussian filtering to the re-estimated disparity map;
g. repeating steps c to f n times to obtain the original left and right video frames corresponding to n consecutive disparity maps, computing the motion vectors of the n frames using feature-point detection and feature matching, and applying inter-frame smoothing to the n computed disparity maps according to the computed motion vectors and saving the result, n being a natural number;
h. rendering the virtual viewpoints from the original left and right images together with the corresponding disparity maps, according to the multi-view rendering formula and the number of virtual viewpoints set in step b, stitching the rendered views into a tiled multi-grid image according to the rule corresponding to the number of viewpoints, and finally compressing the sequence of tiled frames into a video.
2. The method for converting binocular 3D video into multi-view 3D video of claim 1, wherein in step b the number of virtual viewpoints to be rendered is 4 or 8, the resolution of the output multi-view frames is 1920 x 1080, and the left and right images are scaled to 960 x 540.
3. The method for converting binocular 3D video into multi-view 3D video of claim 1, wherein the fast bilateral filtering parameters set in step c comprise the filter window size, the color-component value and the spatial-component value, and the image-segmentation parameters comprise the color threshold, the radius threshold and the minimum number of pixels per region.
4. The method for converting binocular 3D video into multi-view 3D video of claim 1, wherein before fast bilateral filtering is applied to the left and right images their color space is converted from RGB to the Lab color model, and after filtering the result is converted back from the Lab color model to RGB and saved.
5. The method for converting binocular 3D video into multi-view 3D video of claim 1, wherein in step d the message-iteration parameters of the belief-propagation algorithm comprise the discontinuity penalty factor, the scale factor applied after message iteration and the number of message iterations.
6. The method for converting binocular 3D video into multi-view 3D video of claim 1, wherein in step d, before the belief-propagation message-iteration parameters are set, a left-right consistency check is applied to the initial disparity values, disparities satisfying the rule being marked as reliable and disparities not satisfying it being marked as unreliable; and during plane fitting the image-segmentation information is combined with all disparity values of the pixels marked as reliable within each segmented region.
7. The method for converting binocular 3D video into multi-view 3D video of claim 1, wherein in step h, after the virtual viewpoints are rendered, holes lying within the left-right disparity range are filled by mutually filling from the left and right views and holes lying outside the disparity range are filled with an image-inpainting algorithm, after which the tiled multi-grid image is assembled.
CN201310495786.4A 2013-10-21 2013-10-21 Method for converting binocular 3D video into multi-view 3D video Active CN103581650B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310495786.4A CN103581650B (en) 2013-10-21 2013-10-21 Method for converting binocular 3D video into multi-view 3D video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310495786.4A CN103581650B (en) 2013-10-21 2013-10-21 Method for converting binocular 3D video into multi-view 3D video

Publications (2)

Publication Number Publication Date
CN103581650A true CN103581650A (en) 2014-02-12
CN103581650B CN103581650B (en) 2015-08-19

Family

ID=50052432

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310495786.4A Active CN103581650B (en) 2013-10-21 2013-10-21 Method for converting binocular 3D video into multi-view 3D video

Country Status (1)

Country Link
CN (1) CN103581650B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101483788A (en) * 2009-01-20 2009-07-15 清华大学 Method and apparatus for converting plane video into tridimensional video
CN102752616A (en) * 2012-06-20 2012-10-24 四川长虹电器股份有限公司 Method for converting double-view three-dimensional video to multi-view three-dimensional video
CN102831601A (en) * 2012-07-26 2012-12-19 中北大学 Three-dimensional matching method based on union similarity measure and self-adaptive support weighting
CN103269435A (en) * 2013-04-19 2013-08-28 四川长虹电器股份有限公司 Binocular to multi-view virtual viewpoint synthetic method

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104469440A (en) * 2014-04-16 2015-03-25 成都理想境界科技有限公司 Vide playing method, video player and corresponding video playing device
CN103945208A (en) * 2014-04-24 2014-07-23 西安交通大学 Parallel synchronous scaling engine and method for multi-view naked eye 3D display
CN103945208B (en) * 2014-04-24 2015-10-28 西安交通大学 A kind of parallel synchronous zooming engine for multiple views bore hole 3D display and method
CN104378616A (en) * 2014-09-03 2015-02-25 王元庆 Tiled type multi-view image frame packaging structure and construction method
CN104717514A (en) * 2015-02-04 2015-06-17 四川长虹电器股份有限公司 Multi-viewpoint image rendering system and method
WO2018082604A1 (en) * 2016-11-04 2018-05-11 宁波舜宇光电信息有限公司 Parallax and distance parameter calculation methods, dual camera module and electronic device
CN108377376A (en) * 2016-11-04 2018-08-07 宁波舜宇光电信息有限公司 Parallax calculation method, dual camera module and electronic equipment
CN108377376B (en) * 2016-11-04 2021-01-26 宁波舜宇光电信息有限公司 Parallax calculation method, double-camera module and electronic equipment
CN107493465A (en) * 2017-09-18 2017-12-19 郑州轻工业学院 A kind of virtual multi-view point video generation method
CN107493465B (en) * 2017-09-18 2019-06-07 郑州轻工业学院 A kind of virtual multi-view point video generation method
CN109688397A (en) * 2017-10-18 2019-04-26 上海质尊文化传媒发展有限公司 A kind of 2D switchs to the method for 3D video
CN109688397B (en) * 2017-10-18 2021-10-22 上海质尊文化传媒发展有限公司 Method for converting 2D (two-dimensional) video into 3D video
CN108492326A (en) * 2018-01-31 2018-09-04 北京大学深圳研究生院 The resolution ratio solid matching method gradually refined from low to high and system
CN108492326B (en) * 2018-01-31 2021-11-23 北京大学深圳研究生院 Stereo matching method and system with gradually refined resolution ratio from low to high
CN110310317A (en) * 2019-06-28 2019-10-08 西北工业大学 A method of the monocular vision scene depth estimation based on deep learning

Also Published As

Publication number Publication date
CN103581650B (en) 2015-08-19

Similar Documents

Publication Publication Date Title
CN103581650B (en) Method for converting binocular 3D video into multi-view 3D video
CN100576934C (en) Virtual visual point synthesizing method based on the degree of depth and block information
CN103702098B (en) Three viewpoint three-dimensional video-frequency depth extraction methods of constraint are combined in a kind of time-space domain
CN106504190B (en) A kind of three-dimensional video-frequency generation method based on 3D convolutional neural networks
CN101742349B (en) Method for expressing three-dimensional scenes and television system thereof
CN102263979B (en) Depth map generation method and device for plane video three-dimensional conversion
CN102254348B (en) Virtual viewpoint mapping method based o adaptive disparity estimation
US20130286017A1 (en) Method for generating depth maps for converting moving 2d images to 3d
CN103236082A (en) Quasi-three dimensional reconstruction method for acquiring two-dimensional videos of static scenes
CN103248909B (en) Method and system of converting monocular video into stereoscopic video
CN102592275A (en) Virtual viewpoint rendering method
CN102026013A (en) Stereo video matching method based on affine transformation
CN104639933A (en) Real-time acquisition method and real-time acquisition system for depth maps of three-dimensional views
CN108234985B (en) Filtering method under dimension transformation space for rendering processing of reverse depth map
CN112019828B (en) Method for converting 2D (two-dimensional) video into 3D video
CN101605271A (en) A kind of 2D based on single image changes the 3D method
CN103679739A (en) Virtual view generating method based on shielding region detection
CN103716615B (en) 2D video three-dimensional method based on sample learning and depth image transmission
CN103763564A (en) Depth image coding method based on edge lossless compression
Lu et al. A survey on multiview video synthesis and editing
CN108259917A (en) 3 D video decoding method and system based on depth time domain down-sampling
KR101103511B1 (en) Method for Converting Two Dimensional Images into Three Dimensional Images
CN107592538A (en) A kind of method for reducing stereoscopic video depth map encoder complexity
Chellappa et al. Academic Press Library in Signal Processing, Volume 6: Image and Video Processing and Analysis and Computer Vision
CN104639932A (en) Free stereoscopic display content generating method based on self-adaptive blocking

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant