CN102368824A - Video stereo vision conversion method - Google Patents

Video stereo vision conversion method

Info

Publication number
CN102368824A
CN102368824A (application numbers CN2011102762754A, CN201110276275A; granted publication CN102368824B)
Authority
CN
China
Prior art keywords
viewpoint
input video
key frame
depth map
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011102762754A
Other languages
Chinese (zh)
Other versions
CN102368824B (en)
Inventor
戴琼海
杨铀
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN2011102762754A priority Critical patent/CN102368824B/en
Publication of CN102368824A publication Critical patent/CN102368824A/en
Application granted granted Critical
Publication of CN102368824B publication Critical patent/CN102368824B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a video stereo vision conversion method comprising the following steps: obtaining the number of viewpoints n of an input video and the number of viewpoints N required for naked-eye stereoscopic display, where n is less than N; obtaining a key frame of each viewpoint of the input video and computing its scene depth to obtain the depth map of the key frame of each viewpoint; from two adjacent key frames of each viewpoint and their corresponding depth maps, obtaining the depth maps of the non-key frames between the two adjacent key frames, and repeating this step to obtain the depth maps of all non-key frames of each viewpoint; from the images of the input video and the depth maps of each viewpoint, rendering the images of N-n additional viewpoints, which together with the images of the input video form N viewpoint images; and performing pixel arrangement on the N viewpoint images to obtain an N-viewpoint image adapted to a naked-eye stereoscopic display device. The method enables three-dimensional stereoscopic display of single-viewpoint planar video, binocular stereo video and the like, and saves production time and cost.

Description

Video stereo vision conversion method
Technical field
The present invention relates to the field of computer vision, and in particular to a video stereo vision conversion method.
Background technology
With the continuous development of 3D stereoscopic display technology, stereoscopic products such as 3D films, televisions and mobile devices are spreading rapidly, and public demand for stereoscopic video keeps growing. Stereoscopic video in the prior art is displayed in a binocular mode: during viewing, active shutter, polarized or red-blue anaglyph glasses deliver the two images of a binocular pair to the viewer's left and right eyes respectively, producing the perception of stereoscopic vision. This approach requires the user to wear glasses and is inconvenient to watch.
To overcome the need to wear glasses when watching 3D video, the prior art uses naked-eye (autostereoscopic) 3D display devices. Naked-eye 3D display technology lets users perceive the stereoscopic effect of a video without wearing any auxiliary equipment, and is an attractive mode of stereoscopic viewing in future home, advertising and exhibition settings. The problem in the prior art is that stereoscopic display on a naked-eye 3D device requires a video source supplying as many viewpoints N as the device presents, simultaneously, which makes video sources for naked-eye 3D displays difficult to obtain.
In the prior art, one approach is to develop a video capture device with N viewpoints and record them synchronously to provide a source for the naked-eye stereoscopic display. This approach suffers, on the one hand, from high production cost, long production cycles, and demanding requirements on the capture equipment and auxiliary control equipment; on the other hand, the large body of existing video material, such as single-viewpoint planar content and binocular stereo content, is difficult to show on naked-eye 3D display devices.
Summary of the invention
The object of the invention is to address at least one of the above technical deficiencies.
To this end, the invention proposes a video stereo vision conversion method comprising the following steps. S1: obtain the number of viewpoints n of the input video. S2: obtain the number of viewpoints N required for naked-eye stereoscopic display, where n < N. S3: obtain the key frames of each viewpoint of the input video, compute the scene depth of the key frames of each viewpoint, and obtain the depth maps of the key frames of each viewpoint. S4: from two adjacent key frames of each viewpoint and the depth maps of those two key frames, obtain the depth maps of the non-key frames between them. S5: repeat step S4 to obtain the depth maps of all non-key frames of each viewpoint. S6: from the images of the input video and the depth maps of each viewpoint, render the images of N-n viewpoints, which together with the images of the input video form N viewpoint images. S7: perform pixel arrangement on the N viewpoint images to obtain an N-viewpoint image suited to the predetermined naked-eye stereoscopic display device.
In one embodiment of the invention, step S1 further comprises: S11: determine the number of files of the input video; S12: if the number of files is not 1, the number of viewpoints n of the input video equals the number of files; S13: if the number of files is 1, further determine the number of segments of the file, and the number of viewpoints n equals the number of segments.
In one embodiment of the invention, step S3 further comprises: if the number of viewpoints n of the input video is 1, obtaining the key frames of that viewpoint and computing their scene depth to obtain their depth maps; if n = 2, obtaining the key frames of each viewpoint, computing the disparity of the input video, and obtaining the depth maps of the key frames of each viewpoint from the conversion relation between disparity and scene depth.
In one embodiment of the invention, step S4 further comprises:
S41: from two adjacent key frames K_n and K_{n+1} of each viewpoint and the corresponding depth maps DK_n and DK_{n+1}, obtaining the non-key frames F_i (i = 1, 2, ..., t) between K_n and K_{n+1}, where t is the number of non-key frames;
S42: computing the optical flow maps between K_n and F_1, between F_1 and F_2, ..., and between F_{t-1} and F_t, and, guided by these optical flow maps, copying the pixels of DK_n into the non-key frames F_i (i = 1, 2, ..., t) to obtain first depth maps DF_i^(1) (i = 1, 2, ..., t);
S43: median-filtering the first depth maps DF_i^(1) to obtain third depth maps DF_i^(3) (i = 1, 2, ..., t);
S44: computing the optical flow maps between K_{n+1} and F_t, between F_t and F_{t-1}, ..., and between F_2 and F_1, and, guided by these optical flow maps, copying the pixels of DK_{n+1} into the non-key frames F_i (i = t, t-1, ..., 1) to obtain second depth maps DF_i^(2) (i = 1, 2, ..., t);
S45: median-filtering the second depth maps DF_i^(2) to obtain fourth depth maps DF_i^(4) (i = 1, 2, ..., t);
S46: averaging the corresponding pixels of the third depth maps DF_i^(3) and the fourth depth maps DF_i^(4) to obtain the depth maps DF_i (i = 1, 2, ..., t) of the non-key frames.
In one embodiment of the invention, step S6 further comprises: if the number of viewpoints n of the input video is 1, obtaining the image and depth map of that viewpoint and performing pixel rendering for the predetermined desired viewpoint positions; if n = 2, obtaining the image and depth map of each viewpoint, together with the distances DL and DR from each viewpoint position to the predetermined desired viewpoint position and the distance D between the viewpoint positions; the pixel value at the desired viewpoint position is computed as
Pixel = Pixel(L) * DR/D + Pixel(R) * DL/D, where Pixel(L) is the pixel value of the viewpoint corresponding to the distance DR from the predetermined desired viewpoint position, and Pixel(R) is the pixel value of the viewpoint corresponding to the distance DL.
In one embodiment of the invention, the method further comprises: if the number of predetermined desired viewpoint positions is greater than 1, repeating the pixel rendering for each predetermined desired viewpoint position.
In one embodiment of the invention, step S6 also comprises: if the number of viewpoints n of the input video is greater than 1, rendering the images of N-n viewpoints from the images of the n viewpoints of the video and the disparity maps of the video, which together with the images of the n viewpoints form the N viewpoint images for naked-eye stereoscopic display.
The video stereo vision conversion method according to embodiments of the invention has at least the following beneficial effects:
(1) It converts input video into the multi-viewpoint video required by naked-eye stereoscopic display devices, achieving a good naked-eye stereoscopic viewing effect.
(2) It avoids the high production cost of naked-eye 3D display content while shortening the production cycle.
(3) It allows the large body of existing video material to be shown easily on naked-eye 3D display devices.
Additional aspects and advantages of the invention are set forth in part in the description below; they will in part become apparent from that description or may be learned through practice of the invention.
Description of drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of the video stereo vision conversion method according to an embodiment of the invention.
Embodiment
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the drawings, where identical or similar reference numbers throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described with reference to the drawings are exemplary, serve only to explain the invention, and are not to be construed as limiting it.
Fig. 1 is a flowchart of the video stereo vision conversion method according to an embodiment of the invention. As shown in Fig. 1, the method comprises the following steps:
Step S101: obtain the number of viewpoints n of the input video.
The input video may comprise single-viewpoint planar video and binocular stereo video.
Specifically, the number of files of the input video is determined first. If the number of files is not 1, the number of viewpoints n equals the number of files; if it is 1, the number of segments of the file is further determined, and n equals the number of segments.
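The decision rule above can be sketched as a small helper; this is an illustration only, and the parameter names are assumptions rather than patent terminology:

```python
def viewpoint_count(num_files, num_segments=1):
    """Number of viewpoints n of the input video (step S101).

    If the video arrives as several files, each file is taken to be one
    viewpoint; a single file may still carry several viewpoints as segments.
    """
    if num_files != 1:
        return num_files
    return num_segments
```

For example, a two-file binocular source gives n = 2, while a single unsegmented file gives n = 1.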
Step S102: obtain the number of viewpoints N required for naked-eye stereoscopic display, where n < N.
Here N is obtained from the naked-eye stereoscopic display device to be used.
Step S103: obtain the key frames of each viewpoint of the input video, compute the scene depth of the key frames of each viewpoint, and obtain the depth maps of the key frames of each viewpoint.
Specifically, in one embodiment of the invention, the depth maps of the key frames of each viewpoint can be obtained as follows.
First, the key frames of each viewpoint of the input video are obtained.
The key frames may be obtained by the method described in patent ZL200810225050.4; other key-frame selection algorithms may also be used, for example shot-boundary-based methods, motion-analysis-based methods, image-information-based methods, and video-clustering-based methods.
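As a hedged illustration of one such selection strategy, the sketch below marks a frame as a key frame when its grey-level histogram differs sharply from the previous frame's (a crude shot-boundary test); the threshold value is an assumption, not a figure from the patent or the cited methods:

```python
import numpy as np

def key_frame_indices(frames, threshold=0.4):
    """Pick key frames where the grey-level histogram changes sharply.

    `frames` is a sequence of 2-D uint8 arrays; frame 0 is always a key
    frame.  This is a simplified stand-in for the key-frame selection
    methods named in the text, closest in spirit to shot-boundary detection.
    """
    keys = [0]
    for i in range(1, len(frames)):
        # normalized 32-bin histograms of consecutive frames
        h_prev = np.histogram(frames[i - 1], bins=32, range=(0, 256))[0] / frames[i - 1].size
        h_cur = np.histogram(frames[i], bins=32, range=(0, 256))[0] / frames[i].size
        # L1 distance between normalized histograms lies in [0, 2]
        if np.abs(h_prev - h_cur).sum() > threshold:
            keys.append(i)
    return keys
```

Two visually identical frames yield a distance of 0 and are skipped; a cut from a dark frame to a bright one yields a large distance and starts a new key frame.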
Then, the scene depth of the key frames of each viewpoint is computed to obtain their depth maps.
Specifically, if the number of viewpoints n of the input video is 1, the scene depth of each key frame is computed from the key frame itself to obtain its depth map; for example, the method described in patent ZL200710117654.2 may be used, or another method may be selected.
If n = 2, the disparity of the input video is computed from the key frames of the two viewpoints, and the depth maps of the key frames of each viewpoint are obtained from the conversion relation between disparity and scene depth.
The disparity of the key frames may be computed with a standard stereo matching method, or with another method.
Disparity and scene depth both describe the three-dimensional structure of the scene, and the two can be converted into each other.
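The patent only states that disparity and depth are interconvertible; a minimal sketch of the conversion, assuming the standard pinhole-stereo relation Z = f * B / d (focal length f, baseline B, disparity d), which is not spelled out in the text:

```python
import numpy as np

def disparity_to_depth(disparity, focal_length, baseline, eps=1e-6):
    """Convert a disparity map to a depth map via Z = f * B / d.

    `eps` guards against division by zero where the disparity vanishes
    (points at infinity).  The relation assumed here is the standard
    rectified-stereo one, not a formula quoted from the patent.
    """
    d = np.asarray(disparity, dtype=np.float64)
    return focal_length * baseline / np.maximum(d, eps)
```

Doubling the disparity halves the recovered depth, matching the inverse relation between the two quantities.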
Step S104: from two adjacent key frames of each viewpoint and the depth maps of those two key frames, obtain the depth maps of the non-key frames between them.
Specifically, first, from two adjacent key frames K_n and K_{n+1} of a viewpoint and their corresponding depth maps DK_n and DK_{n+1}, the non-key frames F_i (i = 1, 2, ..., t) between K_n and K_{n+1} are obtained, where t is the number of non-key frames.
The optical flow maps between K_n and F_1, between F_1 and F_2, ..., and between F_{t-1} and F_t are computed, and, guided by these optical flow maps, the pixels of DK_n are copied into the non-key frames F_i (i = 1, 2, ..., t), yielding first depth maps DF_i^(1) (i = 1, 2, ..., t).
Median filtering of the first depth maps DF_i^(1) yields third depth maps DF_i^(3) (i = 1, 2, ..., t).
Likewise, the optical flow maps between K_{n+1} and F_t, between F_t and F_{t-1}, ..., and between F_2 and F_1 are computed, and, guided by these optical flow maps, the pixels of DK_{n+1} are copied into the non-key frames F_i (i = t, t-1, ..., 1), yielding second depth maps DF_i^(2) (i = 1, 2, ..., t).
Median filtering of the second depth maps DF_i^(2) yields fourth depth maps DF_i^(4) (i = 1, 2, ..., t). The same median filter is used for the third and fourth depth maps.
Finally, averaging the corresponding pixels of the third depth maps DF_i^(3) and the fourth depth maps DF_i^(4) yields the depth maps DF_i (i = 1, 2, ..., t) of the non-key frames.
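The three operations of this step, flow-guided pixel copy, median filtering, and averaging, can be sketched as follows. This is a simplified illustration under assumed conventions (the flow field stores per-pixel (dy, dx) displacements, a fixed 3x3 median window is used); a real implementation would take the flow from an optical-flow estimator and handle occlusions:

```python
import numpy as np

def warp_depth(depth, flow):
    """Copy depth pixels along an optical flow field (steps S42/S44).

    flow[y, x] = (dy, dx) sends the depth value at (y, x) to the rounded
    target position, clamped to the image; a stand-in for the patent's
    flow-guided pixel copy.
    """
    h, w = depth.shape
    out = np.zeros_like(depth)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            ty = min(max(int(round(y + dy)), 0), h - 1)
            tx = min(max(int(round(x + dx)), 0), w - 1)
            out[ty, tx] = depth[y, x]
    return out

def median3(img):
    """3x3 median filter with edge replication (steps S43/S45)."""
    padded = np.pad(img, 1, mode="edge")
    stack = [padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
             for dy in range(3) for dx in range(3)]
    return np.median(np.stack(stack), axis=0)

def non_key_depth(forward, backward):
    """Median-filter the forward and backward propagated maps and average
    them pixelwise (step S46)."""
    return (median3(forward) + median3(backward)) / 2.0
```

Propagating from both neighbouring key frames and averaging damps errors that either direction of optical flow introduces on its own.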
Step S105: repeat step S104 to obtain the depth maps of all non-key frames of each viewpoint of the input video.
Specifically, the depth maps of the non-key frames between every pair of adjacent key frames of the input video are computed by the method of step S104, thereby obtaining the depth maps of all non-key frames of each viewpoint.
Step S106: from the images of the input video and the depth maps of each viewpoint, render the images of N-n viewpoints, which together with the images of the input video form the N viewpoint images.
Specifically, if the number of viewpoints n of the input video is 1, the image of the viewpoint and its corresponding depth map are obtained, and pixel rendering is performed for the predetermined desired viewpoint positions.
If n = 2, the image and corresponding depth map of each viewpoint are obtained, together with the distances DL and DR from each viewpoint position to the predetermined desired viewpoint position and the distance D between the viewpoint positions. The pixel value at the desired viewpoint position is then computed as
Pixel = Pixel(L) * DR/D + Pixel(R) * DL/D, where Pixel(L) is the pixel value of the viewpoint corresponding to the distance DR from the predetermined desired viewpoint position, and Pixel(R) is the pixel value of the viewpoint corresponding to the distance DL.
If the number of predetermined desired viewpoint positions is greater than 1, the pixel rendering is repeated for each predetermined desired viewpoint position.
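The blending formula above can be written directly as code. A minimal sketch, assuming D = DL + DR and ignoring the occlusion and hole-filling that a full depth-image-based renderer would need:

```python
import numpy as np

def interpolate_view(left, right, dl, dr):
    """Blend two viewpoint images into an intermediate view.

    Implements Pixel = Pixel(L)*DR/D + Pixel(R)*DL/D with D = DL + DR:
    each source view is weighted by the distance on the opposite side,
    so the view nearer to the desired position contributes more.
    """
    d = dl + dr
    return (np.asarray(left, dtype=float) * dr
            + np.asarray(right, dtype=float) * dl) / d
```

At the midpoint (dl = dr) the result is the plain average of the two views; as the desired position approaches one viewpoint, the output converges to that viewpoint's image.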
Alternatively, if the number of viewpoints n of the input video is greater than 1, the images of N-n viewpoints may also be rendered from the images of the n viewpoints and the disparity maps of the video, forming with the n viewpoint images the N viewpoint images for naked-eye stereoscopic display. The rendering method is the same as the depth-map-based rendering.
Step S107: perform pixel arrangement on the N viewpoint images to obtain the N-viewpoint image suited to the predetermined naked-eye stereoscopic display device.
Specifically, the pixel arrangement of the N viewpoint images may use the method described in patent CN200910088902.4, or another method.
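To make the idea of pixel arrangement concrete, here is a deliberately simple column-interleaving sketch for grayscale views: column x of the output is taken from view (x mod N). Real lenticular or parallax-barrier panels use finer, device-specific subpixel mappings (such as the method of CN200910088902.4 cited above), so this is an assumption-laden illustration, not the patent's arrangement:

```python
import numpy as np

def interleave_views(views):
    """Column-interleave N same-sized grayscale viewpoint images.

    Each output column is filled from one view in round-robin order,
    the simplest possible stand-in for the pixel arrangement of step S107.
    """
    n = len(views)
    stackv = np.stack(views)        # shape (N, H, W)
    h, w = stackv.shape[1:3]
    cols = np.arange(w) % n         # which view feeds each column
    out = np.empty((h, w), dtype=stackv.dtype)
    for v in range(n):
        out[:, cols == v] = stackv[v][:, cols == v]
    return out
```

With two views, even columns come from view 0 and odd columns from view 1, which is the pattern a simple vertical parallax barrier would separate back into two eyes.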
The method according to embodiments of the invention has at least the following beneficial effects:
(1) It converts input video into the multi-viewpoint video required by naked-eye stereoscopic display devices, achieving a good naked-eye stereoscopic viewing effect.
(2) It avoids the high production cost of naked-eye 3D display content while shortening the production cycle.
(3) It allows the large body of existing video material to be shown easily on naked-eye 3D display devices.
Although embodiments of the invention have been shown and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions and variations can be made to these embodiments without departing from the principle and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (7)

1. A video stereo vision conversion method, characterized by comprising the following steps:
S1: obtaining the number of viewpoints n of an input video;
S2: obtaining the number of viewpoints N required for naked-eye stereoscopic display, where n < N;
S3: obtaining the key frames of each viewpoint of the input video, computing the scene depth of the key frames of each viewpoint, and obtaining the depth maps of the key frames of each viewpoint;
S4: obtaining, from two adjacent key frames of each viewpoint of the input video and the depth maps of the two adjacent key frames, the depth maps of the non-key frames between the two adjacent key frames;
S5: repeating step S4 to obtain the depth maps of all non-key frames of each viewpoint of the input video;
S6: rendering, from the images of the input video and the depth maps of each viewpoint, the images of N-n viewpoints, which together with the images of the input video form N viewpoint images;
S7: performing pixel arrangement on the N viewpoint images to obtain an N-viewpoint image suited to a predetermined naked-eye stereoscopic display device.
2. The video stereo vision conversion method according to claim 1, characterized in that step S1 further comprises:
S11: determining the number of files of the input video;
S12: if the number of files of the input video is not 1, the number of viewpoints n of the input video being the number of files;
S13: if the number of files of the input video is 1, further determining the number of segments of the file, the number of viewpoints n being the number of segments.
3. The video stereo vision conversion method according to claim 1, characterized in that step S3 further comprises:
if the number of viewpoints n of the input video is 1, obtaining the key frames of the viewpoint and computing their scene depth to obtain their depth maps;
if the number of viewpoints n of the input video is 2, obtaining the key frames of each viewpoint, computing the disparity of the input video, and obtaining the depth maps of the key frames of each viewpoint from the conversion relation between disparity and scene depth.
4. The video stereo vision conversion method according to claim 1, characterized in that step S4 further comprises:
S41: from two adjacent key frames K_n and K_{n+1} of each viewpoint and the corresponding depth maps DK_n and DK_{n+1}, obtaining the non-key frames F_i (i = 1, 2, ..., t) between K_n and K_{n+1}, where t is the number of non-key frames;
S42: computing the optical flow maps between K_n and F_1, between F_1 and F_2, ..., and between F_{t-1} and F_t, and, guided by the optical flow maps, copying the pixels of DK_n into the non-key frames F_i (i = 1, 2, ..., t) to obtain first depth maps DF_i^(1) (i = 1, 2, ..., t);
S43: median-filtering the first depth maps DF_i^(1) to obtain third depth maps DF_i^(3) (i = 1, 2, ..., t);
S44: computing the optical flow maps between K_{n+1} and F_t, between F_t and F_{t-1}, ..., and between F_2 and F_1, and, guided by the optical flow maps, copying the pixels of DK_{n+1} into the non-key frames F_i (i = t, t-1, ..., 1) to obtain second depth maps DF_i^(2) (i = 1, 2, ..., t);
S45: median-filtering the second depth maps DF_i^(2) to obtain fourth depth maps DF_i^(4) (i = 1, 2, ..., t);
S46: averaging the corresponding pixels of the third depth maps DF_i^(3) and the fourth depth maps DF_i^(4) to obtain the depth maps DF_i (i = 1, 2, ..., t) of the non-key frames.
5. The video stereo vision conversion method according to claim 1, characterized in that step S6 further comprises:
if the number of viewpoints n of the input video is 1, obtaining the image and depth map of the viewpoint and performing pixel rendering for the predetermined desired viewpoint positions;
if the number of viewpoints n of the input video is 2, obtaining the image and depth map of each viewpoint, and obtaining the distances DL and DR from each viewpoint position to the predetermined desired viewpoint position and the distance D between the viewpoint positions, the pixel value at the desired viewpoint position being computed as
Pixel = Pixel(L) * DR/D + Pixel(R) * DL/D,
wherein Pixel(L) is the pixel value of the viewpoint corresponding to the distance DR from the predetermined desired viewpoint position, and Pixel(R) is the pixel value of the viewpoint corresponding to the distance DL.
6. The video stereo vision conversion method according to claim 5, characterized by further comprising:
if the number of predetermined desired viewpoint positions is greater than 1, repeating the pixel rendering for each predetermined desired viewpoint position.
7. The video stereo vision conversion method according to claim 1, characterized in that step S6 further comprises:
if the number of viewpoints n of the input video is greater than 1, rendering the images of N-n viewpoints from the images of the n viewpoints of the video and the disparity maps of the video, the rendered images together with the images of the n viewpoints forming the N viewpoint images for naked-eye stereoscopic display.
CN2011102762754A 2011-09-16 2011-09-16 Video stereo vision conversion method Active CN102368824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011102762754A CN102368824B (en) 2011-09-16 2011-09-16 Video stereo vision conversion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011102762754A CN102368824B (en) 2011-09-16 2011-09-16 Video stereo vision conversion method

Publications (2)

Publication Number Publication Date
CN102368824A true CN102368824A (en) 2012-03-07
CN102368824B CN102368824B (en) 2013-11-20

Family

ID=45761373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011102762754A Active CN102368824B (en) 2011-09-16 2011-09-16 Video stereo vision conversion method

Country Status (1)

Country Link
CN (1) CN102368824B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177467A (en) * 2013-03-27 2013-06-26 四川长虹电器股份有限公司 Method for creating naked eye 3D (three-dimensional) subtitles by using Direct 3D technology
CN105100773A (en) * 2015-07-20 2015-11-25 清华大学 Three-dimensional video manufacturing method, three-dimensional view manufacturing method and manufacturing system
CN110198411A (en) * 2019-05-31 2019-09-03 努比亚技术有限公司 Depth of field control method, equipment and computer readable storage medium during a kind of video capture

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070052794A1 (en) * 2005-09-03 2007-03-08 Samsung Electronics Co., Ltd. 3D image processing apparatus and method
CN101087437A (en) * 2007-06-21 2007-12-12 清华大学 Method for plane video converting to 3D video based on optical stream field
CN101400001A (en) * 2008-11-03 2009-04-01 清华大学 Generation method and system for video frame depth chart
CN101398855A (en) * 2008-10-24 2009-04-01 清华大学 Video key frame extracting method and system
CN101499085A (en) * 2008-12-16 2009-08-05 北京大学 Method and apparatus for fast extracting key frame
CN101610424A (en) * 2009-07-13 2009-12-23 清华大学 Method and device for synthesizing stereo images
CN101765022A (en) * 2010-01-22 2010-06-30 浙江大学 Depth representing method based on light stream and image segmentation

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103177467A (en) * 2013-03-27 2013-06-26 四川长虹电器股份有限公司 Method for creating naked eye 3D (three-dimensional) subtitles by using Direct 3D technology
CN105100773A (en) * 2015-07-20 2015-11-25 清华大学 Three-dimensional video manufacturing method, three-dimensional view manufacturing method and manufacturing system
CN105100773B (en) * 2015-07-20 2017-07-28 清华大学 Three-dimensional video manufacturing method, three-dimensional view manufacturing method and manufacturing system
CN110198411A (en) * 2019-05-31 2019-09-03 努比亚技术有限公司 Depth of field control method, equipment and computer readable storage medium during a kind of video capture
CN110198411B (en) * 2019-05-31 2021-11-02 努比亚技术有限公司 Depth of field control method and device in video shooting process and computer readable storage medium

Also Published As

Publication number Publication date
CN102368824B (en) 2013-11-20

Similar Documents

Publication Publication Date Title
CN102710956B (en) Naked 3D track display method and equipment
US8488869B2 (en) Image processing method and apparatus
CN107396087B (en) Naked eye three-dimensional display device and its control method
CN102170577B (en) Method and system for processing video images
US20120293489A1 (en) Nonlinear depth remapping system and method thereof
US9154765B2 (en) Image processing device and method, and stereoscopic image display device
US20120044330A1 (en) Stereoscopic video display apparatus and stereoscopic video display method
CN103984108B Naked-eye stereoscopic display method and device based on vibrating grating
US20140111627A1 (en) Multi-viewpoint image generation device and multi-viewpoint image generation method
US20130250053A1 (en) System and method for real time 2d to 3d conversion of video in a digital camera
US20110181593A1 (en) Image processing apparatus, 3d display apparatus, and image processing method
US8421850B2 (en) Image processing apparatus, image processing method and image display apparatus
JP2013013056A (en) Naked eye stereoscopic viewing video data generating method
CN102256143A (en) Video processing apparatus and method
Kim et al. Depth adjustment for stereoscopic image using visual fatigue prediction and depth-based view synthesis
EP2582144A2 (en) Image processing method and image display device according to the method
CN102547350A (en) Method for synthesizing virtual viewpoints based on gradient optical flow algorithm and three-dimensional display device
CN102223564A (en) 2D/3D switchable and field-depth adjustable display module
CN102368824B (en) Video stereo vision conversion method
WO2008030011A1 (en) File format for encoded stereoscopic image/video data
CN102340678A (en) Stereoscopic display device with adjustable field depth and field depth adjusting method
CN102457756A (en) Structure and method for 3D (Three Dimensional) video monitoring system capable of watching video in naked eye manner
CN105191300B (en) Image processing method and image processing apparatus
CN102111637A (en) Stereoscopic video depth map generation method and device
CN102447863A (en) Multi-viewpoint three-dimensional video subtitle processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant