CN101257641A - Method for converting plane video into stereoscopic video based on human-machine interaction - Google Patents

Method for converting plane video into stereoscopic video based on human-machine interaction

Info

Publication number
CN101257641A
Authority
CN
China
Prior art keywords
frame
point
old
characteristic point
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008101020331A
Other languages
Chinese (zh)
Inventor
戴琼海 (Qionghai Dai)
李涛 (Tao Li)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CNA2008101020331A priority Critical patent/CN101257641A/en
Publication of CN101257641A publication Critical patent/CN101257641A/en
Pending legal-status Critical Current

Abstract

The present invention relates to a human-machine interaction based method for converting plane video into stereoscopic video, belonging to the field of computer multimedia technology. The method comprises: segmenting the foreground objects of the first frame of a plane video sequence and assigning each a depth value, assigning a depth value to the background region, and thereby generating the depth map of the first frame; selecting a plurality of feature points on the contours of the foreground objects segmented from the first frame using the KLT method and tracking them to obtain their positions in each subsequent frame; generating the closed contour curves of the objects in each subsequent frame using a contour restoration method; generating the depth map of each subsequent frame; taking each frame of the original sequence as a left view, synthesizing a right view from the left view and its corresponding depth map, and combining the two views into a stereoscopic video frame; all stereoscopic video frames together form the stereoscopic video sequence. The advantage of the invention is that an accurate depth map is obtained for each frame, so that the conversion from plane video to stereoscopic video is realized while the user's workload is reduced as far as possible.

Description

Method for converting plane video into stereoscopic video based on human-machine interaction
Technical field
The invention belongs to the technical field of computer multimedia, and in particular relates to a technique for converting ordinary plane (2D) video into stereoscopic (3D) video.
Technical background
Anyone who has seen a 3D film retains a deep impression of its sense of reality; that feeling of being personally on the scene is hard to forget. Nowadays stereoscopic video is being promoted and favored worldwide across many industries. For example, large shopping centers and entertainment venues in some major Chinese cities have begun to install stereoscopic players that play pre-made advertisements and other promotional videos, whose strong visual impact is very effective at attracting spectators' attention.
Plane video is a monocular video sequence, whereas the 3D films mentioned above are stereoscopic video. So-called stereoscopic video is a binocular video sequence, i.e. it comprises two video sequences, a left-view sequence and a right-view sequence, presented respectively to the viewer's left eye and right eye. The world appears three-dimensional to a person because the left eye and the right eye view it from slightly different angles, so that a parallax (disparity) exists between the two retinal images; likewise, the two corresponding frames of a stereoscopic video exhibit parallax, which is why watching stereoscopic video produces an immersive sense of depth.
At present, producing film sources for stereoscopic video is one of the main difficulties in the stereoscopic video field. Since directly capturing stereoscopic video is difficult and costly, techniques for converting plane video into stereoscopic video have received more and more attention.
The above-mentioned parallax (disparity) refers to the horizontal displacement between the two pixels of the left view and the right view that correspond to the same world point. A theorem of the computer vision field states that the parallax of a point is inversely proportional to the depth of its corresponding world point; that is, the farther a point lies from the viewpoint, the smaller its parallax value, and the parallax of a point at infinity is 0. The depth values of all the pixels of an image form its depth map. Given a left view (or right view) and a depth map, the corresponding right view (or left view) can be obtained.
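This inverse relation can be written explicitly using the standard parallel-camera stereo geometry (a textbook result, not spelled out in the patent itself; f and B below are assumptions of this note, standing for the focal length and the baseline between the two viewpoints):

d = f·B / Z

where d is the parallax (disparity) of a point and Z the depth of its world point; d shrinks as Z grows, and tends to 0 as Z tends to infinity.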
Converting plane video into stereoscopic video therefore means obtaining the depth information of every frame of the original sequence (the plane video) to form a depth sequence, and then processing the original sequence together with the depth sequence to obtain new video sequences, thereby obtaining the stereoscopic video.
In patent application No. 2007101176542, entitled "Method for converting plane video into stereoscopic video based on optical flow field", the present applicant disclosed an automatic method for converting plane video into stereoscopic video that computes depth maps mainly from the inter-frame motion information of the video. The advantage of that method is its simplicity, and it realizes the conversion from plane video to stereoscopic video automatically.
However, such automatic methods based on inter-frame motion still have the following problems:
1. The depth information obtained by analyzing the image motion between frames is really only "pseudo depth" rather than true depth information, so the resulting depth map often disagrees with the actual scene.
2. Since the depth information is derived from inter-frame motion, no depth information at all can be obtained for the parts of the image that do not move.
There are also automatic methods that obtain the depth information of still images from depth cues such as occlusion, shading and texture, but in general these methods generalize poorly.
Therefore, whether an automatic conversion method is based on the depth cues of still images (occlusion, shading, texture, etc.) or on inter-frame image motion, the depth information it obtains often disagrees with the actual scene.
The related techniques adopted in the method of the present invention are described below:
1. KLT method: KLT is a mature and robust feature point detection and tracking method in the computer vision field (reference: Carlo Tomasi and Takeo Kanade, "Detection and Tracking of Point Features", Technical Report CMU-CS-91-132, Carnegie Mellon University). It mainly comprises the selection and the tracking of feature points. A feature point is an N × N block of pixels (also called a window) in the image, where N is generally odd. A window can be selected as a feature point by the KLT method only if it meets certain conditions, chiefly that the pixel values it contains vary over a sufficiently large range. After the feature points have been selected, KLT automatically tracks their change of position in the subsequent frames; the tracking algorithm is mainly based on minimizing the SSD (the sum of squared differences of the corresponding pixel values of the two windows).
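As an illustration only (not the patent's own implementation), the selection and tracking described above can be reproduced with OpenCV: goodFeaturesToTrack implements the Shi-Tomasi selection criterion that grew out of the KLT work, and calcOpticalFlowPyrLK is a pyramidal KLT tracker. The file names and maxCorners value are placeholders; the 7 × 7 window and 10-pixel minimum spacing anticipate the values used in the embodiment below:

```python
import cv2

# Two consecutive frames as grayscale images (file names are placeholders).
prev = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_002.png", cv2.IMREAD_GRAYSCALE)

# Select windows whose pixel values vary enough to be trackable
# (Shi-Tomasi "good features to track", the KLT selection rule).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=10)

# Track the selected points into the next frame with pyramidal
# Lucas-Kanade (KLT), which minimizes the SSD between the windows.
next_pts, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, pts, None, winSize=(7, 7), maxLevel=3)

tracked = next_pts[status.flatten() == 1]
print(f"{len(tracked)} of {len(pts)} points tracked into the next frame")
```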
2. B-spline method: a method for fitting a smooth curve through discrete points.
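As an illustration (a tool choice of this note, not one named by the patent), such a fit can be performed with SciPy's parametric B-spline routines; per=1 requests a periodic spline, which closes the curve, and the contour points below are made up:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Ordered contour points (illustrative values only).
x = np.array([0.0, 2.0, 4.0, 5.0, 4.0, 2.0, 0.0, -1.0])
y = np.array([0.0, -1.0, 0.0, 2.0, 4.0, 5.0, 4.0, 2.0])

# Fit a periodic (closed) cubic B-spline through the points and
# resample it densely to obtain a smooth closed curve.
tck, u = splprep([x, y], s=0.0, per=1)
xs, ys = splev(np.linspace(0.0, 1.0, 200), tck)
```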
3. Warping technique: the basic principle of this technique is that, since the depth map represents the parallax of corresponding points in the left and right views, each point of the left view can simply be shifted horizontally according to the depth map to obtain the corresponding point of the right view.
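A minimal sketch of such warping under stated assumptions: the mapping from 0-255 depth values to pixel disparities is linear and capped at max_disparity (neither is specified by the patent), and occluded pixels simply leave holes that a practical system would fill by interpolation:

```python
import numpy as np

def warp_left_to_right(left: np.ndarray, depth: np.ndarray,
                       max_disparity: int = 16) -> np.ndarray:
    """Shift each left-view pixel horizontally by a disparity derived
    from its depth value to synthesize the right view."""
    h, w = depth.shape
    right = np.zeros_like(left)
    # Assumed mapping: larger depth value = nearer = larger shift.
    disparity = (depth.astype(np.float32) / 255.0 * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            xr = x - disparity[y, x]
            if 0 <= xr < w:
                right[y, xr] = left[y, x]
    return right
```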
Summary of the invention
The object of the present invention is to overcome the shortcomings of the prior art by proposing a method for converting plane video into stereoscopic video based on human-machine interaction. The method is realized through interaction between the user and the computer; it obtains an accurate depth map for every frame of a video sequence, thereby realizing a high-quality conversion from plane video to stereoscopic video while reducing the user's workload as far as possible.
The method for converting plane video into stereoscopic video based on human-machine interaction proposed by the present invention is characterized by comprising the following steps:
1) Perform object segmentation on the first frame of the plane video sequence to be processed: segment the foreground objects by drawing and assign each segmented object a depth value representing its depth, while assigning the background region a depth value representing the depth of the background, thereby generating the depth map of the first frame;
2) Select a plurality of feature points with the KLT method on the contours of the foreground objects segmented in the first frame, and track the feature points selected in the first frame through each subsequent frame to obtain their positions in the subsequent frames;
3) From the positions of the feature points in each subsequent frame, generate the closed contour curves of the objects in that frame with a contour restoration method, thereby recovering the object contours corresponding to these feature points in each subsequent frame;
4) Generate the depth map of each subsequent frame from the depth values of the objects and the background in the first frame;
5) Take each frame of the original sequence as a left view, synthesize a right view from the left view and its corresponding depth map, and then synthesize a stereoscopic video frame from the left and right views; all stereoscopic video frames form the stereoscopic video sequence.
The concrete steps of generating the depth map of the first frame in step 1) may comprise:
11) after the first frame of the plane video sequence to be processed is displayed, picking a point every short distance along the contour of each foreground object to be segmented in the frame, and connecting the points in order into the closed outline of the object;
12) assigning each object a depth value by entering a value, and entering a specified depth value for the background region, i.e. the whole frame outside all objects;
13) generating the depth map of the first frame from the depth values of all objects and the background region.
The concrete steps of generating the closed contour curves of the objects in a frame with the contour restoration method in step 3) may comprise:
31) in the set S of feature points of an object in a subsequent frame, choosing an arbitrary point as the initial feature point P1, letting S_old be the set of feature points already chosen, and putting P1 into S_old;
32) in the set S, finding the point nearest to P1 as the feature point P2, and putting P2 into S_old;
33) in the set S, finding the feature point P3 that is nearest to P2 and does not belong to S_old (P3 ∉ S_old), and putting P3 into S_old;
34) obtaining the feature points P4, P5, P6, ..., PN in the same way as P3 was found, putting each into S_old in turn, and stopping the search when either of the following termination conditions is satisfied:
termination condition 1: after PN is put into S_old, S = S_old;
or termination condition 2: there is still a point P in S satisfying P ∉ S_old, but |PNP| > T1, where T1 is a distance threshold;
35) connecting the points found in order by B-spline interpolation into the closed contour curve of the object;
36) repeating steps 31)-35) to obtain the closed contour curve of each object in each subsequent frame.
The value range of the distance threshold T1 may be 40-60 pixel lengths.
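Steps 31)-36) amount to a nearest-neighbor chaining of the tracked points. A minimal sketch under the patent's conventions (the function name and the array format are mine; the default threshold of 50 pixel lengths falls in the 40-60 range given above):

```python
import numpy as np

def restore_contour(points: np.ndarray, t1: float = 50.0) -> np.ndarray:
    """Order an object's tracked feature points into a contour chain.

    points: (N, 2) array of feature point positions (the set S).
    t1: distance threshold (40-60 pixel lengths per the patent).
    """
    remaining = list(range(len(points)))
    chain = [remaining.pop(0)]            # arbitrary initial point P1
    while remaining:                      # condition 1: stop when S = S_old
        last = points[chain[-1]]
        dists = [float(np.linalg.norm(points[i] - last)) for i in remaining]
        k = int(np.argmin(dists))
        if dists[k] > t1:                 # condition 2: nearest point too far,
            break                         # a stray point is abandoned
        chain.append(remaining.pop(k))
    return points[chain]
```

The ordered points returned here would then be joined by the B-spline interpolation of step 35) into the smooth closed contour curve.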
Features and beneficial effects of the present invention:
The method of the present invention adopts a semi-automatic, human-machine interactive approach: an accurate depth map of the first frame of the plane video sequence is obtained through the user's operations, the depth maps of the subsequent frames are then generated automatically and accurately, and the stereoscopic video sequence is finally synthesized, realizing a high-quality conversion from plane video to stereoscopic video.
The purpose of image segmentation is to separate out the regions or objects that lie at the same depth layer of the image, a necessary step for obtaining an accurate depth map. Existing automatic segmentation methods, whether for still images or for moving images, cannot segment an image robustly and accurately, and without accurate segmentation an accurate depth map cannot be obtained. In the method of the present invention the user operates the computer to segment the image and specify the depth values, so the best segmentation and accurate depth values are obtained, laying a good foundation for a high-quality conversion from plane video to stereoscopic video.
Description of drawings
Fig. 1 is the overall flow diagram of the method of the present invention.
Fig. 2 is the first frame image of the plane video sequence to be processed in the embodiment of the present invention.
Fig. 3 shows the result of object segmentation and feature point selection on the first frame.
Fig. 4 is the generated depth map of the first frame.
Fig. 5 is a schematic diagram of the contour restoration method of the embodiment.
Fig. 6 shows the depth maps of frames 2-6 generated by the embodiment.
Embodiment
The method for converting plane video into stereoscopic video based on user interaction proposed by the present invention is described in detail below with reference to the accompanying drawings and an embodiment.
As shown in Fig. 1, the method comprises the following steps:
1) Object segmentation is performed on the first frame of the plane video sequence to be processed: the foreground objects are segmented by drawing, each segmented object is assigned a depth value representing its depth, and the background region is likewise assigned a depth value representing the depth of the background, thereby generating the depth map of the first frame;
The concrete steps of generating the depth map of the first frame are:
11) After the computer screen displays the first frame of the plane video sequence to be processed, the user picks a point every short distance along the contour of each foreground object to be segmented (the spacing of the points is decided case by case: the more complicated the curve, the more densely the points are picked), and the points are connected in order into the closed outline of the object;
12) The user assigns each object a depth value by entering a value (the range of the depth value is 0-255; the larger the depth value, the nearer to the observer), and enters a specified depth value for the background region, i.e. the whole frame outside all objects;
13) The depth map of the first frame is generated from the depth values of all objects and the background region, as sketched below;
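As an illustration of step 13), the closed outlines and the entered depth values can be rasterized into the first-frame depth map, sketched here with OpenCV's polygon fill; the frame size and outline coordinates are placeholders, while the depth values 200, 150 and 50 echo those used in the embodiment below:

```python
import numpy as np
import cv2

h, w = 480, 640                                  # frame size (placeholder)
depth_map = np.full((h, w), 50, dtype=np.uint8)  # background depth value

# One closed outline per foreground object (placeholder coordinates),
# paired with the depth value entered by the user for that object.
objects = [
    (np.array([[100, 120], [180, 100], [220, 300], [90, 320]], np.int32), 200),
    (np.array([[400, 150], [480, 160], [470, 330], [390, 310]], np.int32), 150),
]
for outline, depth_value in objects:
    cv2.fillPoly(depth_map, [outline], int(depth_value))
```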
2) A plurality of feature points are selected with the KLT method on the contours of the foreground objects segmented in the first frame, the feature points selected in the first frame are tracked through each subsequent frame, and their positions in the subsequent frames are obtained;
3) The contour restoration method is used to generate the closed contour curves of the objects in each subsequent frame from the positions of the feature points in that frame, thereby recovering the object contours corresponding to these feature points in each subsequent frame;
The concrete steps of generating the closed contour curves of the objects in a frame with the contour restoration method are:
31) In the set S of feature points of an object in a subsequent frame, an arbitrary point is chosen as the initial feature point P1; let S_old be the set of feature points already chosen, and put P1 into S_old;
32) In the set S, find the point nearest to P1 as the feature point P2, and put P2 into S_old;
33) In the set S, find the feature point P3 that is nearest to P2 and does not belong to S_old (P3 ∉ S_old), and put P3 into S_old;
34) Obtain the feature points P4, P5, P6, ..., PN in the same way as P3 was found, putting each into S_old in turn; stop searching when either of the following termination conditions is satisfied:
Termination condition 1: after PN is put into S_old, S = S_old;
Or termination condition 2: there is still a point P in S satisfying P ∉ S_old, but |PNP| > T1, where T1 is a distance threshold whose value range is 40-60 pixel lengths;
35) Connect the points found in order by B-spline interpolation into the closed contour curve of the object;
36) Repeat steps 31)-35) to obtain the closed contour curve of each object in each subsequent frame;
4) The depth map of each subsequent frame is generated from the depth values of the objects and the background in the first frame;
The method of generating the depth map of each subsequent frame is:
The area of each object tracked into the current frame during the tracking of step 2) is compared with the area of the corresponding object in the first frame, and the object depth value given in the first frame is revised according to the change in area; the revised depth value is taken as the depth value of the object in the current frame. (A change in an object's depth is bound to change the area of its corresponding region R: if the area grows, the object has come nearer to the observer; if it shrinks, it has moved farther away. The depth value can therefore be corrected according to the change in the area of R, as sketched below.)
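The patent does not give the exact correction formula, so the following sketch assumes a simple model: the apparent linear size of an object scales inversely with its distance, so its "nearness" (the 0-255 depth value, larger = nearer) is scaled by the square root of the area ratio:

```python
import numpy as np

def revise_depth(depth_first: float, area_first: float,
                 area_current: float) -> float:
    """Revise an object's first-frame depth value from its area change.

    Larger depth value = nearer to the observer (patent convention).
    Assumed model: linear size ~ 1/distance, so nearness scales with
    the square root of the area ratio; a growing area raises the value.
    """
    scale = float(np.sqrt(area_current / area_first))
    return float(np.clip(depth_first * scale, 0.0, 255.0))
```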
5) Each frame of the original sequence is taken as a left view, a right view is synthesized from the left view and the depth map, and a stereoscopic video frame is then synthesized from the left and right views; all stereoscopic video frames form the stereoscopic video sequence (the concrete methods are existing conventional ones).
An embodiment of the method of the present invention comprises the following steps:
1) Object segmentation is performed on the first frame of the plane video sequence to be processed (the total number of frames M in this embodiment is 20): the foreground objects are segmented by drawing and each is assigned a depth value representing its depth, the background region is likewise assigned a depth value representing the depth of the background, and the depth map of the first frame is thereby generated. The concrete method is:
The computer screen displays the first frame of the plane video sequence to be processed, shown in Fig. 2. It can be seen that there are mainly two foreground objects (N = 2), two persons; the person nearer to the observer is moving left, and the farther person is moving right. The user picks a point every short distance along the contour of each foreground person to be segmented; since the contours of the head and the feet are complicated, the points there are picked relatively densely. The closed outlines of the two persons are thus obtained. The user then assigns each object a depth value by entering a value (the range of the depth value is 0-255; the larger the depth value, the nearer to the observer): the nearer person's depth value is 200, the farther person's is 150, and the background is given a depth value of 50. The depth map of the first frame is thus generated;
2) A plurality of feature points are selected with the KLT method on the contours of the foreground objects segmented in the first frame, the feature points selected in the first frame are tracked through each subsequent frame, and their positions in the subsequent frames are obtained. The concrete method is:
For each segmented region R, feature points are selected automatically near the outline of R with the KLT algorithm. The feature window of this embodiment is set to a block of 7 × 7 pixels, and the minimum spacing between feature points (the lower bound on the distance between the centers of two feature windows) is set to 10 pixels. Fig. 3 shows the result of selecting feature points on the contours of the two foreground persons in the first frame; the dots are the feature points chosen automatically by the KLT method.
The KLT algorithm is then applied to the subsequent frames for automatic feature point tracking, obtaining the positions of the feature points in each subsequent frame;
3) The contour restoration method is used to generate the closed contour curves in each subsequent frame from the positions of the feature points in that frame:
The contour restoration proposed by the present invention connects the discrete tracked feature points with a smooth curve, thereby recovering the region contours corresponding to these feature points. The concrete restoration method, illustrated by the schematic of Fig. 5, is:
31) In the set S of feature points of an object in a subsequent frame, an arbitrary point A is chosen as the initial point P1; let S_old be the set of points already chosen; at this moment S_old contains only the point P1;
32) The point in S nearest to A is found as P2, and P2 is put into S_old;
33) The point in S that is nearest to P2 and does not belong to S_old is found as P3 (P3 ∉ S_old), and P3 is put into S_old;
34) P4, P5, P6, ..., PN are obtained in the same way as P3 was found; the search stops when either of the following termination conditions is satisfied. Termination condition 1: after PN is put into S_old, S = S_old;
Or termination condition 2: there is still a point P satisfying P ∉ S_old, but |PNP| > T1 (the distance threshold, taken here as 50 pixel lengths).
35) The points found are connected by B-spline interpolation into a smooth contour curve.
The second termination condition is proposed because the following situation may arise, as shown in Fig. 5. Starting from the arbitrary point "point 1", the method finds "point 2", "point 3", "point 4", "point 5", "point 6" and "point 7" in turn; in this process "point 8", which deviates from the contour, is passed over. After "point 7" is found, "point 8" is still a new point (i.e. it belongs to S but not to S_old). With termination condition 1 alone, "point 8" would be taken as the last point, which is undesirable: "point 8" is a bad point and should not be selected into the contour. The second termination condition prevents "point 8" from being selected, so that "point 7" is the last selected point. This situation in fact occurs frequently in practice.
4) The depth map of each subsequent frame is generated from the depth values of the objects and the background in the first frame;
The concrete method is: the area of each region tracked into the current frame is compared with the area of the corresponding region in the first frame (the computation and comparison of areas are routine techniques), and the depth value given by the user in the first frame is revised according to the change in area; the revised depth value is taken as the depth value of the object in the current frame (if the area of a region R grows, the object has come nearer to the observer; if it shrinks, it has moved farther away, so the depth value can be corrected according to the change in the area of R). In this embodiment the persons' depths remain unchanged. The original images of frames 2-6 and the generated depth maps are shown in Fig. 6, where (a)-(e) are the original images and (A)-(E) the depth maps.
5) Each frame of the original sequence is taken as a left view, a right view is synthesized from the left view and the depth map, and a stereoscopic video frame is then synthesized from the left and right views; all stereoscopic video frames form the stereoscopic video sequence. These steps are realized by existing conventional methods: the warping technique synthesizes the right view from the left view and the depth map, and the stereoscopic view is then obtained by interleaving the columns of the left and right views (the odd columns of the stereoscopic view are the odd columns of the left view, and its even columns are the even columns of the right view), as sketched below.
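A minimal sketch of this column interleaving (note that the patent counts columns from 1, so its "odd columns" are the even indices of a 0-based array):

```python
import numpy as np

def interleave_columns(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Build a column-interleaved stereoscopic frame.

    Patent convention (1-based): odd columns come from the left view
    and even columns from the right view, i.e. even 0-based indices
    from the left view and odd 0-based indices from the right view.
    """
    stereo = np.empty_like(left)
    stereo[:, 0::2] = left[:, 0::2]
    stereo[:, 1::2] = right[:, 1::2]
    return stereo
```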

Claims (4)

1. A method for converting plane video into stereoscopic video based on human-machine interaction, the method comprising the following steps:
1) performing object segmentation on the first frame of a plane video sequence to be processed, segmenting the foreground objects by drawing and assigning each segmented object a depth value representing its depth, while assigning the background region a depth value representing the depth of the background, thereby generating the depth map of the first frame;
2) selecting a plurality of feature points with the KLT method on the contours of the foreground objects segmented in the first frame, and tracking the feature points selected in the first frame through each subsequent frame to obtain their positions in the subsequent frames;
3) generating the closed contour curves of the objects in each subsequent frame from the positions of the feature points in that frame with a contour restoration method, thereby recovering the object contours corresponding to these feature points in each subsequent frame;
4) generating the depth map of each subsequent frame from the depth values of the objects and the background in the first frame;
5) taking each frame of the original sequence as a left view, synthesizing a right view from the left view and its corresponding depth map, and synthesizing a stereoscopic video frame from the left and right views; all stereoscopic video frames forming the stereoscopic video sequence.
2. The method according to claim 1, characterized in that the concrete steps of generating the depth map of the first frame in step 1) comprise:
11) after the first frame of the plane video sequence to be processed is displayed, picking a point every short distance along the contour of each foreground object to be segmented in the frame, and connecting the points in order into the closed outline of the object;
12) assigning each object a depth value by entering a value, and entering a specified depth value for the background region, i.e. the whole frame outside all objects;
13) generating the depth map of the first frame from the depth values of all objects and the background region.
3. The method according to claim 1, characterized in that the concrete steps of generating the closed contour curves of the objects in a frame with the contour restoration method in step 3) comprise:
31) in the set S of feature points of an object in a subsequent frame, choosing an arbitrary point as the initial feature point P1, letting S_old be the set of feature points already chosen, and putting P1 into S_old;
32) in the set S, finding the point nearest to P1 as the feature point P2, and putting P2 into S_old;
33) in the set S, finding the feature point P3 that is nearest to P2 and does not belong to S_old (P3 ∉ S_old), and putting P3 into S_old;
34) obtaining the feature points P4, P5, P6, ..., PN in the same way as P3 was found, putting each into S_old in turn, and stopping the search when either of the following termination conditions is satisfied:
termination condition 1: after PN is put into S_old, S = S_old;
or termination condition 2: there is still a point P in S satisfying P ∉ S_old, but |PNP| > T1, where T1 is a distance threshold;
35) connecting the points found in order by B-spline interpolation into the closed contour curve of the object;
36) repeating steps 31)-35) to obtain the closed contour curve of each object in each subsequent frame.
4. The method according to claim 3, characterized in that the value range of the distance threshold T1 is 40-60 pixel lengths.
CNA2008101020331A 2008-03-14 2008-03-14 Method for converting plane video into stereoscopic video based on human-machine interaction Pending CN101257641A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008101020331A CN101257641A (en) 2008-03-14 2008-03-14 Method for converting plane video into stereoscopic video based on human-machine interaction


Publications (1)

Publication Number Publication Date
CN101257641A true CN101257641A (en) 2008-09-03

Family

ID=39892052

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008101020331A Pending CN101257641A (en) 2008-03-14 2008-03-14 Method for converting plane video into stereoscopic video based on human-machine interaction

Country Status (1)

Country Link
CN (1) CN101257641A (en)

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102047669A (en) * 2008-06-02 2011-05-04 皇家飞利浦电子股份有限公司 Video signal with depth information
CN102047669B (en) * 2008-06-02 2013-12-18 皇家飞利浦电子股份有限公司 Video signal with depth information
CN101483788B (en) * 2009-01-20 2011-03-23 清华大学 Method and apparatus for converting plane video into tridimensional video
CN101815225B (en) * 2009-02-25 2014-07-30 三星电子株式会社 Method for generating depth map and device thereof
CN101815225A (en) * 2009-02-25 2010-08-25 三星电子株式会社 Method for generating depth map and device thereof
CN101540833B (en) * 2009-04-13 2011-04-13 浙江大学 Anti-interference real-time tracking method for profile of object
US10210382B2 (en) 2009-05-01 2019-02-19 Microsoft Technology Licensing, Llc Human body pose estimation
CN102549507B (en) * 2009-10-02 2014-08-20 皇家飞利浦电子股份有限公司 Selecting viewpoints for generating additional views in 3D video
CN102549507A (en) * 2009-10-02 2012-07-04 皇家飞利浦电子股份有限公司 Selecting viewpoints for generating additional views in 3D video
CN101917636A (en) * 2010-04-13 2010-12-15 上海易维视科技有限公司 Method and system for converting two-dimensional video of complex scene into three-dimensional video
CN105740839B (en) * 2010-05-31 2020-01-14 苹果公司 Analysis of three-dimensional scenes
CN105740839A (en) * 2010-05-31 2016-07-06 苹果公司 Analysis of three-dimensional scenes
CN102074018A (en) * 2010-12-22 2011-05-25 Tcl集团股份有限公司 Depth information-based contour tracing method
CN102074018B (en) * 2010-12-22 2013-03-20 Tcl集团股份有限公司 Depth information-based contour tracing method
CN102566790A (en) * 2010-12-28 2012-07-11 康佳集团股份有限公司 Method and system for realizing 3D (three-dimensional) mouse as well as 3D display device
CN102566790B (en) * 2010-12-28 2015-06-17 康佳集团股份有限公司 Method and system for realizing 3D (three-dimensional) mouse as well as 3D display device
CN102063725B (en) * 2010-12-30 2013-05-08 Tcl集团股份有限公司 Depth information-based multi-target tracking method
CN102063725A (en) * 2010-12-30 2011-05-18 Tcl集团股份有限公司 Depth information-based multi-target tracking method
CN102098527B (en) * 2011-01-28 2013-04-10 清华大学 Method and device for transforming two dimensions into three dimensions based on motion analysis
CN102098527A (en) * 2011-01-28 2011-06-15 清华大学 Method and device for transforming two dimensions into three dimensions based on motion analysis
CN102622762A (en) * 2011-01-31 2012-08-01 微软公司 Real-time camera tracking using depth maps
US9242171B2 (en) 2011-01-31 2016-01-26 Microsoft Technology Licensing, Llc Real-time camera tracking using depth maps
CN102622762B (en) * 2011-01-31 2014-07-23 微软公司 Real-time camera tracking using depth maps
US9619561B2 (en) 2011-02-14 2017-04-11 Microsoft Technology Licensing, Llc Change invariant scene recognition by an agent
CN102857772A (en) * 2011-06-29 2013-01-02 晨星软件研发(深圳)有限公司 Image processing method and image processing device
CN102857772B (en) * 2011-06-29 2015-11-11 晨星软件研发(深圳)有限公司 Image treatment method and image processor
CN102436680A (en) * 2011-08-19 2012-05-02 合肥鹏润图像科技有限公司 Method for making stereo image of digital photo
CN102622768B (en) * 2012-03-14 2014-04-09 清华大学 Depth-map gaining method of plane videos
CN102622768A (en) * 2012-03-14 2012-08-01 清华大学 Depth-map gaining method of plane videos
CN103369353A (en) * 2012-04-01 2013-10-23 兔将创意影业股份有限公司 Integrated 3D conversion device using web-based network
CN102695069A (en) * 2012-05-22 2012-09-26 山东大学 Depth propagation method in video conversion from two dimension to three dimension
CN102695069B (en) * 2012-05-22 2014-07-16 山东大学 Depth propagation method in video conversion from two dimension to three dimension
CN102761768A (en) * 2012-06-28 2012-10-31 中兴通讯股份有限公司 Method and device for realizing three-dimensional imaging
EP2852161A4 (en) * 2012-06-28 2015-06-10 Zte Corp Method and device for implementing stereo imaging
WO2014000663A1 (en) * 2012-06-28 2014-01-03 中兴通讯股份有限公司 Method and device for implementing stereo imaging
US11215711B2 (en) 2012-12-28 2022-01-04 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
US11710309B2 (en) 2013-02-22 2023-07-25 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
CN107111764A (en) * 2015-01-16 2017-08-29 高通股份有限公司 By the event of depth triggering of the object in the visual field of imaging device
CN108616745A (en) * 2016-12-12 2018-10-02 三维视觉科技有限公司 2D is from turn 3D method and systems
CN106998459A (en) * 2017-03-15 2017-08-01 河南师范大学 A kind of single camera stereoscopic image generation method of continuous vari-focus technology
CN108986154A (en) * 2017-05-31 2018-12-11 钰立微电子股份有限公司 Method and system for verifying quality of depth map corresponding to image acquisition device
CN109767467A (en) * 2019-01-22 2019-05-17 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN112102386A (en) * 2019-01-22 2020-12-18 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and computer readable storage medium
CN110378944A (en) * 2019-07-11 2019-10-25 Oppo广东移动通信有限公司 Depth map processing method, device and electronic equipment

Similar Documents

Publication Publication Date Title
CN101257641A (en) Method for converting plane video into stereoscopic video based on human-machine interaction
CN101902657B (en) Method for generating virtual multi-viewpoint images based on depth image layering
JP5692980B2 (en) Conversion method and apparatus using depth map generation
Cao et al. Semi-automatic 2D-to-3D conversion using disparity propagation
CN102903096B (en) Monocular video based object depth extraction method
EP2595116A1 (en) Method for generating depth maps for converting moving 2d images to 3d
CN102609974B (en) Virtual viewpoint image generation process on basis of depth map segmentation and rendering
CN101287142A (en) Method for converting flat video to tridimensional video based on bidirectional tracing and characteristic points correction
CN100539710C (en) Method based on the converting plane video into stereoscopic video of optical flow field
TWI496452B (en) Stereoscopic image system, stereoscopic image generating method, stereoscopic image adjusting apparatus and method thereof
CN101287143A (en) Method for converting flat video to tridimensional video based on real-time dialog between human and machine
CN104065946B (en) Based on the gap filling method of image sequence
CN104639933A (en) Real-time acquisition method and real-time acquisition system for depth maps of three-dimensional views
CN102892021A (en) New method for synthesizing virtual viewpoint image
CN112019828B (en) Method for converting 2D (two-dimensional) video into 3D video
CN106341676A (en) Super-pixel-based depth image preprocessing and depth hole filling method
CN103679739A (en) Virtual view generating method based on shielding region detection
CN102957936A (en) Virtual viewpoint generation method from video single viewpoint to multiple viewpoints
Hsu et al. Spatio-temporally consistent view synthesis from video-plus-depth data with global optimization
Wang et al. Block-based depth maps interpolation for efficient multiview content generation
CN103945206A (en) Three-dimensional picture synthesis system based on comparison between similar frames
CN110149508A (en) A kind of array of figure generation and complementing method based on one-dimensional integrated imaging system
CN102469322B (en) Image processing method for plane stereoscopic bodies
Caviedes et al. Real time 2D to 3D conversion: Technical and visual quality requirements
KR101754976B1 (en) Contents convert method for layered hologram and apparatu

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080903