CA2553473A1 - Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging - Google Patents

Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging

Info

Publication number
CA2553473A1
CA2553473A1 (application CA002553473A)
Authority
CA
Canada
Prior art keywords
image
depth
depth map
source image
edge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002553473A
Other languages
French (fr)
Inventor
Wa James Tam
Liang Zhang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Communications Research Centre Canada
Original Assignee
Communications Research Centre Canada
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Communications Research Centre Canada filed Critical Communications Research Centre Canada
Publication of CA2553473A1 publication Critical patent/CA2553473A1/en
Abandoned legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/564 Depth or shape recovery from multiple images from contours
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/261 Image signal generators with monoscopic-to-stereoscopic image conversion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/282 Image signal generators for generating image signals corresponding to three or more geometrical viewpoints, e.g. multi-view systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10012 Stereo images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20228 Disparity calculation for image-based rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)

Abstract

Depth maps are generated from a monoscopic source image and asymmetrically smoothed to a near-saturation level. Each depth map contains depth values focused on edges of local regions in the source image. Each edge is defined by a predetermined image parameter having an estimated value exceeding a predefined threshold. The depth values are based on the corresponding estimated values of the image parameter. The depth map is used to process the source image by a depth image based rendering algorithm to create at least one deviated image, which forms with the source image a set of monoscopic images. At least one stereoscopic image pair is selected from such a set for use in generating different viewpoints for multiview and stereoscopic purposes, including still and moving images.

Description

GENERATING A DEPTH MAP FROM A TWO-DIMENSIONAL SOURCE IMAGE FOR STEREOSCOPIC AND MULTIVIEW IMAGING
TECHNICAL FIELD
[01] The present invention generally relates to depth maps generated from a monoscopic source image, for use in creating deviated images with new camera viewpoints for stereoscopic and multiview displays, and in particular to asymmetrically smoothed sparse depth maps.
BACKGROUND TO THE INVENTION
[02] The viewing experience of visual displays and communication systems can be enhanced by incorporating multiview and stereoscopic (3D) information that heightens the perceived depth and the virtual presence of objects depicted in the visual scene. Given this desirable feature and with the maturation of digital video technologies, there has been a strong impetus to find efficient and commercially viable methods of creating, recording, transmitting, and displaying multiview and stereoscopic images and sequences. The fundamental problem of working with multiview and stereoscopic images is that multiple images are required, as opposed to a single stream of monoscopic images for standard displays. This means that multiple cameras are required during capture, and that storage as well as transmission requirements are greatly increased.
[03] In a technique called depth image based rendering (DIBR), images with new camera viewpoints are generated using information from an original source image and its corresponding depth map. These new images can then be used for 3D or multiview imaging devices. One example is the process disclosed in US Patent 7,015,926 by Zitnick et al. for generating a two-layer, 3D representation of a digitized image from the image and its pixel disparity map.
[04] The DIBR technique is useful for stereoscopic systems because one set of source images and their corresponding depth maps can be coded more efficiently than the two streams of natural images that are required for a stereoscopic display, thereby reducing the bandwidth required for storage and transmission. For more details on this approach, see:
C. T. Kim, M. Siegel, & J. Y. Son, "Synthesis of a high-resolution 3D stereoscopic image pair from a high-resolution monoscopic image and a low-resolution depth map," Proceedings of the SPIE: Stereoscopic Displays and Applications IX, Vol. 3295A, pp. 76-86, San Jose, CA, U.S.A., 1998; and
J. Flack, P. Harman, & S. Fox, "Low bandwidth stereoscopic image encoding and transmission," Proceedings of the SPIE: Stereoscopic Displays and Virtual Reality Systems X, Vol. 5006, pp. 206-214, Santa Clara, CA, USA, Jan. 2003.

[05] Furthermore, based on information from the depth maps, DIBR permits the creation of not only one novel image but also a set of images, as if they were captured with a camera from a range of viewpoints. This feature is particularly suited for multiview stereoscopic displays where several views are required.
[06] A major problem with conventional DIBR is the difficulty in generating the depth maps with adequate accuracy, without a need for much manual input and adjustments, or without much computational cost. An example of this is the method disclosed by Redert et al. in US Patent Application 2006/0056679 for creating a pixel dense full depth map from a 3-D scene, by using both depth values and derivatives of depth values. Another problem arises with such dense depth maps for motion picture applications, where the depth map is too dense to allow adequately fast frame-to-frame processing.
[07] There are software methods to generate depth maps from pairs of stereoscopic images, as described in:
D. Scharstein & R. Szeliski, "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms," International Journal of Computer Vision, Vol. 47(1-3), pp. 7-42, 2002; and
L. Zhang, D. Wang, & A. Vincent, "Reliability measurement of disparity estimates for intermediate view reconstruction," Proceedings of the International Conference on Image Processing (ICIP'02), Vol. 3, pp. 837-840, Rochester, NY, USA, Sept. 2002.
However, the resulting depth maps are likely to contain undesirable blocky artifacts, depth instabilities, and inaccuracies, because finding matching features in a pair of stereoscopic images is a difficult problem to solve. For example, these software methods usually assume that the cameras used to capture the stereoscopic images are parallel.
[08] To ensure reasonable accuracy of the depth maps would typically require (a) an appreciable amount of human intervention and steady input, (b) extensive computation, and/or (c) specialized hardware with restrictive image capture conditions. For example, Harman et al. describe in US Patents 7,035,451 and 7,054,478 two respective methods for producing a depth map for use in the conversion of 2D images into 3D images. These examples involve intensive human intervention to select areas within key frames and then tag them with an arbitrary depth, or to apply image pixel repositioning and depth-contouring effects.
[09] Two approaches have been attempted for extracting depth from the level of sharpness, based on "depth from focus" and "depth from defocus". In "depth from focus," depth information in the visual scene is obtained from only a single image by modeling the effect that a camera's focal parameters have on the image, as described in:
J. Ens & P. Lawrence, "An investigation of methods for determining depth from focus," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 15, pp. 97-108, 1993.
[10] In "depth from defocus," depth information is obtained based on the blur information contained in two or more images that have been captured with different camera focal or aperture settings from the same camera viewpoint, i.e., location, as described in:
Y. Xiong & S. Shafer, "Depth from focusing and defocusing," Proceedings of the International Conference of Computer Vision and Pattern Recognition, pp. 68-73, 1993.
In both cases, camera parameters are required to help convert the blur to the depth dimension.
[11] Others have attempted to generate depth maps from blur without knowledge of camera parameters, by assuming a general monotonic relationship between blur and distance and arbitrarily setting the minimum and maximum ranges of depth, as described in:
S. A. Valencia & R. M. R. Dagnino, "Synthesizing stereo 3D views from focus cues in monoscopic 2D images," Proceedings of the SPIE: Stereoscopic Displays and Virtual Reality Systems X, Vol. 5006, pp. 377-388, Santa Clara, CA, U.S.A., Jan. 2003.
However, the main problem with these attempts is that depth within object boundaries is still difficult to determine and, for the described methods, the attempts made to fill in these regions tend to be inaccurate, as well as computationally complex and intensive.
[12] Another major problem with DIBR concerns the rendering of newly exposed regions that occur at the edges of objects where the background was previously hidden from view; no information is available in depth maps on how to properly fill in these exposed regions, or "holes," in the rendered images. Although not perfect, a common method is to fill these regions with the weighted average of luminance and chrominance values of neighboring pixels. However, this solution often leads to visible distortions or annoying artifacts at edges of objects. In general, there is a consensus in prior art against smoothing to reduce such distortions, especially smoothing across object boundaries with sharp depth transitions, as this has been presumed to reduce the depth between the object and its background. See for example:
J. Yin & J. R. Cooperstock, "Improving depth maps by nonlinear diffusion," Short Communication Papers of the 12th International Conference on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic, Vol. 12, pp. 305-311, Feb. 2-6, 2004.
[13] Contrary to this consensus, we have provided empirical evidence of an ameliorative effect of a rather simple 'uniform' smoothing of depth maps, including smoothing across object boundaries, on image quality, as given in our report:
G. Alain, "Stereo vision, the illusion of depth," Co-op term report, Apr. 2003.

This was subsequently confirmed in a published suggestion, in the following two publications by Fehn, to use 2D uniform Gaussian smoothing of depth maps at object boundaries:
C. Fehn, "A 3D-TV approach using depth-image-based rendering (DIBR)," Proceedings of Visualization, Imaging, and Image Processing (VIIP'03), pp. 482-487, Benalmadena, Spain, Sept. 2003; and
C. Fehn, "Depth-image-based rendering (DIBR), compression and transmission for a new approach on 3D-TV," Proceedings of SPIE Stereoscopic Displays and Virtual Reality Systems XI, Vol. 5291, pp. 93-104, CA, U.S.A., Jan. 2004.
More recently, however, we found that uniform smoothing of depth maps causes undesirable geometrical distortion in the newly exposed regions, as further described below.
[14] Another limitation of conventional methods in DIBR, in general, is likely to occur when they are applied to motion pictures, entailing a sequence of image frames. Any sharp frame-to-frame transitions in depth within a conventional depth map often result in misalignment of a given edge depth between frames, thereby producing jerkiness when the frames are viewed as a video sequence.
[15] Based on the above-described shortcomings in prior art, there is clearly a need for an affordably simple solution for deriving sparse depth maps from a single 2D source image, without requiring knowledge of camera parameters, to meet the purpose of creating with DIBR higher quality virtual 3D images having negligible distortions and annoying artifacts, and minimized frame-to-frame jerkiness in motion pictures, particularly at object boundaries.
SUMMARY OF THE INVENTION
[16] Accordingly, the present invention relates to a method for generating a smoothed sparse depth map from a monoscopic source image, for use in creating at least one stereoscopic image pair with a relatively higher quality.
[17] In a first aspect, the present invention provides a method for generating a depth map from a monoscopic source image, comprising the steps of:
(a) identifying a subset of the array of pixels representing an edge of at least one local region of the source image, the edge being defined by a predetermined image parameter having an estimated value exceeding a predefined threshold;
(b) assigning to each pixel within said subset, a depth value based on the corresponding estimated value of the image parameter;
(c) smoothing the depth map to a near-saturation level, so selected as to minimize dis-occluded regions around each edge;
(d) using a depth image based rendering (DIBR) algorithm to create a plurality of deviated images by processing the source image based on the depth map; and
(e) selecting from the source image and the plurality of deviated images more than one stereoscopic image pair, so as to give an impression of being captured from different camera positions.
[18] Optionally, step (a) is performed by the steps of:
- determining from a finite set of scales a minimum reliable scale; and
- estimating gradient magnitude for each pixel of the source image by using the minimum reliable scale;
and step (b) is performed by the steps of:
- recording the estimated gradient magnitude as the depth value;
- partitioning the total area of the depth map into a plurality of windows of a predetermined size; and
- filling the depth map in regions with missing depth values, by inserting maximum depth values within each window.
[19] Alternatively, step (a) is performed by applying a Sobel operator to the source image to detect the location of the edge, the operator having a suitably selected input threshold value, such as selected from the range of 0.04 to 0.10, to obtain a binary depth value distribution for use by step (b), the input threshold being selected from an empirically pre-determined range so as to make the depth map lie between being too barren and too finely textured;
and step (b) is performed by the steps of:
- amplifying the binary depth value distribution by a predetermined factor; and
- expanding the spatial location of each depth value by a predetermined number of pixels, to increase the width of the identified subset of the array of pixels representing the edge.
[20] Preferably, step (c) uses a 2D Gaussian filter defined by a pair of parameter values for window size and standard deviation, so chosen for both the horizontal and vertical orientations as to determine a type of smoothing selected from the group consisting of:
i) uniform smoothing, wherein each of the parameter values is similar in the horizontal and vertical orientations;
ii) asymmetrical smoothing, wherein each of the parameter values is substantially larger in the vertical than in the horizontal orientation; and
iii) adaptive smoothing, wherein each of the parameter values follows a respective predefined function of the depth values.

[21] The DIBR algorithm typically performs the steps of:
- selecting a value for zero-parallax setting (ZPS) between the nearest and farthest clipping planes of the depth map, so selected as to meet viewing preferences;
- providing a depth range value and a corresponding focal length for the 3D image; and
- filling each residual vacant spot, by using an average of all neighboring pixels.
[22] In another aspect, the present invention provides a method for generating a smoothed depth map for a monoscopic source image, comprising the steps of:
(a) deriving a depth map from the monoscopic source image; and
(b) smoothing the depth map to a near-saturation level around an area corresponding to at least one local region of the source image defined by a change in depth exceeding a predefined threshold, so as to minimize dis-occluded regions around each edge, wherein the range and strength of smoothing are substantially higher in the vertical than the horizontal orientation.
[23] In a further aspect, the present invention provides a system for generating a stereoscopic view from a monoscopic source image, the system comprising a tandem chain of:
- an edge analyzer for receiving the source image and deriving a depth map therefrom, the depth map containing depth values of at least one edge of a local region of the source image, the edge being defined by a predetermined image parameter having an estimated value exceeding a predefined threshold, wherein each depth value is based on the corresponding estimated value of the image parameter;
- an asymmetric smoother, for smoothing the depth map to a near-saturation level;
- a DIBR processor for processing the source image based on the depth map to render at least one deviated image to form with the source image at least one stereoscopic image pair; and
- a 3D display for generating at least one stereoscopic view from the at least one stereoscopic image pair.
[24] In yet another aspect, the present invention provides a system for generating a 3D motion picture from a sequence of monoscopic source images, the system comprising a tandem chain of:
- an edge analyzer for receiving each source image and deriving a corresponding depth map therefrom;
- a DIBR processor for processing each source image with the corresponding depth map to render at least one corresponding deviated image forming with the source image at least one stereoscopic image pair; and
- a 3D display device for sequentially generating at least one stereoscopic view from each rendered stereoscopic image pair.
[25] A major advantage of the system and methods provided by this invention is that they address both issues of depth map generation and depth-image-based rendering (DIBR) without annoying artifacts at object boundaries. In this respect, the invention provides methods for generating a novel type of depth map containing sparse information concentrated at edges and boundaries of objects within the source image, to serve the purpose of savings in bandwidth requirements for either storage or transmission. This is in contrast with conventional depth maps containing dense information about the absolute or relative depth of objects of a given image, with no particular emphasis on edges and boundaries of objects.
BRIEF DESCRIPTION OF THE DRAWINGS
[26] The invention will be described in greater detail with reference to the accompanying drawings, which represent exemplary embodiments thereof, in which same reference numerals designate similar parts throughout the figures thereof, wherein:
[27] Figure 1 illustrates in a flow chart a method for generating a sparse depth map from a 2D source image and using the generated depth map in creating a deviated image to form with the source image a stereoscopic image pair, in accordance with an embodiment of the present invention.
[28] Figure 2 shows the geometry of a commonly used configuration of three cameras for generating virtual stereoscopic images from one center image associated with one depth map for 3D TV.
[29] Figure 3 illustrates in a flow chart a method for creating a deviated image using a sparse depth map derived from a raw depth map, in accordance with another embodiment of the present invention.
[30] Figure 4 illustrates in a block diagram a system for generating stereoscopic views on a 3D display device, based on a stream of 2D source images, in accordance with yet another embodiment of the present invention.
DETAILED DESCRIPTION
[31] Reference herein to any embodiment means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase "in one embodiment" in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments.
[32] In the context of the present invention, the following general definitions apply.
A source image is a picture, typically digital and two-dimensional planar, containing an image of a scene complete with visual characteristics and information that are observed with one eye, such as luminance intensity, shape, colour, texture, etc.
A depth map is a two-dimensional array of pixels (or blocks of pixels), each being assigned a depth value indicating the relative or absolute depth of the part of objects in the scene, depicted by the pixel (or block), from an image capturing device.
[33] With reference to Figure 1, the present invention addresses prior art limitations by providing a method 10 for generating a smoothed depth map 2s from a monoscopic (2D) source image 1, to be used in processing the source image 1 to create at least one deviated image 3 with a different camera viewpoint from the source image 1.
[34] The method 10 includes an edge analysis process 11 for generating a sparse depth map 2p, wherein the array of pixels is concentrated at edges and object boundaries of local regions, while disregarding all other regions where no edge is detected. The depth value assigned to each pixel in such an array indicates the depth of the corresponding edge. The sparse depth map 2p is treated by a smoothing process 12 to smooth any sharp changes in depth at borders and object boundaries to near-saturation levels, thereby obtaining a smoothed depth map 2s. The source image 1 is then combined with the smoothed depth map 2s by a depth image based rendering (DIBR) algorithm 13 to create the deviated image 3. The DIBR algorithm 13 generates at least one deviated image 3 based on the source image 1 and the smoothed depth map 2s, such that the viewpoint of the deviated image 3 is different from the source image 1. The deviated image 3 together with the source image 1 forms a stereoscopic image pair 4, for use in stereoscopic imaging.
[35] In embodiments where more than one deviated image 3 is created by the DIBR algorithm 13, the source image 1 and the deviated images 3 together form a set of monoscopic images, such that more than one stereoscopic image pair 4 is selected from such a set. The selected stereoscopic image pairs 4 are then used in generating different viewpoints with varying degrees of deviation in camera viewpoints from the source image 1 for multiview and stereoscopic purposes, including still and moving images. Of course, the farther the camera viewpoint from the original, the more rendering artifacts there will be.
[36] It is to be noted that, within the context of this embodiment, there are two types of edges of the local region, defined by two different image parameters as follows:
(a) the image parameter being a transition in depth; and
(b) the image parameter being simply a transition in luminance/contrast/texture/color, but without an actual transition in depth.
Typically, the sparse depth map 2p is based on type (a), but the present embodiment is applicable to both types. According to our experimental evidence so far, there appears to be no loss in depth/image quality as a result of treating the two types in a similar way.

[37] It is a well known observation that the human visual system attempts to arrive at a final perception of depth even when a given depth map used in DIBR is not complete. This is done by combining all available information in terms of multiple monoscopic cues to depth and surface interpolation in natural images, to fill in regions between boundaries or within sparse disparate entities. The present invention takes advantage of such observation by requiring only the original source image for generating the depth map 2p.
[38] As well, there is evidence that the human visual system is able to carry out surface and boundary completion, presumably by integrating horizontal disparity information with other 2D depth cues. In line with this, we have experimentally found that a depth map containing depth values at object boundaries does not necessarily have to be as veridical as commonly practiced in prior art. This means that a mere localization of object boundaries (e.g., using a non-zero value at each of the pixel locations that make up the edge/boundary, and a value of zero elsewhere) will be sufficient for creating an appreciable stereoscopic depth quality in a 3D view generated from the stereoscopic image pair 4, as contrasted to the 2D source image 1.
[39] Another departure of the present invention from prior art is the use of the near-saturation smoothing process 12. Unlike what has been previously taught, we empirically observed that such a smoothing process 12 led to improvement in quality of rendered stereoscopic images over those rendered by unsmoothed depth maps. We observed that such smoothing reduced the effects of blocky artifacts and other distortions that are otherwise found especially in raw (unprocessed) depth maps that have been generated from block-based methods. Importantly, we found that smoothing of depth maps before DIBR resulted in reduced impairments and/or rendering artifacts in dis-occluded regions at object boundaries of the rendered image. This, in effect, improves the quality of the stereoscopic images created either from the source image 1 plus the rendered deviated image 3 forming the stereoscopic image pair 4, or from the rendered deviated images 3 of both the left-eye and the right-eye view that form the stereoscopic image pair 4.
[40] More particularly, the smoothed depth maps 2s exhibited a reduction, in the images rendered by the DIBR algorithm 13, in:
(a) the number and size of newly exposed (dis-occlusion) regions where potential texture artifacts are created by the hole-filling interpolation process of image warping through a DIBR algorithm 13; and
(b) geometrical distortion in the newly exposed regions caused by uniform smoothing of the sparse depth map 2p.
[41] Furthermore, we found the smoothing process 12 to be effective for improving the quality of the deviated image 3 irrespective of which process is used to generate a depth map, hence making the smoothing process 12 applicable to various types of depth maps other than the sparse depth map 2p generated herewith. Our anecdotal evidence also indicates that the smoothing process can help reduce the perception of an undesirable cardboard effect (which is indicated when objects look like they are at different depths but the objects look flat themselves), because object boundaries are smoothed.
[42] For a further description of our experimental findings and additional details relevant to the present invention, see the following articles co-authored by the inventors, which are incorporated herein by reference:
W. J. Tam, G. Alain, L. Zhang, T. Martin, & R. Renaud, "Smoothing depth maps for improved stereoscopic image quality," Proceedings of Three-Dimensional TV, Video and Display III (ITCOM'04), Vol. 5599, pp. 162-172, Philadelphia, USA, Oct. 25-28, 2004;
L. Zhang, J. Tam, & D. Wang, "Stereoscopic image generation based on depth images," Proceedings of the IEEE Conference on Image Processing, pp. 2993-2996, Singapore, Oct. 2004;
W. J. Tam & L. Zhang, "Non-uniform smoothing of depth maps before image-based rendering," Proceedings of Three-Dimensional TV, Video and Display III (ITCOM'04), Vol. 5599, pp. 173-183, Philadelphia, USA, Oct. 25-28, 2004;
L. Zhang & W. J. Tam, "Stereoscopic image generation based on depth images for 3D TV," IEEE Transactions on Broadcasting, 51, pp. 191-199, 2005;
W. J. Tam & L. Zhang, "3D-TV content generation: 2D-to-3D conversion," to be published in the Proceedings of the International Conference on Multimedia & Expo (ICME 2006), 9-12 July 2006, Toronto; and
W. J. Tam, F. Speranza, L. Zhang, R. Renaud, J. Chan, & C. Vazquez, "Depth image based rendering for multiview stereoscopic displays: Role of information at object boundaries," Proceedings of Three-Dimensional TV, Video and Display IV (ITCOM'05), Vol. 6016, paper No. 601609, Boston, Massachusetts, USA, Oct. 24-26, 2005.
[43] Several alternative approaches, as described below, are available for implementing the edge analysis process 11.
[44] I. One approach for the edge analysis process 11 is based on estimating levels of blur (the opposite of sharpness) at local regions in the monoscopic source image 1, and uses the principle that edges and lines are considered blurred if they are thick, and sharp if they are thin. This approach assumes that, for a given camera focal length, the distance of an object from the camera is directly related to the level of blur (or sharpness) of the picture of that object in the source image 1. In other words, an object placed at a specific position that produces a sharp picture in the image plane will produce a blurred picture if the same object is located farther away from that specific position. Accordingly, the level of blur can be estimated by applying an algorithm that determines the best local scale (window size) to use for the detection of edges in the source image 1. Such an algorithm is performed in two steps as follows.
[45] In a first step, the minimum reliable scale σ1 to estimate gradient magnitude (such as the gradual decrease or increase in luminance at blurred edges) for each pixel of the source image 1 is determined from a finite set of scales, so as to reduce the number of computations. Once σ1 is found, the estimated gradient magnitude is recorded as the depth value in the depth map 2p. More specifically, the first step includes the following operations:
(1a) Constructing Gaussian first derivative basis filters for a set of the minimum reliable scales σ1 = [16, 8, 4, 2, 1, 0.5];
(1b) Processing the image pixels in the source image 1 by convolution, which involves systematically processing one of the local regions and then shifting to the next local region centered around the next pixel (or block of pixels), using the first scale in the σ1 set, such that the convolution magnitude is set as the depth value when it is larger than a critical value empirically determined a priori based on a sample set of images; otherwise, the magnitude is set to 0 and step (1b) is reiterated with the finer σ1 scale; and
(1c) Adjusting the range of depth values to lie within a given range such as [0-255].
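For illustration only, operations (1a)-(1c) can be sketched in Python as below. This is a minimal reading of the approach, not the patented implementation: the source image is assumed to be a grayscale NumPy array, and since the empirically determined critical value is not disclosed in the text, the constant CRITICAL is a placeholder.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

SCALES = [16, 8, 4, 2, 1, 0.5]   # candidate minimum reliable scales sigma1 (1a)
CRITICAL = 8.0                   # placeholder; the a-priori value is not given

def sparse_depth_step1(image: np.ndarray) -> np.ndarray:
    """Record the gradient magnitude at the first reliable (coarsest) scale."""
    img = image.astype(np.float64)
    depth = np.zeros_like(img)
    remaining = np.ones(img.shape, dtype=bool)       # pixels not yet assigned
    for sigma in SCALES:                             # (1b): coarse to fine
        gx = gaussian_filter(img, sigma, order=(0, 1))   # Gaussian first
        gy = gaussian_filter(img, sigma, order=(1, 0))   # derivative filters
        magnitude = np.hypot(gx, gy)
        reliable = remaining & (magnitude > CRITICAL)
        depth[reliable] = magnitude[reliable]        # convolution magnitude as depth
        remaining &= ~reliable                       # others retried at a finer scale
    if depth.max() > 0:                              # (1c): rescale to [0, 255]
        depth *= 255.0 / depth.max()
    return depth
```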
[46] In a second step, the sparse depth map 2p obtained from the first step (which is likely to be relatively thin) is expanded to neighboring local regions with missing depth values, by partitioning the total area of the depth map 2p into a number of windows of M×N pixels and calculating the maximum depth value within each window. A typical window size has M=N=9 pixels. The pixels that have missing depth values are assigned the maximum depth value. More specifically, the second step includes the following operations for each window:
(2a) Retrieving the depth values;
(2b) Determining the maximum depth value; and
(2c) Scanning each pixel, so as to replace the depth value with the maximum depth value when it is 0 for a given pixel.
The second step is repeated for the next adjacent window until the entire area of the source image 1 is covered.
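A minimal sketch of these window operations, using the M = N = 9 window size suggested above, could read:

```python
import numpy as np

def sparse_depth_step2(depth: np.ndarray, m: int = 9, n: int = 9) -> np.ndarray:
    """Expand a thin depth map by filling zero (missing) pixels per window."""
    filled = depth.copy()
    rows, cols = filled.shape
    for r in range(0, rows, m):                 # adjacent MxN windows
        for c in range(0, cols, n):
            window = filled[r:r + m, c:c + n]   # (2a) retrieve the depth values
            maximum = window.max()              # (2b) window maximum
            window[window == 0] = maximum       # (2c) assign it to missing pixels
    return filled
```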
[47] II. Alternatively, the depth map 2p is generated from the source image 1 by estimating the location of the edges and object boundaries using edge/line detecting techniques, such as the Sobel operator. Applying the Sobel operator to the source image 1 results in a detection of the location of boundaries and edges that depends largely on what input threshold is selected for the Sobel operator; the larger the threshold, the more suppression of spurious lines and edges, and vice versa. A "best" threshold is therefore selected such that the depth map 2p will lie between being too barren and too finely textured. For example, using a threshold in the range of 0.04 to 0.10 with the Sobel operator is found to result in a binary value of 1 where a line was detected, and 0 elsewhere. The resulting binary distribution, showing object outlines, is then amplified by 255 (2^8 - 1) and expanded by n pixels (typically n=4) in the horizontal orientation, to increase the width of the detected edges and boundaries.
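The following sketch illustrates this Sobel-based variant. The text does not state how the 0.04-0.10 threshold is normalized, so it is assumed here to apply to the gradient magnitude of an image scaled to [0, 1]; the horizontal dilation width is likewise one plausible reading of "expanded by n pixels".

```python
import numpy as np
from scipy.ndimage import sobel, binary_dilation

def sobel_sparse_depth(image: np.ndarray, threshold: float = 0.07,
                       n: int = 4) -> np.ndarray:
    img = image.astype(np.float64) / 255.0      # assumes an 8-bit source image
    gx = sobel(img, axis=1) / 8.0               # assumed normalization of the
    gy = sobel(img, axis=0) / 8.0               # Sobel responses
    edges = np.hypot(gx, gy) > threshold        # binary 1 where a line is found
    # expand each detected edge by n pixels in the horizontal orientation
    edges = binary_dilation(edges, structure=np.ones((1, 2 * n + 1)))
    return edges.astype(np.uint8) * 255         # amplify by 255 (2^8 - 1)
```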
[48] III. A further alternative to generating the depth map 2p is based on estimating the luminance intensity distribution at each local region, by determining the standard deviation of luminance values within the local regions, as further detailed in the following article co-authored by the inventors, which is incorporated herein by reference:
W. J. Tam, G. Alain, L. Zhang, T. Martin, & R. Renaud, "Smoothing depth maps for improved stereoscopic image quality," Proceedings of Three-Dimensional TV, Video and Display III (ITCOM'04), Vol. 5599, pp. 162-172, Philadelphia, USA, Oct. 25-28, 2004.
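For illustration, such a local standard deviation can be computed with running sums (Var = E[x^2] - E[x]^2); the window size below is an assumption, as it is specified only in the cited article.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def std_depth(image: np.ndarray, size: int = 9) -> np.ndarray:
    """Depth from the standard deviation of luminance in a sliding window."""
    img = image.astype(np.float64)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img * img, size)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    return std * (255.0 / std.max()) if std.max() > 0 else std  # scale to [0, 255]
```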
[49] Subsequent to the edge analysis process 11, the depth map 2p is then treated by the smoothing process 12, using a 2D Gaussian filter g(x, σ) defined by

$$g(x,\sigma)=\frac{1}{\sqrt{2\pi}\,\sigma}\,\exp\!\left(-\frac{x^{2}}{2\sigma^{2}}\right)\quad\text{for}\ -w\le x\le w,\qquad(1)$$

where w is the filter's width (window size), which determines the range (spatial extent) of depth smoothing at the local region, and σ is the standard deviation, which determines the strength of depth smoothing. Let s(x,y) be a depth value at pixel (x,y); then a smoothed depth value s̃(x,y) is obtained from the Gaussian filter as

$$\tilde{s}(x,y)=\frac{\displaystyle\sum_{v=-w}^{w}\sum_{u=-w}^{w}s(x-u,\,y-v)\,g(u,\sigma)\,g(v,\sigma)}{\displaystyle\sum_{v=-w}^{w}\sum_{u=-w}^{w}g(u,\sigma)\,g(v,\sigma)}.\qquad(2)$$

[50] As reported in the above-cited articles co-authored by the inventors, we found that the newly exposed portion of the total image area for a sample test image progressively decreased with depth smoothing strength, and approached a minimum value when the depth smoothing strength reached a near-saturation level. For near-saturation smoothing, exemplary paired filter parameter values for w and σ are given in Table 1.
[51] Different parameter values are found to have different impacts on the image quality of the deviated image 3 created from the source image 1. Therefore, it is possible to manipulate the extent and type of smoothing by changing the parameter values for both horizontal and vertical orientations, as follows.

i) Uniform smoothing, wherein each of the parameter values is similar in the horizontal and vertical orientations.
ii) Asymmetrical smoothing, wherein each of the parameter values is substantially different between the vertical and horizontal orientations. It is to be noted that filtering in the horizontal and vertical orientations is performed by two independent processes. We discovered that larger parameter values in the vertical than in the horizontal orientation provide a better rendered 3D image quality, by getting rid of geometric distortions that arise from rendering of object boundaries, especially where there are vertical lines or edges. This is by virtue of the fact that the human visual system is more attuned to horizontal disparities than vertical disparities (i.e., the two eyes are positioned on the horizontal plane). Table 1 gives exemplary filter parameter values which are three times larger in the vertical than the horizontal orientation.
iii) Adaptive smoothing, wherein each of the parameter values follows a respective predefined function of the depth values at the x, y locations in the depth map 2p. The minimum and maximum values of σ represent the smallest and the largest values used in the smoothing process that are associated with the grey-scale intensity values of the depth map 2p, with linear interpolation for the grey-scale values falling between the two extremes. As an example of a typical embodiment, the standard deviation σ in the vertical orientation is set to be around three times that in the horizontal orientation, and the filter window size w is set to be around 3σ, in order to improve image quality while having a minimal impact on depth quality.
Examples of the parameter values adopted for the above three smoothing methods are summarized in Table 1.
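The smoothing of equations (1) and (2) can be sketched as two independent 1D passes with per-orientation parameters, which covers both the uniform and the asymmetrical cases. Table 1 is not reproduced in this text, so the defaults below merely follow the 3:1 vertical-to-horizontal ratio and w ≈ 3σ described above, and should be treated as assumptions.

```python
import numpy as np
from scipy.ndimage import convolve1d

def gaussian_kernel(w: int, sigma: float) -> np.ndarray:
    """Truncated Gaussian of equation (1), normalized as in equation (2)."""
    x = np.arange(-w, w + 1, dtype=np.float64)
    g = np.exp(-x * x / (2.0 * sigma * sigma))
    return g / g.sum()

def smooth_depth_map(depth: np.ndarray, sigma_h: float = 20.0,
                     sigma_v: float = 60.0) -> np.ndarray:
    """Horizontal pass, then a substantially stronger vertical pass."""
    kh = gaussian_kernel(int(3 * sigma_h), sigma_h)
    kv = gaussian_kernel(int(3 * sigma_v), sigma_v)
    out = convolve1d(depth.astype(np.float64), kh, axis=1, mode='nearest')
    return convolve1d(out, kv, axis=0, mode='nearest')
```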
[52] Following the smoothing process 12, the resulting smoothed depth map 2s is used by the DIBR algorithm 13 to create the deviated image 3. For simplicity, we only consider a commonly used camera configuration for generating virtual stereoscopic images from one center image associated with one depth map for 3D television. In this case, the vertical coordinate of the projection of any 3D point on each image plane of the three cameras is the same. With reference to Figure 2, let c_c be the viewpoint of the original center image, c_l and c_r be the respective viewpoints of the virtual left-eye and right-eye images to be generated, and t_x be the distance between these two virtual cameras. Under such a camera configuration, one point p with a depth Z is projected onto the image planes of the three cameras at pixels (x_l, y), (x_c, y), and (x_r, y), respectively. From the geometry shown in Figure 2, we have

$$x_{l}=x_{c}+\frac{t_{x}}{2}\,\frac{f}{Z},\qquad x_{r}=x_{c}-\frac{t_{x}}{2}\,\frac{f}{Z},\qquad(3)$$

where f is the camera focal length, and information about x_c and f/Z is given in the center image and the associated depth map, respectively. Therefore, with formulation (3) for 3D image warping, the virtual left-eye and right-eye images can be generated from the source image 1 and the corresponding depth map 2p by providing the value of t_x.
[53] Accordingly, the DIBR algorithm 13 consists of three steps:
(a) setting the convergence distance of a virtual camera configuration (the so-called zero-parallax setting, or ZPS), as further detailed below;
(b) 3D image "warping" by providing the depth range value in the deviated image 3 and the corresponding focal length; and
(c) filling any residual vacant spots as necessary, by using the average of all neighboring pixels.
The ZPS is chosen to be between the nearest clipping plane and the farthest clipping plane of the depth map, based on viewing preference of depth range in front of a display screen. As an example, the depth map 2p is represented as an 8-bit map, the nearest clipping plane is set to 255, and the farthest clipping plane is set to zero. Thus, the ZPS is equal to 127.5. This ZPS value is then subtracted from each of the grey intensity values in the depth map 2p, which are normalized to lie between 0 and 255. After that, the depth values in the depth map 2p are further normalized to be within the interval [-0.5, 0.5], as required by step (b) above.
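These three steps are sketched below. The per-pixel shift (normalized depth multiplied by an assumed maximum disparity in pixels) stands in for the (t_x/2)(f/Z) term of equation (3), since converting f/Z into pixels requires the focal length and depth range; max_disparity is a hypothetical parameter, occlusion ordering is ignored for brevity, and a grayscale image the same size as the depth map is assumed.

```python
import numpy as np

def dibr_render(image: np.ndarray, depth8: np.ndarray,
                eye: int, max_disparity: float = 10.0) -> np.ndarray:
    # (a) ZPS of 127.5 for an 8-bit map, then normalize to [-0.5, 0.5]
    d = (depth8.astype(np.float64) - 127.5) / 255.0
    rows, cols = depth8.shape
    out = np.full((rows, cols), -1.0)                  # -1 marks vacant spots
    # (b) warping per equation (3); eye = +1 for left, -1 for right
    shift = np.rint(eye * max_disparity * d).astype(int)
    for y in range(rows):
        for x in range(cols):
            xp = x + shift[y, x]
            if 0 <= xp < cols:
                out[y, xp] = image[y, x]
    # (c) fill each residual vacant spot with the average of its neighbours
    for y, x in zip(*np.nonzero(out < 0)):
        nb = out[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
        valid = nb[nb >= 0]
        out[y, x] = valid.mean() if valid.size else 0.0
    return out.astype(image.dtype)
```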
[54] Another embodiment of the present invention is illustrated by Figure 3, which shows another method 30 for creating deviated images 3 using the sparse depth map 2p, which is generated from a raw depth map 2r. The method 30 shown in Figure 3 performs similar functions to those performed in the method 10 shown in Figure 1 and described above, with the exception that the raw depth map 2r is used as the source for generating the sparse depth map 2p, instead of the source image 1. It is also possible to simplify the embodiments shown in Figures 1 and 3 without deviating from the spirit of the present invention, by removing the smoothing process 12.
[55] Yet another embodiment of the present invention is illustrated by Figure 4, which shows a system 20 for generating a stereoscopic view of the source image 1 on a 3D display device 24. The source image 1 is received from a transmission medium and decoded by a data receiver 25, and then fed to a tandem chain of an edge analyzer 21, followed by a depth map smoother 22, and then a DIBR processor 23. The received source image 1 is also fed to the DIBR processor 23. The outcome of the DIBR processor 23 is then provided to the 3D display device 24 for providing the stereoscopic view. The edge analyzer 21, the depth map smoother 22, and the DIBR processor 23 respectively perform similar functions to those described above for the edge analysis process 11, the smoothing process 12, and the DIBR algorithm 13, all shown in Figure 1.
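Composing the earlier sketches gives a rough picture of this tandem chain; the function names refer to the illustrative snippets above, not to any actual implementation of system 20.

```python
import numpy as np

def stereo_pair_from_source(source: np.ndarray):
    """Edge analyzer 21 -> depth map smoother 22 -> DIBR processor 23."""
    depth = sparse_depth_step2(sparse_depth_step1(source))   # edge analysis
    depth8 = np.clip(smooth_depth_map(depth), 0, 255).astype(np.uint8)
    left = dibr_render(source, depth8, eye=+1)               # deviated images
    right = dibr_render(source, depth8, eye=-1)
    return left, right                                       # stereoscopic pair
```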

[56] The embodiment of Figure 4 is suitable for various applications showing still or moving images, such as:
(a) multiview autostereoscopic displays;
(b) 3D videoconferencing;
(c) 3D television; and
(d) sequences of image frames for motion pictures.
[57] In multiview displays, multiple views and stereoscopic pairs are generated from received 2D television images. Multiview images are rendered images that give an impression that they were captured from camera positions different from the original camera position.
[58] For sequences of images, the near-saturation smoothing performed by the depth map smoother 22 helps minimize any perceived jerkiness that would otherwise arise between frames from the DIBR processor 23 when not preceded by edge-smoothing. This is because such depth map smoothing results in a spreading of the depth (as contrasted to a sharp change in depth), such that the edges are not as precisely localized depth-wise.
[59] The above-described embodiments are intended to be examples of the present invention. Numerous variations, modifications, and adaptations may be made to the particular embodiments by those of skill in the art, without departing from the spirit and scope of the invention, which are defined solely by the claims appended hereto.

Claims

WE CLAIM:
1. A method for generating a depth map containing an array of pixels, from a monoscopic source image, comprising the steps of:
(a) identifying a subset of the array of pixels representing an edge of at least one local region of the source image, the edge being defined by a predetermined image parameter having an estimated value exceeding a predefined threshold; and (b) assigning to each pixel within said subset, a depth value based on the corresponding estimated value of the image parameter.
2. A method for creating at least one deviated image from a monoscopic source image, to form with the source image at least one stereoscopic image pair, the method comprising the steps of:
(a) deriving a sparse depth map containing an array of pixels, from a raw depth map corresponding to the source image, by - identifying a subset of the array of pixels representing an edge of at least one local region of the raw depth map, the edge being defined by a predetermined image parameter having an estimated value exceeding a predefined threshold; and - assigning to each pixel within said subset, a depth value based on the corresponding estimated value of the image parameter; and (b) applying a depth image based rendering (DIBR) algorithm to the depth map to obtain the at least one deviated image.
3. The method of any one of claims 1 and 2, further comprising the step of using a depth image based rendering (DIBR) algorithm to create at least one deviated image by processing the source image based on the depth map, such that the source image and the at least one deviated image form at least one stereoscopic image pair.
4. The method of claim 3, wherein the DIBR algorithm performs the steps of:
- selecting a value for zero-parallax setting between nearest and farthest clipping planes of the depth map, so selected as to meet viewing preferences;

- providing a depth range value and a corresponding focal length for each stereoscopic image pair; and
- filling each residual vacant spot, by using an average of all neighboring pixels.
5. The method of any one of claims 3 and 4, further comprising the steps of:
- using a depth image based rendering (DIBR) algorithm to create a plurality of deviated images by processing the source image based on the depth map; and
- selecting from the source image and the plurality of deviated images more than one stereoscopic image pair, so as to give an impression of being captured from different camera positions.
6. The method of any one of claims 1 and 2, further comprising the step of (c) smoothing the depth map to a near-saturation level, so as to minimize dis-occluded regions around each edge.
7. A method for generating a smoothed depth map from a monoscopic source image, comprising the steps of:
(a) deriving a depth map from the monoscopic source image; and (b) smoothing the depth map to a near-saturation level around an area corresponding to at least one local region of the source image defined by a change in depth exceeding a predefined threshold, so as to minimize dis-occlusions around each said area, wherein range and strength of smoothing are substantially higher in the vertical than the horizontal orientation.
8. The method of any one of claims 6 and 7 further comprising the step of using a depth image based rendering (DIBR) algorithm to create at least one deviated image by processing the source image based on the smoothed depth map, such that the source image and the at least one deviated image form at least one stereoscopic image pair.
9. The method of any one of claims 6-8, wherein step (c) uses a 2D Gaussian filter defined by a pair of parameter values for window size and standard deviation so chosen for both horizontal and vertical orientations as to determine a type of smoothing selected from the group consisting of:

iv) uniform smoothing, wherein each of the parameter values is similar in the horizontal and vertical orientations;
v) asymmetrical smoothing, wherein each of the parameter values is substantially larger in the vertical than in the horizontal orientation; and vi) adaptive smoothing, wherein each of the parameter values follows a respective predefined function of the depth value.
10. The method of any one of claims 1-9, wherein the image parameter is a transition in depth.
11. The method of any one of claims 1-9, wherein the image parameter is a transition in an image property selected from the group consisting of luminance, contrast, texture, and color.
12. The method of any one of claims 1-11, wherein step (a) is performed by the steps of:
- determining from a finite set of scales a minimum reliable scale; and - estimating gradient magnitude for each pixel of the source image by using the minimum reliable scale; and wherein step (b) is performed by the steps of:
- recording the estimated gradient magnitude as the depth value;
- partitioning the total area of the depth map into a plurality of windows of a predetermined size; and - filling the depth map in regions with missing depth values, by inserting maximum depth values within each window.
13. The method of any one of claims 1-11, wherein step (a) is performed by applying an operator to the source image to detect location of the edge, the operator having a suitably selected input threshold value to obtain a binary depth value distribution for use by step (b), the input threshold selected from an empirically pre-determined range so as to make the depth map lie between being too barren and too finely textured; and wherein step (b) is performed by the steps of:
- amplifying the binary depth value distribution by a predetermined factor;
and - expanding spatial location of each depth value by a predetermined number of pixels, to increase width of the identified subset of the array of pixels representing the edge.
14. The method of claim 13, wherein the operator is a Sobel operator and the input threshold range is 0.04 to 0.10.
15. A system for generating a stereoscopic view from a monoscopic source image, the system comprising a tandem chain of:
- an edge analyzer for receiving the source image and deriving a depth map therefrom, the depth map containing depth values of at least one edge of a local region of the source image, the edge being defined by a predetermined image parameter having an estimated value exceeding a predefined threshold, wherein each depth value is based on the corresponding estimated value of the image parameter;
- a DIBR processor for processing the source image based on the depth map to render at least one deviated image to form with the source image at least one stereoscopic image pair; and - a 3D display for generating at least one stereoscopic view from the at least one stereoscopic image pair.
16. A system for generating 3D motion pictures from a sequence of monoscopic source images, the system comprising a tandem chain of:
an edge analyzer for receiving each source image and deriving a corresponding depth map therefrom;
- a DIBR processor for combining each source image with the corresponding depth map to render at least one corresponding deviated image forming with the source image at least one stereoscopic image pair; and - a 3D display device for sequentially generating at least one stereoscopic view from each stereoscopic image pair.
17. The system of any one of claims 15 and 16, further comprising an asymmetric smoother positioned between the edge analyzer and the DIBR processor, for smoothing the depth map to a near-saturation level.

18. The system of any one of claims 15 and 16, wherein the depth map contains an array of depth values; and wherein the edge analyzer comprises means for identifying an edge of at least one local region of the source image, and for assigning each depth value, based on an estimated value of a predetermined image parameter defining the edge.
19. A depth map derived from a monoscopic source image, containing depth values of at least one edge of a local region of the source image, the edge being defined by a predetermined image parameter having an estimated value exceeding a predefined threshold, wherein each depth value is based on the corresponding estimated value of the image parameter.
20. The depth map of claim 19 smoothed to a near-saturation level to minimize dis-occlusions around each edge, wherein range and strength of smoothing are substantially higher in the vertical than the horizontal orientation.
CA002553473A 2005-07-26 2006-07-25 Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging Abandoned CA2553473A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US70227605P 2005-07-26 2005-07-26
US60/702,276 2005-07-26

Publications (1)

Publication Number Publication Date
CA2553473A1 true CA2553473A1 (en) 2007-01-26

Family

ID=37682462

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002553473A CA2553473A1 (en) 2005-07-26 2006-07-25 Generating a depth map from a two-dimensional source image for stereoscopic and multiview imaging

Country Status (2)

Country Link
US (2) US8384763B2 (en)
CA (1) CA2553473A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8736669B2 (en) 2007-12-27 2014-05-27 Sterrix Technologies Ug Method and device for real-time multi-view production
CN106091984A (en) * 2016-06-06 2016-11-09 中国人民解放军信息工程大学 A kind of three dimensional point cloud acquisition methods based on line laser
CN111595337A (en) * 2020-04-13 2020-08-28 宁波深寻信息科技有限公司 Inertial positioning self-correction method based on visual modeling

Families Citing this family (297)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9286941B2 (en) 2001-05-04 2016-03-15 Legend3D, Inc. Image sequence enhancement and motion picture project management system
JP2005100176A (en) * 2003-09-25 2005-04-14 Sony Corp Image processor and its method
RU2411690C2 (en) * 2005-12-02 2011-02-10 Конинклейке Филипс Электроникс Н.В. Method and device for displaying stereoscopic images, method of generating 3d image data from input 2d image data, and device for generating 3d image data from input 2d image data
GB0613352D0 (en) * 2006-07-05 2006-08-16 Ashbey James A Improvements in stereoscopic imaging systems
TWI314832B (en) * 2006-10-03 2009-09-11 Univ Nat Taiwan Single lens auto focus system for stereo image generation and method thereof
JP2010507822A (en) * 2006-10-26 2010-03-11 シーリアル テクノロジーズ ソシエテ アノニム Content generation system
US8330801B2 (en) * 2006-12-22 2012-12-11 Qualcomm Incorporated Complexity-adaptive 2D-to-3D video sequence conversion
GB2445982A (en) * 2007-01-24 2008-07-30 Sharp Kk Image data processing method and apparatus for a multiview display device
KR100866491B1 (en) * 2007-01-30 2008-11-03 삼성전자주식회사 Image processing method and apparatus
BRPI0721462A2 (en) * 2007-03-23 2013-01-08 Thomson Licensing 2d image region classification system and method for 2d to 3d conversion
US8213711B2 (en) * 2007-04-03 2012-07-03 Her Majesty The Queen In Right Of Canada As Represented By The Minister Of Industry, Through The Communications Research Centre Canada Method and graphical user interface for modifying depth maps
CA2627999C (en) * 2007-04-03 2011-11-15 Her Majesty The Queen In Right Of Canada, As Represented By The Minister Of Industry Through The Communications Research Centre Canada Generation of a depth map from a monoscopic color image for rendering stereoscopic still and video images
US7920148B2 (en) * 2007-04-10 2011-04-05 Vivante Corporation Post-rendering anti-aliasing with a smoothing filter
RU2487488C2 (en) * 2007-06-26 2013-07-10 Конинклейке Филипс Электроникс Н.В. Method and system for encoding three-dimensional video signal, encapsulated three-dimensional video signal, method and system for three-dimensional video decoder
WO2009011492A1 (en) * 2007-07-13 2009-01-22 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding stereoscopic image format including both information of base view image and information of additional view image
US8086060B1 (en) * 2007-10-11 2011-12-27 Adobe Systems Incorporated Systems and methods for three-dimensional enhancement of two-dimensional images
KR101327794B1 (en) * 2007-10-23 2013-11-11 삼성전자주식회사 Method and apparatus for obtaining depth information
US8351685B2 (en) * 2007-11-16 2013-01-08 Gwangju Institute Of Science And Technology Device and method for estimating depth map, and method for generating intermediate image and method for encoding multi-view video using the same
US20120269458A1 (en) * 2007-12-11 2012-10-25 Graziosi Danillo B Method for Generating High Resolution Depth Images from Low Resolution Depth Images Using Edge Layers
CN101933335B (en) * 2008-01-29 2012-09-05 汤姆森特许公司 Method and system for converting 2d image data to stereoscopic image data
KR101420684B1 (en) * 2008-02-13 2014-07-21 삼성전자주식회사 Apparatus and method for matching color image and depth image
US9094675B2 (en) * 2008-02-29 2015-07-28 Disney Enterprises Inc. Processing image data from multiple cameras for motion pictures
US20090257676A1 (en) * 2008-04-09 2009-10-15 Masafumi Naka Systems and methods for picture edge enhancement
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US8866920B2 (en) 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
EP4336447A1 (en) 2008-05-20 2024-03-13 FotoNation Limited Capturing and processing of images using monolithic camera array with heterogeneous imagers
KR20110015452A (en) * 2008-06-06 2011-02-15 리얼디 인크. Blur enhancement of stereoscopic images
KR20100002032A (en) * 2008-06-24 2010-01-06 삼성전자주식회사 Image generating method, image processing method, and apparatus thereof
JP5569635B2 (en) * 2008-08-06 2014-08-13 ソニー株式会社 Image processing apparatus, image processing method, and program
JP5347717B2 (en) * 2008-08-06 2013-11-20 ソニー株式会社 Image processing apparatus, image processing method, and program
WO2010018880A1 (en) * 2008-08-11 2010-02-18 Postech Academy-Industry Foundation Apparatus and method for depth estimation from single image in real time
US20110115790A1 (en) * 2008-08-26 2011-05-19 Enhanced Chip Technology Inc Apparatus and method for converting 2d image signals into 3d image signals
KR20110063778A (en) * 2008-08-29 2011-06-14 톰슨 라이센싱 View synthesis with heuristic view blending
KR101497503B1 (en) * 2008-09-25 2015-03-04 삼성전자주식회사 Method and apparatus for generating depth map for conversion two dimensional image to three dimensional image
JP2012507181A (en) * 2008-10-28 2012-03-22 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Generation of occlusion data for image characteristics
US8233664B2 (en) * 2008-11-12 2012-07-31 Eastman Kodak Company Determining relative depth of points in multiple videos
KR101506926B1 (en) * 2008-12-04 2015-03-30 삼성전자주식회사 Method and appratus for estimating depth, and method and apparatus for converting 2d video to 3d video
US8248410B2 (en) * 2008-12-09 2012-08-21 Seiko Epson Corporation Synthesizing detailed depth maps from images
KR20100080704A (en) * 2009-01-02 2010-07-12 삼성전자주식회사 Method and apparatus for obtaining image data
KR101526866B1 (en) * 2009-01-21 2015-06-10 삼성전자주식회사 Method of filtering depth noise using depth information and apparatus for enabling the method
JP5792632B2 (en) * 2009-01-30 2015-10-14 トムソン ライセンシングThomson Licensing Depth map encoding
WO2010093351A1 (en) 2009-02-13 2010-08-19 Thomson Licensing Depth map coding to reduce rendered distortion
WO2010126613A2 (en) 2009-05-01 2010-11-04 Thomson Licensing Inter-layer dependency information for 3dv
US8170288B2 (en) * 2009-05-11 2012-05-01 Saudi Arabian Oil Company Reducing noise in 3D seismic data while preserving structural details
US9524700B2 (en) 2009-05-14 2016-12-20 Pure Depth Limited Method and system for displaying images of various formats on a single display
US8526754B2 (en) * 2009-05-28 2013-09-03 Aptina Imaging Corporation System for enhancing depth of field with digital image processing
US9124874B2 (en) 2009-06-05 2015-09-01 Qualcomm Incorporated Encoding of three-dimensional conversion information with two-dimensional video sequence
KR101590763B1 (en) * 2009-06-10 2016-02-02 삼성전자주식회사 Apparatus and method for generating 3d image using area extension of depth map object
KR20100135032A (en) * 2009-06-16 2010-12-24 삼성전자주식회사 Conversion device for two dimensional image to three dimensional image and method thereof
US9148673B2 (en) * 2009-06-25 2015-09-29 Thomson Licensing Depth map coding
CN101945295B (en) * 2009-07-06 2014-12-24 三星电子株式会社 Method and device for generating depth maps
US8553972B2 (en) * 2009-07-06 2013-10-08 Samsung Electronics Co., Ltd. Apparatus, method and computer-readable medium generating depth map
US8928682B2 (en) * 2009-07-07 2015-01-06 Pure Depth Limited Method and system of processing images for improved display
WO2011014419A1 (en) * 2009-07-31 2011-02-03 3Dmedia Corporation Methods, systems, and computer-readable storage media for creating three-dimensional (3d) images of a scene
US20110025830A1 (en) * 2009-07-31 2011-02-03 3Dmedia Corporation Methods, systems, and computer-readable storage media for generating stereoscopic content via depth map creation
US9380292B2 (en) 2009-07-31 2016-06-28 3Dmedia Corporation Methods, systems, and computer-readable storage media for generating three-dimensional (3D) images of a scene
WO2011014420A1 (en) * 2009-07-31 2011-02-03 3Dmedia Corporation Methods, systems, and computer-readable storage media for selecting image capture positions to generate three-dimensional (3d) images
US8917956B1 (en) * 2009-08-12 2014-12-23 Hewlett-Packard Development Company, L.P. Enhancing spatial resolution of an image
US8624959B1 (en) * 2009-09-11 2014-01-07 The Boeing Company Stereo video movies
JP5521913B2 (en) * 2009-10-28 2014-06-18 ソニー株式会社 Image processing apparatus, image processing method, and program
WO2011060579A1 (en) * 2009-11-18 2011-05-26 Industrial Technology Research Institute Method for generating depth maps from monocular images and systems using the same
EP2502115A4 (en) 2009-11-20 2013-11-06 Pelican Imaging Corp Capturing and processing of images using monolithic camera array with heterogeneous imagers
TWI398158B (en) * 2009-12-01 2013-06-01 Ind Tech Res Inst Method for generating the depth of a stereo image
JP5387377B2 (en) * 2009-12-14 2014-01-15 ソニー株式会社 Image processing apparatus, image processing method, and program
WO2011081646A1 (en) * 2009-12-15 2011-07-07 Thomson Licensing Stereo-image quality and disparity/depth indications
KR101281961B1 (en) * 2009-12-21 2013-07-03 한국전자통신연구원 Method and apparatus for editing depth video
KR101637491B1 (en) * 2009-12-30 2016-07-08 삼성전자주식회사 Method and apparatus for generating 3D image data
US20110216065A1 (en) * 2009-12-31 2011-09-08 Industrial Technology Research Institute Method and System for Rendering Multi-View Image
TWI387934B (en) * 2009-12-31 2013-03-01 Ind Tech Res Inst Method and system for rendering multi-view image
WO2011087289A2 (en) * 2010-01-13 2011-07-21 Samsung Electronics Co., Ltd. Method and system for rendering three dimensional views of a scene
KR101647408B1 (en) * 2010-02-03 2016-08-10 삼성전자주식회사 Apparatus and method for image processing
US9398289B2 (en) * 2010-02-09 2016-07-19 Samsung Electronics Co., Ltd. Method and apparatus for converting an overlay area into a 3D image
EP2537332A1 (en) * 2010-02-19 2012-12-26 Dual Aperture, Inc. Processing multi-aperture image data
US9495751B2 (en) 2010-02-19 2016-11-15 Dual Aperture International Co. Ltd. Processing multi-aperture image data
US8669979B2 (en) 2010-04-01 2014-03-11 Intel Corporation Multi-core processor supporting real-time 3D image rendering on an autostereoscopic display
TWI439960B (en) 2010-04-07 2014-06-01 Apple Inc Avatar editing environment
US20120012748A1 (en) 2010-05-12 2012-01-19 Pelican Imaging Corporation Architectures for imager arrays and array cameras
JP2012003233A (en) * 2010-05-17 2012-01-05 Sony Corp Image processing device, image processing method and program
US20110304618A1 (en) * 2010-06-14 2011-12-15 Qualcomm Incorporated Calculating disparity for three-dimensional images
KR20120003147A (en) * 2010-07-02 2012-01-10 삼성전자주식회사 Depth map coding and decoding apparatus using loop-filter
US8774267B2 (en) * 2010-07-07 2014-07-08 Spinella Ip Holdings, Inc. System and method for transmission, processing, and rendering of stereoscopic and multi-view images
KR20120007289A (en) * 2010-07-14 2012-01-20 삼성전자주식회사 Display apparatus and method for setting depth feeling thereof
US9344701B2 (en) 2010-07-23 2016-05-17 3Dmedia Corporation Methods, systems, and computer-readable storage media for identifying a rough depth map in a scene and for determining a stereo-base distance for three-dimensional (3D) content creation
WO2012014009A1 (en) * 2010-07-26 2012-02-02 City University Of Hong Kong Method for generating multi-view images from single image
US9571811B2 (en) 2010-07-28 2017-02-14 S.I.Sv.El. Societa' Italiana Per Lo Sviluppo Dell'elettronica S.P.A. Method and device for multiplexing and demultiplexing composite images relating to a three-dimensional content
IT1401367B1 (en) * 2010-07-28 2013-07-18 Sisvel Technology Srl METHOD TO COMBINE REFERENCE IMAGES TO A THREE-DIMENSIONAL CONTENT.
US10134150B2 (en) * 2010-08-10 2018-11-20 Monotype Imaging Inc. Displaying graphics in multi-view scenes
US9165367B2 (en) 2010-09-02 2015-10-20 Samsung Electronics Co., Ltd. Depth estimation system for two-dimensional images and method of operation thereof
US9883161B2 (en) 2010-09-14 2018-01-30 Thomson Licensing Compression methods and apparatus for occlusion data
US20120069038A1 (en) * 2010-09-20 2012-03-22 Himax Media Solutions, Inc. Image Processing Method and Image Display System Utilizing the Same
US8760517B2 (en) * 2010-09-27 2014-06-24 Apple Inc. Polarized images for security
US9035939B2 (en) * 2010-10-04 2015-05-19 Qualcomm Incorporated 3D video control system to adjust 3D video rendering based on user preferences
US8902283B2 (en) * 2010-10-07 2014-12-02 Sony Corporation Method and apparatus for converting a two-dimensional image into a three-dimensional stereoscopic image
US9305398B2 (en) * 2010-10-08 2016-04-05 City University Of Hong Kong Methods for creating and displaying two and three dimensional images on a digital canvas
US9628755B2 (en) * 2010-10-14 2017-04-18 Microsoft Technology Licensing, Llc Automatically tracking user movement in a video chat application
US20130211803A1 (en) * 2010-10-18 2013-08-15 Thomson Licensing Method and device for automatic prediction of a value associated with a data tuple
US9338426B2 (en) * 2010-10-27 2016-05-10 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional image processing apparatus, three-dimensional imaging apparatus, and three-dimensional image processing method
TWI492186B (en) * 2010-11-03 2015-07-11 Ind Tech Res Inst Apparatus and method for inpainting three-dimensional stereoscopic image
WO2012061549A2 (en) 2010-11-03 2012-05-10 3Dmedia Corporation Methods, systems, and computer program products for creating three-dimensional video sequences
US9865083B2 (en) 2010-11-03 2018-01-09 Industrial Technology Research Institute Apparatus and method for inpainting three-dimensional stereoscopic image
US9171372B2 (en) 2010-11-23 2015-10-27 Qualcomm Incorporated Depth estimation based on global motion
US9123115B2 (en) * 2010-11-23 2015-09-01 Qualcomm Incorporated Depth estimation based on global motion and optical flow
CN102026012B (en) * 2010-11-26 2012-11-14 清华大学 Generation method and device of depth map through three-dimensional conversion to planar video
US8670630B1 (en) 2010-12-09 2014-03-11 Google Inc. Fast randomized multi-scale energy minimization for image processing
US8878950B2 (en) 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
JP5655550B2 (en) * 2010-12-22 2015-01-21 ソニー株式会社 Image processing apparatus, image processing method, and program
US8682107B2 (en) * 2010-12-22 2014-03-25 Electronics And Telecommunications Research Institute Apparatus and method for creating 3D content for oriental painting
US8773427B2 (en) * 2010-12-22 2014-07-08 Sony Corporation Method and apparatus for multiview image generation using depth map information
US10200671B2 (en) 2010-12-27 2019-02-05 3Dmedia Corporation Primary and auxiliary image capture devices for image processing and related methods
US8274552B2 (en) 2010-12-27 2012-09-25 3Dmedia Corporation Primary and auxiliary image capture devices for image processing and related methods
US9041774B2 (en) * 2011-01-07 2015-05-26 Sony Computer Entertainment America, LLC Dynamic adjustment of predetermined three-dimensional video settings based on scene content
WO2012098608A1 (en) * 2011-01-17 2012-07-26 パナソニック株式会社 Three-dimensional image processing device, three-dimensional image processing method, and program
US9282321B2 (en) 2011-02-17 2016-03-08 Legend3D, Inc. 3D model multi-reviewer system
US9288476B2 (en) 2011-02-17 2016-03-15 Legend3D, Inc. System and method for real-time depth modification of stereo images of a virtual reality environment
US9407904B2 (en) 2013-05-01 2016-08-02 Legend3D, Inc. Method for creating 3D virtual reality from 2D images
JP2012186781A (en) * 2011-02-18 2012-09-27 Sony Corp Image processing device and image processing method
RU2013137458A (en) * 2011-02-18 2015-02-20 Сони Корпорейшн IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
TWI419078B (en) * 2011-03-25 2013-12-11 Univ Chung Hua Apparatus for generating a real-time stereoscopic image and method thereof
US8824821B2 (en) * 2011-03-28 2014-09-02 Sony Corporation Method and apparatus for performing user inspired visual effects rendering on an image
JPWO2012131895A1 (en) * 2011-03-29 2014-07-24 株式会社東芝 Image coding apparatus, method and program, image decoding apparatus, method and program
EP2695027B1 (en) * 2011-04-06 2015-08-12 Koninklijke Philips N.V. Safety in dynamic 3d healthcare environment
JP5732986B2 (en) * 2011-04-08 2015-06-10 ソニー株式会社 Image processing apparatus, image processing method, and program
TWI467516B (en) * 2011-04-26 2015-01-01 Univ Nat Cheng Kung Method for color feature extraction
CN107404609B (en) 2011-05-11 2020-02-11 快图有限公司 Method for transferring image data of array camera
TR201104918A2 (en) 2011-05-20 2012-12-21 Vestel Elektroni̇k Sanayi̇ Ve Ti̇caret A.Ş. Method and device for creating depth map and 3D video.
JP2012253713A (en) 2011-06-07 2012-12-20 Sony Corp Image processing device, method for controlling image processing device, and program for causing computer to execute the method
US20120313932A1 (en) * 2011-06-10 2012-12-13 Samsung Electronics Co., Ltd. Image processing method and apparatus
JP5824896B2 (en) * 2011-06-17 2015-12-02 ソニー株式会社 Image processing apparatus and method, and program
US8743180B2 (en) 2011-06-28 2014-06-03 Cyberlink Corp. Systems and methods for generating a depth map and converting two-dimensional data to stereoscopic data
KR20140045458A (en) 2011-06-28 2014-04-16 펠리칸 이매징 코포레이션 Optical arrangements for use with an array camera
US20130265459A1 (en) 2011-06-28 2013-10-10 Pelican Imaging Corporation Optical arrangements for use with an array camera
RU2660613C1 (en) 2011-06-30 2018-07-06 Самсунг Электроникс Ко., Лтд. Method of video coding with adjustment of the bit depth for conversion with a fixed wrap and a device for the same, and also a method of a video decoding and a device for the same
JP2013026808A (en) * 2011-07-21 2013-02-04 Sony Corp Image processing apparatus, image processing method, and program
CN102903143A (en) * 2011-07-27 2013-01-30 国际商业机器公司 Method and system for converting two-dimensional image into three-dimensional image
US8553997B1 (en) * 2011-08-16 2013-10-08 Google Inc. Depthmap compression
US9787980B2 (en) * 2011-08-17 2017-10-10 Telefonaktiebolaget Lm Ericsson (Publ) Auxiliary information map upsampling
JP5968107B2 (en) * 2011-09-01 2016-08-10 キヤノン株式会社 Image processing method, image processing apparatus, and program
WO2013043751A1 (en) 2011-09-19 2013-03-28 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super resolution processing using pixel apertures
CN104081414B (en) 2011-09-28 2017-08-01 Fotonation开曼有限公司 System and method for coding and decoding light field image file
US9451232B2 (en) 2011-09-29 2016-09-20 Dolby Laboratories Licensing Corporation Representation and coding of multi-view images using tapestry encoding
TWI540538B (en) * 2011-10-27 2016-07-01 晨星半導體股份有限公司 Process method for processing a pair of stereo images
US9471988B2 (en) 2011-11-02 2016-10-18 Google Inc. Depth-map generation for an input image using an example approximate depth-map associated with an example similar image
CN103096100B (en) * 2011-11-04 2015-09-09 联咏科技股份有限公司 3-dimensional image processing method and the three-dimensional image display apparatus applying it
US9661307B1 (en) * 2011-11-15 2017-05-23 Google Inc. Depth map generation using motion cues for conversion of monoscopic visual content to stereoscopic 3D
TW201325200A (en) * 2011-12-02 2013-06-16 Ind Tech Res Inst Computer program product, computer readable medium, compression method and apparatus of depth map in 3D video
WO2013085513A1 (en) * 2011-12-07 2013-06-13 Intel Corporation Graphics rendering technique for autostereoscopic three dimensional display
WO2013086601A1 (en) * 2011-12-12 2013-06-20 The University Of British Columbia System and method for determining a depth map sequence for a two-dimensional video sequence
TWI489418B (en) * 2011-12-30 2015-06-21 Nat Univ Chung Cheng Parallax Estimation Depth Generation
US9137519B1 (en) 2012-01-04 2015-09-15 Google Inc. Generation of a stereo video from a mono video
EP2801198B1 (en) 2012-01-04 2023-10-11 InterDigital Madison Patent Holdings, SAS Processing 3d image sequences
KR20130081569A (en) * 2012-01-09 2013-07-17 삼성전자주식회사 Apparatus and method for outputting 3d image
US8824778B2 (en) 2012-01-13 2014-09-02 Cyberlink Corp. Systems and methods for depth map generation
WO2013109252A1 (en) * 2012-01-17 2013-07-25 Thomson Licensing Generating an image for another view
US20130202194A1 (en) * 2012-02-05 2013-08-08 Danillo Bracco Graziosi Method for generating high resolution depth images from low resolution depth images using edge information
US9111350B1 (en) 2012-02-10 2015-08-18 Google Inc. Conversion of monoscopic visual content to stereoscopic 3D
JP2013172190A (en) * 2012-02-17 2013-09-02 Sony Corp Image processing device and image processing method and program
EP2817955B1 (en) 2012-02-21 2018-04-11 FotoNation Cayman Limited Systems and methods for the manipulation of captured light field image data
US20130287289A1 (en) * 2012-04-25 2013-10-31 Dong Tian Synthetic Reference Picture Generation
US9210392B2 (en) 2012-05-01 2015-12-08 Pelican Imaging Corporation Camera modules patterned with pi filter groups
WO2013173749A1 (en) * 2012-05-17 2013-11-21 The Regents Of The University Of California Sampling-based multi-lateral filter method for depth map enhancement and codec
US20130329985A1 (en) * 2012-06-07 2013-12-12 Microsoft Corporation Generating a three-dimensional image
ES2653841T3 (en) * 2012-06-20 2018-02-09 Vestel Elektronik Sanayi Ve Ticaret A.S. Procedure and device to determine a depth image
WO2014005123A1 (en) 2012-06-28 2014-01-03 Pelican Imaging Corporation Systems and methods for detecting defective camera arrays, optic arrays, and sensors
US20140002441A1 (en) * 2012-06-29 2014-01-02 Hong Kong Applied Science and Technology Research Institute Company Limited Temporally consistent depth estimation from binocular videos
US20140002674A1 (en) 2012-06-30 2014-01-02 Pelican Imaging Corporation Systems and Methods for Manufacturing Camera Modules Using Active Alignment of Lens Stack Arrays and Sensors
EP3869797B1 (en) 2012-08-21 2023-07-19 Adeia Imaging LLC Method for depth detection in images captured using array cameras
WO2014032020A2 (en) 2012-08-23 2014-02-27 Pelican Imaging Corporation Feature based high resolution motion estimation from low resolution images captured using an array source
CN102802020B (en) * 2012-08-31 2015-08-12 清华大学 The method and apparatus of monitoring parallax information of binocular stereoscopic video
US9214013B2 (en) 2012-09-14 2015-12-15 Pelican Imaging Corporation Systems and methods for correcting user identified artifacts in light field images
CN104685860A (en) 2012-09-28 2015-06-03 派力肯影像公司 Generating images from light fields utilizing virtual viewpoints
US9098911B2 (en) 2012-11-01 2015-08-04 Google Inc. Depth map generation from a monoscopic image based on combined depth cues
WO2014078443A1 (en) 2012-11-13 2014-05-22 Pelican Imaging Corporation Systems and methods for array camera focal plane control
TW201421972A (en) * 2012-11-23 2014-06-01 Ind Tech Res Inst Method and system for encoding 3D video
US9547937B2 (en) 2012-11-30 2017-01-17 Legend3D, Inc. Three-dimensional annotation system and method
EP2747028B1 (en) 2012-12-18 2015-08-19 Universitat Pompeu Fabra Method for recovering a relative depth map from a single image or a sequence of still images
US9292927B2 (en) * 2012-12-27 2016-03-22 Intel Corporation Adaptive support windows for stereoscopic image correlation
CN103974055B (en) * 2013-02-06 2016-06-08 城市图像科技有限公司 3D photo generation system and method
US9462164B2 (en) 2013-02-21 2016-10-04 Pelican Imaging Corporation Systems and methods for generating compressed light field representation data using captured light fields, array geometry, and parallax information
WO2014133974A1 (en) 2013-02-24 2014-09-04 Pelican Imaging Corporation Thin form computational and modular array cameras
US9638883B1 (en) 2013-03-04 2017-05-02 Fotonation Cayman Limited Passive alignment of array camera modules constructed from lens stack arrays and sensors based upon alignment information obtained during manufacture of array camera modules using an active alignment process
US9917998B2 (en) 2013-03-08 2018-03-13 Fotonation Cayman Limited Systems and methods for measuring scene information while capturing images using array cameras
US8866912B2 (en) 2013-03-10 2014-10-21 Pelican Imaging Corporation System and methods for calibration of an array camera using a single captured image
US9521416B1 (en) 2013-03-11 2016-12-13 Kip Peli P1 Lp Systems and methods for image data compression
US9888194B2 (en) 2013-03-13 2018-02-06 Fotonation Cayman Limited Array camera architecture implementing quantum film image sensors
WO2014164550A2 (en) 2013-03-13 2014-10-09 Pelican Imaging Corporation System and methods for calibration of an array camera
US9519972B2 (en) 2013-03-13 2016-12-13 Kip Peli P1 Lp Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
US9106784B2 (en) 2013-03-13 2015-08-11 Pelican Imaging Corporation Systems and methods for controlling aliasing in images captured by an array camera for use in super-resolution processing
US9100586B2 (en) 2013-03-14 2015-08-04 Pelican Imaging Corporation Systems and methods for photometric normalization in array cameras
WO2014159779A1 (en) 2013-03-14 2014-10-02 Pelican Imaging Corporation Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US9497429B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Extended color processing on pelican array cameras
US10122993B2 (en) 2013-03-15 2018-11-06 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US9445003B1 (en) 2013-03-15 2016-09-13 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US9674498B1 (en) 2013-03-15 2017-06-06 Google Inc. Detecting suitability for converting monoscopic visual content to stereoscopic 3D
US9633442B2 (en) 2013-03-15 2017-04-25 Fotonation Cayman Limited Array cameras including an array camera module augmented with a separate camera
US9497370B2 (en) 2013-03-15 2016-11-15 Pelican Imaging Corporation Array camera architecture implementing quantum dot color filters
US9438888B2 (en) 2013-03-15 2016-09-06 Pelican Imaging Corporation Systems and methods for stereo imaging with camera arrays
US9438878B2 (en) 2013-05-01 2016-09-06 Legend3D, Inc. Method of converting 2D video to 3D video using 3D object models
US9786062B2 (en) * 2013-05-06 2017-10-10 Disney Enterprises, Inc. Scene reconstruction from high spatio-angular resolution light fields
US20140363097A1 (en) * 2013-06-06 2014-12-11 Etron Technology, Inc. Image capture system and operation method thereof
CA2820305A1 (en) 2013-07-04 2015-01-04 University Of New Brunswick Systems and methods for generating and displaying stereoscopic image pairs of geographical areas
US9866813B2 (en) 2013-07-05 2018-01-09 Dolby Laboratories Licensing Corporation Autostereo tapestry representation
US9373171B2 (en) 2013-07-22 2016-06-21 Stmicroelectronics S.R.L. Method for generating a depth map, related system and computer program product
RU2535183C1 (en) * 2013-07-25 2014-12-10 Федеральное государственное бюджетное образовательное учреждение высшего профессионального образования "Южно-Российский государственный университет экономики и сервиса" (ФГБОУ ВПО "ЮРГУЭС") Apparatus for processing depth map of stereo images
US9736449B1 (en) * 2013-08-12 2017-08-15 Google Inc. Conversion of 2D image to 3D video
WO2015048694A2 (en) 2013-09-27 2015-04-02 Pelican Imaging Corporation Systems and methods for depth-assisted perspective distortion correction
US9355468B2 (en) * 2013-09-27 2016-05-31 Nvidia Corporation System, method, and computer program product for joint color and depth encoding
TWI602144B (en) * 2013-10-02 2017-10-11 國立成功大學 Method, device and system for packing color frame and original depth frame
US20150103200A1 (en) * 2013-10-16 2015-04-16 Broadcom Corporation Heterogeneous mix of sensors and calibration thereof
US9426343B2 (en) 2013-11-07 2016-08-23 Pelican Imaging Corporation Array cameras incorporating independently aligned lens stacks
US10119808B2 (en) 2013-11-18 2018-11-06 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
EP3075140B1 (en) 2013-11-26 2018-06-13 FotoNation Cayman Limited Array camera configurations incorporating multiple constituent array cameras
US9336604B2 (en) 2014-02-08 2016-05-10 Honda Motor Co., Ltd. System and method for generating a depth map through iterative interpolation and warping
US10089740B2 (en) 2014-03-07 2018-10-02 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
US9407896B2 (en) 2014-03-24 2016-08-02 Hong Kong Applied Science and Technology Research Institute Company, Limited Multi-view synthesis in real-time with fallback to 2D from 3D to reduce flicker in low or unstable stereo-matching image regions
US9247117B2 (en) 2014-04-07 2016-01-26 Pelican Imaging Corporation Systems and methods for correcting for warpage of a sensor array in an array camera module by introducing warpage into a focal plane of a lens stack array
US9521319B2 (en) 2014-06-18 2016-12-13 Pelican Imaging Corporation Array cameras and array camera modules including spectral filters disposed outside of a constituent image sensor
US10074158B2 (en) 2014-07-08 2018-09-11 Qualcomm Incorporated Systems and methods for stereo depth estimation using global minimization and depth interpolation
JP6446217B2 (en) * 2014-09-24 2018-12-26 株式会社スクウェア・エニックス Image display program, image display method, and image display system
CN113256730B (en) 2014-09-29 2023-09-05 快图有限公司 System and method for dynamic calibration of an array camera
US9772405B2 (en) * 2014-10-06 2017-09-26 The Boeing Company Backfilling clouds of 3D coordinates
CN105574926B (en) * 2014-10-17 2018-05-11 华为技术有限公司 The method and apparatus for generating 3-D view
US10179407B2 (en) * 2014-11-16 2019-01-15 Robologics Ltd. Dynamic multi-sensor and multi-robot interface system
TWI558167B (en) 2014-12-30 2016-11-11 友達光電股份有限公司 3d image display system and display method
US20160255323A1 (en) 2015-02-26 2016-09-01 Dual Aperture International Co. Ltd. Multi-Aperture Depth Map Using Blur Kernels and Down-Sampling
US9942474B2 (en) 2015-04-17 2018-04-10 Fotonation Cayman Limited Systems and methods for performing high speed video capture and depth estimation using array cameras
GB2537831A (en) * 2015-04-24 2016-11-02 Univ Oxford Innovation Ltd Method of generating a 3D representation of an environment and related apparatus
CN106296800B (en) * 2015-06-11 2020-07-24 联想(北京)有限公司 Information processing method and electronic equipment
WO2017014693A1 (en) * 2015-07-21 2017-01-26 Heptagon Micro Optics Pte. Ltd. Generating a disparity map based on stereo images of a scene
US9609307B1 (en) 2015-09-17 2017-03-28 Legend3D, Inc. Method of converting 2D video to 3D video using machine learning
CN105825499A (en) * 2016-03-09 2016-08-03 京东方科技集团股份有限公司 Reference plane determination method and determination system
US10841491B2 (en) 2016-03-16 2020-11-17 Analog Devices, Inc. Reducing power consumption for time-of-flight depth imaging
US20190302963A1 (en) * 2016-06-01 2019-10-03 Carnegie Mellon University Hybrid depth and infrared image sensing and method for enhanced touch tracking on ordinary surfaces
US9854156B1 (en) 2016-06-12 2017-12-26 Apple Inc. User interface for camera effects
EP3459251B1 (en) * 2016-06-17 2021-12-22 Huawei Technologies Co., Ltd. Devices and methods for 3d video coding
CN108377379B (en) * 2016-10-20 2020-10-09 聚晶半导体股份有限公司 Image depth information optimization method and image processing device
TWI595771B (en) * 2016-10-20 2017-08-11 聚晶半導體股份有限公司 Optimization method of image depth information and image processing apparatus
US10451714B2 (en) 2016-12-06 2019-10-22 Sony Corporation Optical micromesh for computerized devices
US10536684B2 (en) 2016-12-07 2020-01-14 Sony Corporation Color noise reduction in 3D depth map
US10178370B2 (en) 2016-12-19 2019-01-08 Sony Corporation Using multiple cameras to stitch a consolidated 3D depth map
US10181089B2 (en) 2016-12-19 2019-01-15 Sony Corporation Using pattern recognition to reduce noise in a 3D map
CN106780705B (en) * 2016-12-20 2020-10-16 南阳师范学院 Depth map robust smooth filtering method suitable for DIBR preprocessing process
US10445861B2 (en) * 2017-02-14 2019-10-15 Qualcomm Incorporated Refinement of structured light depth maps using RGB color data
US10495735B2 (en) 2017-02-14 2019-12-03 Sony Corporation Using micro mirrors to improve the field of view of a 3D depth map
US10795022B2 (en) 2017-03-02 2020-10-06 Sony Corporation 3D depth map
US10979687B2 (en) 2017-04-03 2021-04-13 Sony Corporation Using super imposition to render a 3D depth map
DK180859B1 (en) 2017-06-04 2022-05-23 Apple Inc USER INTERFACE CAMERA EFFECTS
US10482618B2 (en) 2017-08-21 2019-11-19 Fotonation Limited Systems and methods for hybrid depth regularization
WO2019075473A1 (en) * 2017-10-15 2019-04-18 Analog Devices, Inc. Time-of-flight depth image processing systems and methods
CN107801015A (en) * 2017-10-19 2018-03-13 成都旭思特科技有限公司 Image processing method based on low pass filter
US10484667B2 (en) 2017-10-31 2019-11-19 Sony Corporation Generating 3D depth map using parallax
DK180212B1 (en) 2018-05-07 2020-08-19 Apple Inc USER INTERFACE FOR CREATING AVATAR
US10375313B1 (en) * 2018-05-07 2019-08-06 Apple Inc. Creative camera
US11722764B2 (en) 2018-05-07 2023-08-08 Apple Inc. Creative camera
US11582402B2 (en) * 2018-06-07 2023-02-14 Eys3D Microelectronics, Co. Image processing device
US10549186B2 (en) 2018-06-26 2020-02-04 Sony Interactive Entertainment Inc. Multipoint SLAM capture
DK201870623A1 (en) 2018-09-11 2020-04-15 Apple Inc. User interfaces for simulated depth effects
US11770601B2 (en) 2019-05-06 2023-09-26 Apple Inc. User interfaces for capturing and managing visual media
US10674072B1 (en) 2019-05-06 2020-06-02 Apple Inc. User interfaces for capturing and managing visual media
US11128792B2 (en) 2018-09-28 2021-09-21 Apple Inc. Capturing and displaying images with multiple focal planes
US11321857B2 (en) 2018-09-28 2022-05-03 Apple Inc. Displaying and editing images with depth information
US11107261B2 (en) 2019-01-18 2021-08-31 Apple Inc. Virtual avatar animation based on facial feature movement
US11706521B2 (en) 2019-05-06 2023-07-18 Apple Inc. User interfaces for capturing and managing visual media
JP7273250B2 (en) 2019-09-17 2023-05-12 ボストン ポーラリメトリックス,インコーポレイティド Systems and methods for surface modeling using polarization cues
DE112020004813B4 (en) 2019-10-07 2023-02-09 Boston Polarimetrics, Inc. System for expanding sensor systems and imaging systems with polarization
RU2730215C1 (en) * 2019-11-18 2020-08-20 федеральное государственное бюджетное образовательное учреждение высшего образования "Донской государственный технический университет" (ДГТУ) Device for image reconstruction with search for similar units based on a neural network
RU2716311C1 (en) * 2019-11-18 2020-03-12 федеральное государственное бюджетное образовательное учреждение высшего образования "Донской государственный технический университет" (ДГТУ) Device for reconstructing a depth map with searching for similar blocks based on a neural network
US11330246B2 (en) * 2019-11-21 2022-05-10 Microsoft Technology Licensing, Llc Imaging system configured to use time-of-flight imaging and stereo imaging
KR20230116068A (en) 2019-11-30 2023-08-03 보스턴 폴라리메트릭스, 인크. System and method for segmenting transparent objects using polarization signals
CN112991254A (en) * 2019-12-13 2021-06-18 上海肇观电子科技有限公司 Disparity estimation system, method, electronic device, and computer-readable storage medium
US11195303B2 (en) 2020-01-29 2021-12-07 Boston Polarimetrics, Inc. Systems and methods for characterizing object pose detection and measurement systems
KR20220133973A (en) 2020-01-30 2022-10-05 인트린식 이노베이션 엘엘씨 Systems and methods for synthesizing data to train statistical models for different imaging modalities, including polarized images
US20230107179A1 (en) * 2020-03-31 2023-04-06 Sony Group Corporation Information processing apparatus and method, as well as program
US11688073B2 (en) 2020-04-14 2023-06-27 Samsung Electronics Co., Ltd. Method and system for depth map reconstruction
DK202070625A1 (en) 2020-05-11 2022-01-04 Apple Inc User interfaces related to time
US11921998B2 (en) 2020-05-11 2024-03-05 Apple Inc. Editing features of an avatar
WO2021243088A1 (en) 2020-05-27 2021-12-02 Boston Polarimetrics, Inc. Multi-aperture polarization optical systems using beam splitters
US11039074B1 (en) 2020-06-01 2021-06-15 Apple Inc. User interfaces for managing media
US11410580B2 (en) 2020-08-20 2022-08-09 Facebook Technologies, Llc. Display non-uniformity correction
US11212449B1 (en) 2020-09-25 2021-12-28 Apple Inc. User interfaces for media capture and management
EP3975105A1 (en) * 2020-09-25 2022-03-30 Aptiv Technologies Limited Method and system for interpolation and method and system for determining a map of a surrounding of a vehicle
RU2750416C1 (en) * 2020-10-21 2021-06-28 федеральное государственное бюджетное образовательное учреждение высшего образования «Донской государственный технический университет» (ДГТУ) Image compression device based on pixel reconstruction method
US11733773B1 (en) 2020-12-29 2023-08-22 Meta Platforms Technologies, Llc Dynamic uniformity correction for boundary regions
CN112561793B (en) * 2021-01-18 2021-07-06 深圳市图南文化设计有限公司 Planar design space conversion method and system
US11615594B2 (en) 2021-01-21 2023-03-28 Samsung Electronics Co., Ltd. Systems and methods for reconstruction of dense depth maps
WO2022191373A1 (en) * 2021-03-11 2022-09-15 Samsung Electronics Co., Ltd. Electronic device and controlling method of electronic device
US11681363B2 (en) 2021-03-29 2023-06-20 Meta Platforms Technologies, Llc Waveguide correction map compression
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US11778339B2 (en) 2021-04-30 2023-10-03 Apple Inc. User interfaces for altering visual media
US11539876B2 (en) 2021-04-30 2022-12-27 Apple Inc. User interfaces for altering visual media
US11776190B2 (en) 2021-06-04 2023-10-03 Apple Inc. Techniques for managing an avatar on a lock screen
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers
WO2022036338A2 (en) * 2021-11-09 2022-02-17 Futurewei Technologies, Inc. System and methods for depth-aware video processing and depth perception enhancement
US11710212B1 (en) * 2022-01-21 2023-07-25 Meta Platforms Technologies, Llc Display non-uniformity correction
US11754846B2 (en) 2022-01-21 2023-09-12 Meta Platforms Technologies, Llc Display non-uniformity correction
WO2023195911A1 (en) * 2022-04-05 2023-10-12 Ams-Osram Asia Pacific Pte. Ltd. Calibration of depth map generating system
WO2024050105A1 (en) * 2022-09-01 2024-03-07 Apple Inc. Perspective correction with gravitational smoothing

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4925294A (en) 1986-12-17 1990-05-15 Geshwind David M Method to convert two dimensional motion pictures for three-dimensional systems
FR2735936B1 (en) 1995-06-22 1997-08-29 Allio Pierre METHOD FOR ACQUIRING SIMULATED AUTOSTEREOSCOPIC IMAGES
US5847710A (en) 1995-11-24 1998-12-08 Imax Corp. Method and apparatus for creating three dimensional drawings
US6333788B1 (en) * 1996-02-28 2001-12-25 Canon Kabushiki Kaisha Image processing apparatus and method
WO1999030280A1 (en) 1997-12-05 1999-06-17 Dynamic Digital Depth Research Pty. Ltd. Improved image conversion and encoding techniques
US6515659B1 (en) 1998-05-27 2003-02-04 In-Three, Inc. Method and system for creating realistic smooth three-dimensional depth contours from two-dimensional images
US6208348B1 (en) 1998-05-27 2001-03-27 In-Three, Inc. System and method for dimensionalization processing of images in consideration of a predetermined image projection format
US6573940B1 (en) * 1999-09-02 2003-06-03 Techwell, Inc Sample rate converters for video signals
CA2418800A1 (en) 2000-08-09 2002-02-14 Dynamic Digital Depth Research Pty Ltd. Image conversion and encoding techniques
US6990681B2 (en) 2001-08-09 2006-01-24 Sony Corporation Enhancing broadcast of an event with synthetic scene using a depth map
US7639838B2 (en) * 2002-08-30 2009-12-29 Jerry C Nims Multi-dimensional images system for digital image input and output
WO2004066212A2 (en) 2003-01-17 2004-08-05 Koninklijke Philips Electronics N.V. Full depth map acquisition
US7391895B2 (en) * 2003-07-24 2008-06-24 Carestream Health, Inc. Method of segmenting a radiographic image into diagnostically relevant and diagnostically irrelevant regions
KR101038452B1 (en) * 2003-08-05 2011-06-01 코닌클리케 필립스 일렉트로닉스 엔.브이. Multi-view image generation
US7015926B2 (en) 2004-06-28 2006-03-21 Microsoft Corporation System and process for generating a two-layer, 3D representation of a scene
US7664326B2 (en) * 2004-07-09 2010-02-16 Aloka Co., Ltd Method and apparatus of image processing to detect and enhance edges

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8736669B2 (en) 2007-12-27 2014-05-27 Sterrix Technologies Ug Method and device for real-time multi-view production
CN106091984A (en) * 2016-06-06 2016-11-09 中国人民解放军信息工程大学 A kind of three dimensional point cloud acquisition methods based on line laser
CN106091984B (en) * 2016-06-06 2019-01-25 中国人民解放军信息工程大学 A kind of three dimensional point cloud acquisition methods based on line laser
CN111595337A (en) * 2020-04-13 2020-08-28 宁波深寻信息科技有限公司 Inertial positioning self-correction method based on visual modeling
CN111595337B (en) * 2020-04-13 2023-09-26 浙江深寻科技有限公司 Inertial positioning self-correction method based on visual modeling

Also Published As

Publication number Publication date
US20070024614A1 (en) 2007-02-01
US20130009952A1 (en) 2013-01-10
US8384763B2 (en) 2013-02-26

Similar Documents

Publication Publication Date Title
CA2553473A1 (en) Generating a depth map from a tw0-dimensional source image for stereoscopic and multiview imaging
US10070115B2 (en) Methods for full parallax compressed light field synthesis utilizing depth information
Tam et al. 3D-TV content generation: 2D-to-3D conversion
CN102113015B (en) Use of inpainting techniques for image correction
Lee Nongeometric distortion smoothing approach for depth map preprocessing
CN103426163B (en) System and method for rendering affected pixels
Conze et al. Objective view synthesis quality assessment
US20160373715A1 (en) Multi view synthesis method and display devices with spatial and inter-view consistency
US20100002073A1 (en) Blur enhancement of stereoscopic images
Do et al. Quality improving techniques for free-viewpoint DIBR
Horng et al. Stereoscopic images generation with directional Gaussian filter
Ceulemans et al. Robust multiview synthesis for wide-baseline camera arrays
Liu et al. An enhanced depth map based rendering method with directional depth filter and image inpainting
Jantet et al. Joint projection filling method for occlusion handling in depth-image-based rendering
CN112529773B (en) QPD image post-processing method and QPD camera
Qiao et al. Color correction and depth-based hierarchical hole filling in free viewpoint generation
US20120170841A1 (en) Image processing apparatus and method
Mieloch et al. Graph-based multiview depth estimation using segmentation
Liu et al. Hole-filling based on disparity map and inpainting for depth-image-based rendering
Daribo et al. Bilateral depth-discontinuity filter for novel view synthesis
Gunnewiek et al. Coherent spatial and temporal occlusion generation
Do et al. Objective quality analysis for free-viewpoint DIBR
Wang et al. Depth image segmentation for improved virtual view image quality in 3-DTV
CN117061720B (en) Stereo image pair generation method based on monocular image and depth image rendering
Tran et al. On consistent inter-view synthesis for autostereoscopic displays

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued

Effective date: 20150522