PANORAMA BASED 3D VIDEO CODING
BACKGROUND
[0001] A video encoder compresses video information so that more information can be sent over a given bandwidth. The compressed signal may then be transmitted to a receiver that decodes or decompresses the signal prior to display.
[0002] 3D video has become an emerging medium that can offer a richer visual experience than traditional 2D video. Potential applications include free-viewpoint video (FVV), free-viewpoint television (FTV), 3D television (3DTV), IMAX theaters, immersive teleconferences, surveillance, etc. To support these applications, video systems typically capture a scene from different viewpoints, which results in generating several video sequences from different cameras simultaneously.
[0003] 3D Video Coding (3DVC) refers to a new video compression standard that targets serving a variety of 3D displays. 3DVC is under development by the ISO/IEC Moving Picture Experts Group (MPEG). At present, one of the branches of 3DVC is built based on the latest conventional video coding standard, High Efficiency Video Coding (HEVC), which is planned to be finalized by the end of 2012. The other branch of 3DVC is built based on H.264/AVC.
[0004] The ISO/IEC Moving Picture Experts Group (MPEG) is now undertaking the standardization of 3D Video Coding (3DVC). The new 3DVC standard will likely enable the generation of many high-quality views from a limited amount of input data. For example, a Multiview Video plus Depth (MVD) concept may be used to generate such high-quality views from a limited amount of input data. Further, 3DVC may be utilized for advanced stereoscopic processing functionality and to support auto-stereoscopic display and FTV that allows users to have a 3D visual experience while freely changing their position in front of a 3D display.
[0005] Generally, there are two main components of the Multiview Video plus Depth (MVD) concept that support the FTV functionality: multiview video and associated depth map information. Such multiview video typically refers to a scene being captured by many cameras and from different view positions. Such associated depth map information typically refers to each texture view being associated with a depth map that tells how far from the camera the objects in the scene are. From the multiview video and depth information, virtual views can be generated at an arbitrary viewing position.
[0006] The Multiview Video plus Depth (MVD) concept is often used to represent the 3D video content, in which a number of views and associated depth maps are typically coded and multiplexed into a bitstream. Camera parameters of each view are also typically packed into the bitstream for the purpose of view synthesis. One of the views, which is also typically referred to as the base view or the independent view, is typically coded independently of the other views. For the dependent views, video and depth can be predicted from the pictures of other views or previously coded pictures in the same view. According to the specific application, sub-bitstreams can be extracted at the decoder side by discarding non-required bitstream packets.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The material described herein is illustrated by way of example and not by way of limitation in the accompanying figures. For simplicity and clarity of illustration, elements illustrated in the figures are not necessarily drawn to scale. For example, the dimensions of some elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference labels have been repeated among the figures to indicate corresponding or analogous elements. In the figures:
[0008] FIG. 1 is an illustrative diagram of an example 3D video coding system;
[0009] FIG. 2 is an illustrative diagram of an example 3D video coding system;
[0010] FIG. 3 is a flow chart illustrating an example 3D video coding process;
[0011] FIG. 4 is an illustrative diagram of an example 3D video coding process in operation;
[0012] FIG. 5 is an illustrative diagram of an example panorama based 3D video coding flow;
[0013] FIG. 6 is an illustrative diagram of an example 3D video coding system;
[0014] FIG. 7 is an illustrative diagram of an example system; and
[0015] FIG. 8 is an illustrative diagram of an example system, all arranged in accordance with at least some implementations of the present disclosure.
DETAILED DESCRIPTION
[0016] One or more embodiments or implementations are now described with reference to the enclosed figures. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. Persons skilled in the relevant art will recognize that other configurations and arrangements may be employed without departing from the spirit and scope of the description. It will be apparent to those skilled in the relevant art that techniques and/or arrangements described herein may also be employed in a variety of other systems and applications other than what is described herein.
[0017] While the following description sets forth various implementations that may be manifested in architectures such as system-on-a-chip (SoC) architectures for example, implementation of the techniques and/or arrangements described herein is not restricted to particular architectures and/or computing systems and may be implemented by any architecture and/or computing system for similar purposes. For instance, various architectures employing, for example, multiple integrated circuit (IC) chips and/or packages, and/or various computing devices and/or consumer electronic (CE) devices such as set top boxes, smart phones, etc., may implement the techniques and/or arrangements described herein. Further, while the following description may set forth numerous specific details such as logic implementations, types and interrelationships of system components, logic partitioning/integration choices, etc., claimed subject matter may be practiced without such specific details. In other instances, some material such as, for example, control structures and full software instruction sequences, may not be shown in detail in order not to obscure the material disclosed herein.
[0018] The material disclosed herein may be implemented in hardware, firmware, software, or any combination thereof. The material disclosed herein may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any medium and/or mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others.
[0019] References in the specification to "one implementation", "an implementation", "an example implementation", etc., indicate that the implementation described may include a particular feature, structure, or characteristic, but every implementation may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same implementation. Further, when a particular feature, structure, or characteristic is described in connection with an implementation, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other implementations whether or not explicitly described herein.
[0020] Systems, apparatus, articles, and methods are described below including operations for panorama based 3D video coding.
[0021] As described above, in some cases, in conventional 3D video compression coding, two or three views and associated depth maps may be coded in a bitstream to support various 3D video applications. At the decoder side, virtual synthesized views at a certain view point can be generated by using depth image based rendering techniques. In order to be backward compatible with a conventional 2D video encoder/decoder, one view of the 3D video may be marked as an independent view, and it must be coded independently using a conventional 2D video encoder/decoder. In addition to the independent views, other views may be dependent views that allow not only inter-view prediction to exploit the inter-view redundancy, but also intra-view prediction to exploit the spatial and temporal redundancies in the same view. However, the huge amount of 3D video data greatly increases the required bandwidth in comparison with single view videos. Hence, 3D video data may need to be compressed more efficiently.
[0022] As will be described in greater detail below, operations for 3D video coding may utilize a panorama based 3D video coding method, which, in some embodiments, could be fully compatible with conventional 2D video coders. Instead of coding multiple view sequences and associated depth map sequences, only a panorama video sequence and a panorama map may be coded and transmitted. Moreover, any arbitrary field of view can be extracted from such a panorama sequence, and 3D video at any intermediate view point can be derived directly. Such panorama based 3D video coding may improve the coding efficiency and flexibility of 3D video coding systems.
[0023] FIG. 1 is an illustrative diagram of an example 3D video coding system 100, arranged in accordance with at least some implementations of the present disclosure. In the illustrated implementation, 3D video coding system 100 may include one or more types of displays (e.g., an N-view display 140, a stereo display 142, a 2D display 144, or the like), one or more imaging devices (not shown), a 3D video encoder 103, a 3D video decoder 105, a stereo video decoder 107, a 2D video decoder 109, and/or a bitstream extractor 110.
[0024] In some examples, 3D video coding system 100 may include additional items that have not been shown in FIG. 1 for the sake of clarity. For example, 3D video coding system 100 may include a processor, a radio frequency-type (RF) transceiver, and/or an antenna. Further, 3D video coding system 100 may include additional items such as a speaker, a microphone, an accelerometer, memory, a router, network interface logic, etc. that have not been shown in FIG. 1 for the sake of clarity.
[0025] As used herein, the term "coder" may refer to an encoder and/or a decoder. Similarly, as used herein, the term "coding" may refer to encoding via an encoder and/or decoding via a decoder. For example, 3D video encoder 103 and 3D video decoder 105 may both be examples of coders capable of 3D coding.
[0026] In some examples, a sender 102 may receive multiple views from multiple imaging devices (not shown). The input signal for 3D encoder 103 may include multiple views (e.g., video pictures 112 and 113), associated depth maps (e.g., depth maps 114 and 115), and corresponding camera parameters (not shown). However, 3D video coding system 100 can also be operated without depth data. The input component signals are coded into a bitstream using 3D video encoder 103, in which the base view may be coded using a 2D video encoder, e.g., an H.264/AVC encoder or High Efficiency Video Coding (HEVC) encoder. If the bitstream from bitstream extractor 110 is decoded by a 3D receiver 104 using 3D video decoder 105, videos (e.g., video pictures 116 and 117), depth data (e.g., depth maps 118 and 119), and/or camera parameters (not shown) may be reconstructed with the given fidelity.
[0027] In other examples, if the bitstream from bitstream extractor 110 is decoded by a stereo receiver 106 for displaying the 3D video on an auto-stereoscopic display (e.g., stereo display 142), additional intermediate views (e.g., two view pictures 120 and 121) may be generated by a depth-image-based rendering (DIBR) algorithm using the reconstructed views and depth data. If 3D video decoder 103 is connected to a conventional stereo display (e.g., stereo display 142), intermediate view synthesis 130 may also generate a pair of stereo views, in case such a pair is not actually present in the bitstream from bitstream extractor 110.
[0028] In further examples, if the bitstream from bitstream extractor 110 is decoded by a 2D receiver 108, one of the decoded views (e.g., independent view picture 122) or an intermediate view at an arbitrary virtual camera position can also be used for displaying a single view on a conventional 2D display (e.g., 2D display 144).
[0029] An example of a typical 3DV system for auto-stereoscopic display is shown in FIG. 1. The input signal for the encoder may consist of multiple texture views, associated multiple depth maps, and corresponding camera parameters. It should be noted that the input data could also be multiple texture views only. When the coded 3D video bitstream is received at the receiver side, the multiple texture views, associated multiple depth maps, and corresponding camera parameters can be fully reconstructed through the 3D video decoder. For displaying the 3D video on an auto-stereoscopic display, additional intermediate views are generated via the depth-image-based rendering (DIBR) technique using the reconstructed texture views and depth maps.
[0030] FIG. 2 is an illustrative diagram of an example 3D video coding system 200, arranged in accordance with at least some implementations of the present disclosure. In the illustrated implementation, 3D video coding system 200 may implement operations for panorama based 3D video coding.
[0031] As will be described in greater detail below, a panorama video 210 may contain the video content from video picture views 112-113, and the panorama video 210 can be generated by using image stitching algorithms via image stitching and panorama map generation module 207. Note that the video data of multiple video picture views 112-113 can be captured by either parallel camera arrays or arc camera arrays.
[0032] The panorama map 212 may contain a series of perspective projection matrices which map each raw image to a certain region in the panorama video 210, a projection matrix between camera views, and pixel correspondences (e.g., 6-7 pixel correspondences) between camera images. The inverse map may realize the map from panorama video 210 to the camera view (e.g., raw images or synthesized views). The panorama map 212 can be constructed via image stitching and panorama map generation module 207 by stable pixel point correspondences (e.g., 6-7 stable pixel points) between each of video picture views 112-113 and panorama video 210, and the camera internal/external parameters 201-202. In order to blend the images to compensate for exposure differences and other misalignments such as illumination changes and ghost phenomena, view blending techniques for the target region of the panorama may be performed when the region comes from several different raw images. The view blending could be put on either the sender side before the 2D video encoder 203 or the receiver side after the 2D video decoder 204, such as part of 3D warping techniques via 3D warping and/or view blending module 217. If the view blending is put on the sender side, the computing may be processed after the generation of panorama video 210 and before the 2D video encoder 203. On the other hand, if it is put on the receiver side, the computing will be processed after the generation of panorama video 210 and before the 3D warping via 3D warping and/or view blending module 217.
[0033] 3D video coding system 200 may encode the panorama video 210 using a typical 2D video encoder 203, such as MPEG-2, H.264/AVC, HEVC, etc., and the panorama map 212 could be coded and transmitted through MPEG-2 user data syntax, H.264/AVC SEI syntax, or HEVC SEI syntax.
[0034] At 3D receiver 104, the panorama video 210 and panorama map 212 can be fully reconstructed by the corresponding 2D video decoder 205. For example, an auto-stereoscopic video can be displayed on display 140, and user 202 may supply input indicating what viewpoint the user desires. In response to the indicated viewpoint, an arbitrary view video at any intermediate viewing position could be generated through 3D warping techniques via 3D warping and/or view blending module 217. As a consequence, an auto-stereoscopic video can be obtained. The random access of an arbitrary view within the input field of multiple views can be efficiently achieved by the panorama based 3D video coding of 3D video coding system 200.
[0035] As will be discussed in greater detail below, 3D video coding system 200 may be used to perform some or all of the various functions discussed below in connection with FIGS. 3 and/or 4.
[0036] FIG. 3 is a flow chart illustrating an example 3D video coding process 300, arranged in accordance with at least some implementations of the present disclosure. In the illustrated implementation, process 300 may include one or more operations, functions or actions as illustrated by one or more of blocks 302 and/or 304. By way of non-limiting example, process 300 will be described herein with reference to example 3D video coding system 200 of FIG. 2 and/or 6.
[0037] Process 300 may be utilized as a computer-implemented method for panorama based 3D video coding. Process 300 may begin at block 302, "DECODE PANORAMA VIDEO AND PANORAMA MAP GENERATED BASED AT LEAST IN PART ON MULTIPLE TEXTURE VIEWS AND CAMERA PARAMETERS", where panorama video and panorama maps may be decoded. For example, panorama video and panorama maps that were generated based at least in part on multiple texture views and camera parameters may be decoded via a 2D decoder (not illustrated).
[0038] Processing may continue from operation 302 to operation 304, "EXTRACT 3D VIDEO BASED AT LEAST IN PART ON THE GENERATED PANORAMA VIDEO", where 3D video may be extracted. For example, 3D video may be extracted based at least in part on the generated panorama video and the associated panorama map.
[0039] Some additional and/or alternative details related to process 300 may be illustrated in one or more examples of implementations discussed in greater detail below with regard to FIG. 4.
[0040] FIG. 4 is an illustrative diagram of example 3D video coding system 200 and 3D video coding process 400 in operation, arranged in accordance with at least some implementations of the present disclosure. In the illustrated implementation, process 400 may include one or more operations, functions or actions as illustrated by one or more of actions 412, 414, 416, 418, 420, 422, 424, 426, 428, 430, 432, 434, and/or 436. By way of non-limiting example, process 400 will be described herein with reference to example 3D video coding system 200 of FIG. 2 and/or 5.
[0041] In the illustrated implementation, 3D video coding system 200 may include logic modules 406, the like, and/or combinations thereof. For example, logic modules 406 may include panorama generation logic module 408, 3D video extraction logic module 410, the like, and/or combinations thereof. Although 3D video coding system 200, as shown in FIG. 4, may include one particular set of blocks or actions associated with particular modules, these blocks or actions may be associated with different modules than the particular module illustrated here.
[0042] Process 400 may begin at block 412, "DETERMINE PIXEL CORRESPONDENCE", where a pixel correspondence may be determined. For example, on a 2D encoder side, a pixel correspondence may be determined that is capable of mapping pixel coordinates from the multiple texture views via key point features.
[0043] In some examples, during pre-processing by using multiview video and camera parameters, the pixel correspondence (e.g., mathematical relationships) may be established. Such pixel correspondence may be estimated via the matching of key point features like Speeded Up Robust Features (SURF) or Scale-Invariant Feature Transform (SIFT), for example.
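The ratio-test step of such key point matching can be sketched as follows. This is a minimal illustration, not part of the described system: the descriptors below are random stand-ins for real SURF/SIFT output, and `match_descriptors` is a hypothetical helper name.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Lowe-style ratio-test matching between two descriptor sets.

    desc_a, desc_b: (N, D) and (M, D) arrays of key point descriptors
    (e.g., 64-D SURF or 128-D SIFT vectors). Returns a list of
    (index_in_a, index_in_b) correspondences.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every candidate.
        dists = np.linalg.norm(desc_b - d, axis=1)
        best, second = np.argsort(dists)[:2]
        # Accept only if the best match is clearly better than the runner-up,
        # which suppresses ambiguous correspondences between similar regions.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```

The accepted pairs would then seed the stable pixel point correspondences used to build the panorama map.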
[0044] Although process 400, as illustrated, is directed to decoding, the concepts and/or operations described may be applied in the same or similar manner to coding in general, including in encoding.
[0045] Processing may continue from operation 412 to operation 414, "ESTIMATE CAMERA EXTERNAL PARAMETERS", where camera external parameters may be estimated. The camera external parameters may include one or more of the following: a translation vector and a rotation matrix between multiple cameras, the like, and/or combinations thereof.
[0046] Processing may continue from operation 414 to operation 416, "DETERMINE PROJECTION MATRIX", where a projection matrix may be determined. For example, the projection matrix may be determined based at least in part on the camera external parameters and camera internal parameters.
[0047] In some examples, the projection matrix P may be established from the camera internal parameters (given a priori) and external parameters (e.g., rotation matrix R and translation vector t), as illustrated in the following equation: P = K[R, t], where K is the camera matrix, which contains the scaling factors and the optical center of the camera. The projection matrix may map from the 3D scene to the camera view (e.g., raw images).
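The equation above can be checked numerically. The sketch below assembles P = K[R, t] and projects a world point into pixel coordinates; the intrinsic values and the helper names `projection_matrix`/`project` are illustrative assumptions, not values from the described system.

```python
import numpy as np

def projection_matrix(K, R, t):
    """Assemble the 3x4 projection matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

def project(P, X):
    """Project 3D world point X through P, with the homogeneous divide."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative intrinsics: focal length 1000 px, optical center (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
# Camera at the world origin, looking down the z axis (R = I, t = 0).
P = projection_matrix(K, np.eye(3), np.zeros(3))
```

With this P, a point 2 m in front of the camera and 0.1 m to the right projects 50 pixels right of the optical center, as the equation predicts.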
[0048] Processing may continue from operation 416 to operation 418, "GENERATE THE PANORAMA VIDEO", where the panorama video may be generated. For example, the panorama video may be generated from the multiple texture views via an image stitching algorithm based at least in part on geometric mapping from the determined projection matrix and/or the determined pixel correspondence.
[0049] In some examples, the multiple texture views may be captured by various camera setup methods such as a parallel camera array, an arc camera array, the like, and/or combinations thereof. In such examples, the panorama video may be a cylindrical-type panorama or spherical-type panorama.
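For the cylindrical-type panorama mentioned above, the geometric mapping from a camera image onto the cylinder can be sketched as follows; the focal length, coordinate conventions, and the `to_cylindrical` name are illustrative assumptions rather than the system's actual mapping.

```python
import numpy as np

def to_cylindrical(x, y, f, cx, cy):
    """Map an image pixel (x, y) to cylindrical panorama coordinates.

    f is the focal length in pixels; (cx, cy) is the optical center.
    Returns (f * theta, f * h): the angle around the cylinder axis and
    the height on the cylinder, both scaled by f to stay in pixel units.
    """
    theta = np.arctan2(x - cx, f)          # horizontal angle of the viewing ray
    h = (y - cy) / np.hypot(x - cx, f)     # height, normalized by ray length
    return f * theta, f * h
```

Stitching then amounts to evaluating this mapping (composed with each view's projection matrix) for every source pixel; the optical center maps to the origin of the view's strip on the cylinder.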
[0050] Processing may continue from operation 418 to operation 420, "GENERATE THE ASSOCIATED PANORAMA MAP", where the associated panorama map may be generated. For example, the associated panorama map may be generated and may be capable of mapping pixel coordinates between the multiple texture views and the panorama video as a perspective projection from the multiple texture views to the panorama image.
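Applying one such perspective projection to pixel coordinates can be sketched as below. The 3x3 matrix here is an assumed representation of a single panorama-map entry (a planar homography), used only to illustrate the homogeneous mapping; the actual map format is not specified by this description.

```python
import numpy as np

def apply_homography(H, x, y):
    """Map pixel (x, y) through a 3x3 perspective matrix H.

    The point is lifted to homogeneous coordinates, transformed,
    and divided by the third component to return to pixel coordinates.
    """
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

The inverse map from panorama to camera view, mentioned in paragraph [0032], would simply use the matrix inverse of H.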
[0051] Processing may continue from operation 420 to operation 422, "ENCODE THE PANORAMA VIDEO AND THE ASSOCIATED PANORAMA MAP", where the panorama video and the associated panorama map may be encoded. For example, the panorama video and the associated panorama map may be encoded via a 2D encoder (not shown).
[0052] Processing may continue from operation 422 to operation 424, "DECODE THE PANORAMA VIDEO AND THE ASSOCIATED PANORAMA MAP", where the panorama video and the associated panorama map may be decoded. For example, the panorama video and the associated panorama map may be decoded via a 2D decoder (not shown).
[0053] In some examples, conventional 2D video encoder/decoder systems may be utilized to code the panorama video and panorama map. The generated panorama video could be coded with MPEG-2, H.264/AVC, HEVC, or another 2D video encoder, for example. Meanwhile, the generated panorama map may be coded and transmitted to the decoder through the MPEG-2 user data syntax, H.264/AVC SEI syntax table, or HEVC SEI syntax table, for example. Note that the panorama map may contain the projection matrix between camera views, pixel correspondences (e.g., 6-7) between camera images, and the perspective projection matrix from each raw image to the panorama video. In this case, the generated 3D bitstream may be compatible with conventional 2D video coding standards. Accordingly, 3D output may be presented to a user without requiring use of a 3D video encoder/decoder system.
[0054] Processing may continue from operation 424 to operation 426, "RECEIVE USER INPUT", where user input may be received. For example, a user may provide input regarding what portion of the panorama view is of interest. In some examples, at the receiver side, video at any arbitrary view position can be selectively decoded by a 2D video decoder. In some examples, such user input may indicate camera internal parameters like field of view, focal length, etc., and/or external parameters related to existing cameras in the original multi-view video, for instance, the rotation and translation relative to the first camera in the panorama.
[0055] Processing may continue from operation 426 to operation 428, "DETERMINE USER VIEW PREFERENCE", where the user view preference may be determined. For example, the user view preference may be determined as any arbitrary target view and an associated target region of the panorama video based at least in part on the user input. The user view preference may be defined via one or more of the following criteria: a view direction, a viewpoint position, and a field-of-view of a target view, the like, and/or combinations thereof.
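Assuming a cylindrical panorama whose width spans the full 360 degrees, turning a requested view direction and field-of-view into a target column range reduces to simple arithmetic. The sketch below is illustrative only; `target_columns` and its conventions are assumptions, not part of the described system.

```python
def target_columns(view_dir_deg, fov_deg, pano_width):
    """Column range of a cylindrical panorama covering the requested view.

    view_dir_deg: viewing direction in degrees (panorama origin at 0)
    fov_deg: horizontal field-of-view of the target view
    pano_width: panorama width in pixels, spanning the full 360 degrees
    Returns (left, right) pixel columns, wrapped modulo the panorama width.
    """
    px_per_deg = pano_width / 360.0
    center = view_dir_deg * px_per_deg
    half = (fov_deg / 2.0) * px_per_deg
    return (center - half) % pano_width, (center + half) % pano_width
```

The returned range identifies the target region of the panorama that the subsequent blending and warping operations act on.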
[0056] Processing may continue from operation 428 to operation 430, "SET UP VIRTUAL CAMERA", where a virtual camera may be set up. For example, a virtual camera may be set up based at least in part on a previous configuration of one or more of the following criteria: viewpoint position, field-of-view, and a determined view range in the panorama video.
[0057] Processing may continue from operation 430 to operation 432, "PERFORM VIEW BLENDING", where view blending may be performed. For example, view blending may be performed for the target region of the panorama video when the target region comes from more than a single texture view. In some examples such view blending occurs prior to warping, as illustrated here. Alternatively, such view blending may occur prior to encoding at operation 422.
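One common way to realize such blending in a region covered by two source views is linear feathering across the overlap. The sketch below assumes two already-aligned rows of pixels and is a simplification of the view blending described here, not the system's actual method.

```python
import numpy as np

def feather_blend(a, b):
    """Linearly cross-fade two aligned 1-D pixel rows over their overlap.

    The weight on `a` falls from 1 to 0 left to right, so the result
    transitions smoothly between the two source views, compensating for
    exposure differences where they disagree.
    """
    n = a.shape[0]
    w = np.linspace(1.0, 0.0, n)
    return w * a + (1.0 - w) * b
```

More elaborate schemes (e.g., multi-band blending) follow the same pattern with per-frequency weights.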
[0058] Processing may continue from operation 432 to operation 434, "WARP TO AN OUTPUT TEXTURE VIEW", where warping may be done to produce an output texture view. For example, the target region of the panorama video may be warped to an output texture view via 3D warping techniques based at least in part on camera parameters of the virtual camera and the associated panorama map.
[0059] Processing may continue from operation 434 to operation 436, "DETERMINE LEFT AND RIGHT VIEWS", where left and right views may be determined. For example, a left and right view may be determined for the 3D video based at least in part on the output texture view. Accordingly, to provide viewers with a realistic 3D scene perception at an arbitrary view point, such left view and right view may be derived and then shown to each eye simultaneously.
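The left/right derivation can be illustrated with a toy disparity-shift model: each eye's view shifts pixels by half the disparity in opposite directions. This is a deliberate simplification of the 3D warping described above; the per-pixel disparity input and the `stereo_pair` helper are assumptions for illustration only.

```python
import numpy as np

def stereo_pair(row, disparity):
    """Derive left/right rows by shifting each pixel by +/- disparity/2.

    row: 1-D array of pixel values from the output texture view;
    disparity: integer per-pixel shifts. Pixels shifted out of range are
    dropped; unfilled positions keep the fill value 0 (disocclusion holes).
    """
    n = row.shape[0]
    left = np.zeros_like(row)
    right = np.zeros_like(row)
    for i in range(n):
        d = disparity[i] // 2
        if 0 <= i + d < n:
            left[i + d] = row[i]
        if 0 <= i - d < n:
            right[i - d] = row[i]
    return left, right
```

In a full system, the holes left at disoccluded positions would be filled by inpainting or by blending contributions from neighboring views.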
[0060] The 3D video may be displayed at the user view preference based at least in part on the determined left and right views via a 3D display (not shown).
[0061] Additionally or alternatively, inter-picture prediction of other panorama video may be performed based at least in part on the output texture view, as will be described in greater detail below with reference to FIG. 5. For example, a modified 2D video coder may decompose the coded panorama video into multiple view pictures, and then the decomposed multiple view pictures could be inserted into a reference buffer for the inter-prediction of other panorama pictures. In such an example, an in-loop decomposition module could improve coding efficiency by producing extra reference frames from the panorama video and panorama map, for example.
[0062] In operation, process 400 (and/or process 300) may perform panorama based video coding to improve video coding efficiency, such as the coding efficiency of a 3D video codec and/or a multi-view video codec. Process 400 (and/or process 300) may generate the panorama video sequence via the multiple view sequences and the corresponding camera internal/external parameters. Process 400 (and/or process 300) may convert the 3D video or multi-view videos into a panorama video and a panorama map for encoding and transmission. And at the decoder side, the decoded panorama video may be decomposed into multiple view videos using the decoded panorama map information.
[0063] In operation, process 400 (and/or process 300) may be advantageous as compared with the existing 3D video coding methods. For example, process 400 (and/or process 300) may decrease data redundancy and communication traffic in the channel. To be specific, the traditional multiview video coding (MVC) encodes all the input views one by one. Although inter-view prediction and intra-view prediction are exploited in MVC to reduce the redundancies, the residual data after prediction are still much larger than panorama video.
[0064] In another example, process 400 (and/or process 300) may generate a bitstream that could, in some implementations, be totally compatible with a traditional 2D encoder/decoder without modification to the 2D encoder/decoder. In some implementations, no hardware changes would be needed to support such panorama based 3D video coding. Whereas in traditional 3D video coding like MVC or the currently on-going 3DV standard (e.g., using the multiview plus depth 3D video format), the dependent views may not be compatible with a traditional 2D encoder/decoder due to the inter-view prediction.
[0065] In a further example, process 400 (and/or process 300) may support head motion parallax while MVC cannot support such a feature. By using the presented panorama based 3D video coding, an arbitrary view video at any intermediate viewing position can be derived from the panorama video by process 400 (and/or process 300). However, such a number of output views cannot be varied in MVC (only decreased).
[0066] In a still further example, process 400 (and/or process 300) may not need to encode the depth maps of multiple views. The currently ongoing 3DV standardization typically encodes the multiview plus depth 3D video format. Nevertheless, the derivation of the depth map is still an open problem. The existing depth sensors and depth estimation algorithms still need to be developed further to achieve a high quality depth map in such currently ongoing 3DV standardization methods.
[0067] In a still further example, process 400 (and/or process 300) may employ an in-loop multi-view decomposition module by producing an extra reference frame from the panorama video and the panorama map. Since the extracted multiview video may be produced via view blending and 3D warping techniques, the visual quality may be maintained at a high level. Therefore, the coding efficiency may be further improved by adding the panorama-based reference frame.
[0068] While implementation of example processes 300 and 400, as illustrated in FIGS. 3 and 4, may include the undertaking of all blocks shown in the order illustrated, the present disclosure is not limited in this regard and, in various examples, implementation of processes 300 and 400 may include the undertaking of only a subset of the blocks shown and/or in a different order than illustrated.
Γ00691 In addition, anv one or more of the blocks of FIGS. 3 and 4 mav be undertaken in resDonse to instructions Drovided bv one or more commiter Drogram rjroducts. Such Drogram Droducts mav include signal bearing media Droviding instructions that, when executed bv. for examDle. a Drocessor. mav Drovide the functionality described herein. The commiter Drogram Droducts mav be Drovided in anv form of commiter readable medium. Thus, for examDle. a Drocessor including one or more mocessor coreis) mav undertake one or more of the blocks shown in FIGS. 3 and 4 in resDonse to instructions conveved to the Drocessor bv a commiter readable medium.
[0070] As used in any implementation described herein, the term "module" refers to any combination of software, firmware and/or hardware configured to provide the functionality described herein. The software may be embodied as a software package, code and/or instruction set or instructions, and "hardware", as used in any implementation described herein, may include, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. The modules may, collectively or individually, be embodied as circuitry that forms part of a larger system, for example, an integrated circuit (IC), system on-chip (SoC), and so forth.
[0071] FIG. 5 is an illustrative diagram of an example panorama based 3D video coding flow of a modified 2D video coder 500 in accordance with at least some implementations of the present disclosure. In the illustrated implementation, inter-picture prediction of other panorama video may be performed via modified 2D video coder 500 based at least in part on the output texture view, as was discussed above in FIG. 4.
[0072] For example, panorama video 504 may be passed to a transform and quantization module 508. Transform and quantization module 508 may perform known video transform and quantization processes. The output of transform and quantization module 508 may be provided to an entropy coding module 509 and to a de-quantization and inverse transform module 510. De-quantization and inverse transform module 510 may implement the inverse of the operations undertaken by transform and quantization module 508 to provide the output of panorama video 504 to in-loop filters 514 (e.g., including a de-blocking filter, a sample adaptive offset filter, an adaptive loop filter, or the like), a buffer 520, a motion estimation module 522, a motion compensation module 524 and an intra-frame prediction module 526. Those skilled in the art may recognize that transform and quantization modules and de-quantization and inverse transform modules as described herein may employ scaling techniques. The output of in-loop filters 514 may be fed back to multi-view decomposition module 518.
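By way of a non-limiting illustration, the quantize/de-quantize round trip performed by modules 508 and 510 may be sketched as follows. The uniform quantizer and the step size shown here are assumptions made for illustration only, not details of the coder of FIG. 5.

```python
# Illustrative sketch of the quantize/de-quantize round trip performed by
# transform and quantization module 508 and de-quantization module 510.
# The uniform quantizer and the step size are illustrative assumptions.

def quantize(coeffs, qstep):
    """Map transform coefficients to integer quantization levels."""
    return [round(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    """Reconstruct approximate coefficients from quantization levels."""
    return [l * qstep for l in levels]

coeffs = [103.2, -47.5, 12.1, -3.8, 0.9]   # hypothetical transform output
levels = quantize(coeffs, qstep=8)
recon = dequantize(levels, qstep=8)
# Each reconstructed coefficient lies within half a quantization step
# of the original, which is the distortion the in-loop filters then smooth.
```

In a real coder the levels would be passed to entropy coding module 509, while the reconstruction would feed the prediction loop, as described above.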
[0073] Accordingly, in some embodiments, the panorama video could be encoded using modified 2D video coder 500, as shown in FIG. 5. At the encoder/decoder side, in-loop multi-view decomposition module 518 may be applied to extract multiview pictures from the coded panorama video and panorama map. Then, to improve the coding efficiency, the extracted multi-view pictures could be inserted into reference buffer 520 for the inter-prediction of other panorama pictures. For example, modified 2D video coder 500 may decompose the coded panorama video into multiple view pictures, and then the decomposed multiple view pictures could be inserted into reference buffer 520 for the inter-prediction of other panorama pictures. In such an example, in-loop decomposition module 518 could improve coding efficiency by producing extra reference frames from the panorama video and panorama map, for example.
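As a non-limiting sketch of the decomposition just described, a panorama map that associates each pixel of a texture view with a pixel of the panorama can be used to pull a view picture back out of the coded panorama. The tiny image sizes and the map layout below are hypothetical, chosen only to make the lookup concrete.

```python
# Sketch of in-loop multi-view decomposition (module 518): each pixel of
# the texture view is recovered by looking up its coordinate in the coded
# panorama via the panorama map. Image sizes and map are illustrative.

def decompose_view(panorama, pano_map, width, height):
    """Rebuild one texture view from the panorama using a map of
    view pixel (x, y) -> panorama pixel (px, py)."""
    return [[panorama[pano_map[(x, y)][1]][pano_map[(x, y)][0]]
             for x in range(width)]
            for y in range(height)]

# Hypothetical 2x4 panorama (rows of pixel values) and a map that says
# view pixel (x, y) came from panorama pixel (x + 2, y).
panorama = [[10, 20, 30, 40],
            [50, 60, 70, 80]]
pano_map = {(x, y): (x + 2, y) for x in range(2) for y in range(2)}

view = decompose_view(panorama, pano_map, width=2, height=2)
# The extracted view picture could then be inserted into reference
# buffer 520 for inter-prediction of other panorama pictures.
```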
[0074] FIG. 6 is an illustrative diagram of an example 2D video coding system 200, arranged in accordance with at least some implementations of the present disclosure. In the illustrated implementation, 2D video coding system 200 may include display 602, imaging device(s) 604, 2D video encoder 203, 2D video decoder 205, and/or logic modules 406. Logic modules 406 may include panorama generation logic module 408, 3D video extraction logic module 410, the like, and/or combinations thereof.
[0075] As illustrated, display 602, 2D video decoder 205, processor 606 and/or memory store 608 may be capable of communication with one another and/or communication with portions of logic modules 406. Similarly, imaging device(s) 604 and 2D video encoder 203 may be capable of communication with one another and/or communication with portions of logic modules 406. Accordingly, 2D video decoder 205 may include all or portions of logic modules 406, while 2D video encoder 203 may include similar logic modules. Although 2D video coding system 200, as shown in FIG. 6, may include one particular set of blocks or actions associated with particular modules, these blocks or actions may be associated with different modules than the particular module illustrated here.
[0076] In some examples, display device 602 may be configured to present video data.
Processors 606 may be communicatively coupled to display device 602. Memory stores 608 may be communicatively coupled to processors 606. Panorama generation logic module 408 may be communicatively coupled to processors 606 and may be configured to generate panorama video and panorama maps. 2D encoder 203 may be communicatively coupled to panorama generation logic module 408 and may be configured to encode the panorama video and the associated panorama map. 2D decoder 205 may be communicatively coupled to 2D encoder 203 and may be configured to decode a panorama video and an associated panorama map, where the panorama video and the associated panorama map were generated based at least in part on multiple texture views and camera parameters. 3D video extraction logic module 410 may be communicatively coupled to 2D decoder 205 and may be configured to extract a 3D video based at least in part on the panorama video and the associated panorama map.
[0077] In various embodiments, panorama generation logic module 408 may be implemented in hardware, while software may implement 3D video extraction logic module 410. For example, in some embodiments, panorama generation logic module 408 may be implemented by application-specific integrated circuit (ASIC) logic while 3D video extraction logic module 410 may be provided by software instructions executed by logic such as processors 606. However, the present disclosure is not limited in this regard and panorama generation logic module 408 and/or 3D video extraction logic module 410 may be implemented by any combination of hardware, firmware and/or software. In addition, memory stores 608 may be any type of memory such as volatile memory (e.g., Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), etc.) or non-volatile memory (e.g., flash memory, etc.), and so forth. In a non-limiting example, memory stores 608 may be implemented by cache memory.
[0078] FIG. 7 illustrates an example system 700 in accordance with the present disclosure. In various implementations, system 700 may be a media system although system 700 is not limited to this context. For example, system 700 may be incorporated into a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and so forth.
[0079] In various implementations, system 700 includes a platform 702 coupled to a display 720. Platform 702 may receive content from a content device such as content services device(s) 730 or content delivery device(s) 740 or other similar content sources. A navigation controller 750 including one or more navigation features may be used to interact with, for example, platform 702 and/or display 720. Each of these components is described in greater detail below.
[0080] In various implementations, platform 702 may include any combination of a chipset 705, processor 710, memory 712, storage 714, graphics subsystem 715, applications 716 and/or radio 718. Chipset 705 may provide intercommunication among processor 710, memory 712, storage 714, graphics subsystem 715, applications 716 and/or radio 718. For example, chipset 705 may include a storage adapter (not depicted) capable of providing intercommunication with storage 714.
[0081] Processor 710 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processor; x86 instruction set compatible processors; multi-core; or any other microprocessor or central processing unit (CPU). In various implementations, processor 710 may be dual-core processor(s), dual-core mobile processor(s), and so forth.
[0082] Memory 712 may be implemented as a volatile memory device such as, but not limited to, a Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Static RAM (SRAM).
[0083] Storage 714 may be implemented as a non-volatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an attached storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device. In various implementations, storage 714 may include technology to increase the storage performance enhanced protection for valuable digital media when multiple hard drives are included, for example.
[0084] Graphics subsystem 715 may perform processing of images such as still or video for display. Graphics subsystem 715 may be a graphics processing unit (GPU) or a visual processing unit (VPU), for example. An analog or digital interface may be used to communicatively couple graphics subsystem 715 and display 720. For example, the interface may be any of a High-Definition Multimedia Interface, DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 715 may be integrated into processor 710 or chipset 705. In some implementations, graphics subsystem 715 may be a stand-alone card communicatively coupled to chipset 705.
[0085] The graphics and/or video processing techniques described herein may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, a discrete graphics and/or video processor may be used. As still another implementation, the graphics and/or video functions may be provided by a general purpose processor, including a multi-core processor. In further embodiments, the functions may be implemented in a consumer electronics device.
[0086] Radio 718 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communications techniques. Such techniques may involve communications across one or more wireless networks. Example wireless networks include (but are not limited to) wireless local area networks (WLANs), wireless personal area networks (WPANs), wireless metropolitan area networks (WMANs), cellular networks, and satellite networks. In communicating across such networks, radio 718 may operate in accordance with one or more applicable standards in any version.
[0087] In various implementations, display 720 may include any television type monitor or display. Display 720 may include, for example, a computer display screen, touch screen display, video monitor, television-like device, and/or a television. Display 720 may be digital and/or analog. In various implementations, display 720 may be a holographic display. Also, display 720 may be a transparent surface that may receive a visual projection. Such projections may convey various forms of information, images, and/or objects. For example, such projections may be a visual overlay for a mobile augmented reality (MAR) application. Under the control of one or more software applications 716, platform 702 may display user interface 722 on display 720.
[0088] In various implementations, content services device(s) 730 may be hosted by any national, international and/or independent service and thus accessible to platform 702 via the Internet, for example. Content services device(s) 730 may be coupled to platform 702 and/or to display 720. Platform 702 and/or content services device(s) 730 may be coupled to a network 760 to communicate (e.g., send and/or receive) media information to and from network 760. Content delivery device(s) 740 also may be coupled to platform 702 and/or to display 720.
[0089] In various implementations, content services device(s) 730 may include a cable television box, personal computer, network, telephone, Internet enabled devices or appliance capable of delivering digital information and/or content, and any other similar device capable of unidirectionally or bidirectionally communicating content between content providers and platform 702 and/or display 720, via network 760 or directly. It will be appreciated that the content may be communicated unidirectionally and/or bidirectionally to and from any one of the components in system 700 and a content provider via network 760. Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.
[0090] Content services device(s) 730 may receive content such as cable television programming including media information, digital information, and/or other content. Examples of content providers may include any cable or satellite television or radio or Internet content providers. The provided examples are not meant to limit implementations in accordance with the present disclosure in any way.
[0091] In various implementations, platform 702 may receive control signals from navigation controller 750 having one or more navigation features. The navigation features of controller 750 may be used to interact with user interface 722, for example. In embodiments, navigation controller 750 may be a pointing device that may be a computer hardware component (specifically, a human interface device) that allows a user to input spatial (e.g., continuous and multi-dimensional) data into a computer. Many systems such as graphical user interfaces (GUI), and televisions and monitors allow the user to control and provide data to the computer or television using physical gestures.
[0092] Movements of the navigation features of controller 750 may be replicated on a display (e.g., display 720) by movements of a pointer, cursor, focus ring, or other visual indicators displayed on the display. For example, under the control of software applications 716, the navigation features located on navigation controller 750 may be mapped to virtual navigation features displayed on user interface 722, for example. In embodiments, controller 750 may not be a separate component but may be integrated into platform 702 and/or display 720. The present disclosure, however, is not limited to the elements or in the context shown or described herein.
[0093] In various implementations, drivers (not shown) may include technology to enable users to instantly turn on and off platform 702 like a television with the touch of a button after initial boot-up, when enabled, for example. Program logic may allow platform 702 to stream content to media adaptors or other content services device(s) 730 or content delivery device(s) 740 even when the platform is turned "off." In addition, chipset 705 may include hardware and/or software support for 5.1 surround sound audio and/or high definition 7.1 surround sound audio, for example. Drivers may include a graphics driver for integrated graphics platforms. In embodiments, the graphics driver may comprise a peripheral component interconnect (PCI) Express graphics card.
[0094] In various implementations, any one or more of the components shown in system 700 may be integrated. For example, platform 702 and content services device(s) 730 may be integrated, or platform 702 and content delivery device(s) 740 may be integrated, or platform 702, content services device(s) 730, and content delivery device(s) 740 may be integrated, for example. In various embodiments, platform 702 and display 720 may be an integrated unit. Display 720 and content services device(s) 730 may be integrated, or display 720 and content delivery device(s) 740 may be integrated, for example. These examples are not meant to limit the present disclosure.
[0095] In various embodiments, system 700 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 700 may include components and interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. An example of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum and so forth. When implemented as a wired system, system 700 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a network interface card (NIC), disc controller, video controller, audio controller, and the like. Examples of wired communications media may include a wire, cable, metal leads, printed circuit board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth.
[0096] Platform 702 may establish one or more logical or physical channels to communicate information. The information may include media information and control information. Media information may refer to any data representing content meant for a user. Examples of content may include, for example, data from a voice conversation, videoconference, streaming video, electronic mail ("email") message, voice mail message, alphanumeric symbols, graphics, image, video, text and so forth. Data from a voice conversation may be, for example, speech information, silence periods, background noise, comfort noise, tones and so forth. Control information may refer to any data representing commands, instructions or control words meant for an automated system. For example, control information may be used to route media information through a system, or instruct a node to process the media information in a predetermined manner. The embodiments, however, are not limited to the elements or in the context shown or described in FIG. 7.
[0097] As described above, system 700 may be embodied in varying physical styles or form factors. FIG. 8 illustrates implementations of a small form factor device 800 in which system 700 may be embodied. In embodiments, for example, device 800 may be implemented as a mobile computing device having wireless capabilities. A mobile computing device may refer to any device having a processing system and a mobile power source or supply, such as one or more batteries, for example.
[0098] As described above, examples of a mobile computing device may include a personal computer (PC), laptop computer, ultra-laptop computer, tablet, touch pad, portable computer, handheld computer, palmtop computer, personal digital assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet or smart television), mobile internet device (MID), messaging device, data communication device, and so forth.
[0099] Examples of a mobile computing device also may include computers that are arranged to be worn by a person, such as a wrist computer, finger computer, ring computer, eyeglass computer, belt-clip computer, arm-band computer, shoe computers, clothing computers, and other wearable computers. In various embodiments, for example, a mobile computing device may be implemented as a smart phone capable of executing computer applications, as well as voice communications and/or data communications. Although some embodiments may be described with a mobile computing device implemented as a smart phone by way of example, it may be appreciated that other embodiments may be implemented using other wireless mobile computing devices as well. The embodiments are not limited in this context.
[00100] As shown in FIG. 8, device 800 may include a housing 802, a display 804, an input/output (I/O) device 806, and an antenna 808. Device 800 also may include navigation features 812. Display 804 may include any suitable display unit for displaying information appropriate for a mobile computing device. I/O device 806 may include any suitable I/O device for entering information into a mobile computing device. Examples for I/O device 806 may include an alphanumeric keyboard, a numeric keypad, a touch pad, input keys, buttons, switches, rocker switches, microphones, speakers, voice recognition device and software, and so forth. Information also may be entered into device 800 by way of microphone (not shown). Such information may be digitized by a voice recognition device (not shown). The embodiments are not limited in this context.
[00101] Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, application specific integrated circuits (ASIC), programmable logic devices (PLD), digital signal processors (DSP), field programmable gate array (FPGA), logic gates, registers, semiconductor device, chips, microchips, chip sets, and so forth. Examples of software may include software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, heat tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds and other design or performance constraints.
[00102] One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium which represents various logic within the processor, which when read by a machine causes the machine to fabricate logic to perform the techniques described herein. Such representations, known as "IP cores" may be stored on a tangible, machine readable medium and supplied to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
[00103] While certain features set forth herein have been described with reference to various implementations, this description is not intended to be construed in a limiting sense. Hence, various modifications of the implementations described herein, as well as other implementations, which are apparent to persons skilled in the art to which the present disclosure pertains, are deemed to lie within the spirit and scope of the present disclosure.
[00104] The following examples pertain to further embodiments.
[00105] In one example, a computer-implemented method for video coding may include decoding a panorama video and an associated panorama map via a 2D decoder. The panorama video and the associated panorama map may have been generated based at least in part on multiple texture views and camera parameters. A 3D video may be extracted based at least in part on the panorama video and the associated panorama map.
[00106] In another example, a computer-implemented method for video coding may further include, on a 2D encoder side, determining a pixel correspondence capable of mapping pixel coordinates from the multiple texture views via key point features. Camera external parameters may be estimated, where the camera external parameters include one or more of the following: a translation vector and a rotation matrix between multiple cameras. A projection matrix may be determined based at least in part on the camera external parameters and camera internal parameters. The panorama video may be generated from the multiple texture views via an image stitching algorithm based at least in part on geometric mapping from the determined projection matrix and/or the determined pixel correspondence. The associated panorama map may be generated and may be capable of mapping pixel coordinates between the multiple texture views and the panorama video as a perspective projection from the multiple texture views to the panorama image. The panorama video and the associated panorama map may be encoded. On the 2D decoder side, the extraction of the 3D video may further include receiving user input. A user view preference may be determined at any arbitrary target view and an associated target region of the panorama video based at least in part on the user input, where the user view preference may be defined via one or more of the following criteria: a view direction, viewpoint position, and a field-of-view of a target view. A virtual camera may be set up based at least in part on a previous configuration of one or more of the following criteria: viewpoint position, field-of-view, and a determined view range in the panorama video. View blending may be performed for the target region of the panorama video when the target region comes from more than a single texture view, where the view blending occurs prior to warping or prior to encoding. The target region of the panorama video may be warped to an output texture view via 3D warping techniques based at least in part on camera parameters of the virtual camera and the associated panorama map. A left and right view may be determined for the 3D video based at least in part on the output texture view. The 3D video may be displayed at the user view preference based at least in part on the determined left and right view. Inter-picture prediction of other panorama video may be performed based at least in part on the output texture view.
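As a non-limiting sketch, the step of determining a projection matrix from camera external parameters (a rotation matrix R and translation vector t) and camera internal parameters K may be written as P = K [R | t]. The specific K, R, and t values below are hypothetical and serve only to illustrate the computation.

```python
# Sketch of forming a camera projection matrix from external parameters
# (rotation R, translation t) and internal parameters K: P = K [R | t].
# The numeric values of K, R, and t below are hypothetical.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]

def projection_matrix(K, R, t):
    """Return the 3x4 matrix P = K [R | t] mapping homogeneous world
    points to homogeneous image coordinates."""
    Rt = [R[i] + [t[i]] for i in range(3)]   # append t as a 4th column
    return matmul(K, Rt)

K = [[800, 0, 320],                      # hypothetical internal parameters:
     [0, 800, 240],                      # focal length and principal point
     [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # identity rotation
t = [0.5, 0.0, 0.0]                      # camera shifted along x

P = projection_matrix(K, R, t)
```

Such a projection matrix per camera provides the geometric mapping that an image stitching algorithm may use when composing the texture views into the panorama.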
[00107] In other examples, a system for video coding on a computer may include a display device, one or more processors, one or more memory stores, a 2D decoder, a 3D video extraction logic module, the like, and/or combinations thereof. The display device may be configured to present video data. The one or more processors may be communicatively coupled to the display device. The one or more memory stores may be communicatively coupled to the one or more processors. The 2D decoder may be communicatively coupled to the one or more processors and may be configured to decode a panorama video and an associated panorama map, where the panorama video and the associated panorama map were generated based at least in part on multiple texture views and camera parameters. The 3D video extraction logic module may be communicatively coupled to the 2D decoder and may be configured to extract a 3D video based at least in part on the panorama video and the associated panorama map.
[00108] In another example, the system for video coding on a computer may further include a panorama generation logic module configured to determine a pixel correspondence capable of mapping pixel coordinates from the multiple texture views via key point features; estimate camera external parameters, where the camera external parameters include one or more of the following: a translation vector and a rotation matrix between multiple cameras; determine a projection matrix based at least in part on the camera external parameters and camera internal parameters; generate the panorama video from the multiple texture views via an image stitching algorithm based at least in part on geometric mapping from the determined projection matrix and/or the determined pixel correspondence; and generate the associated panorama map capable of mapping pixel coordinates between the multiple texture views and the panorama video as a perspective projection from the multiple texture views to the panorama image. The system may further include a 2D encoder configured to encode the panorama video and the associated panorama map. The 3D video extraction logic module may be further configured to receive user input and determine a user view preference at any arbitrary target view and an associated target region of the panorama video based at least in part on the user input, where the user view preference may be defined via one or more of the following criteria: a view direction, viewpoint position, and a field-of-view of a target view. The 3D video extraction logic module may be further configured to set up a virtual camera based at least in part on a previous configuration of one or more of the following criteria: viewpoint position, field-of-view, and a determined view range in the panorama video; perform view blending for the target region of the panorama video when the target region comes from more than a single texture view, where the view blending occurs prior to warping or prior to encoding; warp the target region of the panorama video to an output texture view via 3D warping techniques based at least in part on camera parameters of the virtual camera and the associated panorama map; and determine a left and right view for the 3D video based at least in part on the output texture view. The display may be further configured to display the 3D video at the user view preference based at least in part on the determined left and right view. The 2D decoder may be further configured to perform inter-picture prediction of other panorama video based at least in part on the output texture view.
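The view-blending step recited above, in which a target region covered by more than a single texture view is combined into one set of panorama pixel values, may be sketched as a weighted combination of the overlapping samples. The distance-based weights below are an illustrative assumption, not a required blending rule.

```python
# Sketch of view blending: where the target region is covered by more
# than one texture view, the contributing samples are combined with
# weights. The weighting scheme here is an illustrative assumption.

def blend_pixel(samples):
    """Blend overlapping samples, each a (value, weight) pair, into a
    single pixel value via a normalized weighted average."""
    total = sum(w for _, w in samples)
    return sum(v * w for v, w in samples) / total

# Hypothetical overlap: the same scene point is observed at intensity 100
# in the left view and 110 in the right view; the left camera is assumed
# closer to the region and so receives the larger weight.
blended = blend_pixel([(100, 0.75), (110, 0.25)])
# blended == 102.5
```

Blending in this manner before 3D warping smooths seams between views, which is consistent with the high visual quality of the extracted multiview video described above.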
[00109] The above examples may include specific combinations of features. However, the above examples are not limited in this regard and, in various implementations, the above examples may include undertaking only a subset of such features, undertaking a different order of such features, undertaking a different combination of such features, and/or undertaking additional features than those features explicitly listed. For example, all features described with respect to the example methods may be implemented with respect to the example apparatus, the example systems, and/or the example articles, and vice versa.