US20090207904A1 - Multi-View Video Coding Method and Device - Google Patents

Multi-View Video Coding Method and Device Download PDF

Info

Publication number
US20090207904A1
US20090207904A1 (Application US 12/224,817)
Authority
US
United States
Prior art keywords
view
views
syntax element
parameter
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/224,817
Other languages
English (en)
Inventor
Purvin Bibhas Pandit
Yeping Su
Peng Yin
Cristina Gomila
Jill MacDonald Boyce
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
InterDigital Madison Patent Holdings SAS
Original Assignee
Purvin Bibhas Pandit
Yeping Su
Peng Yin
Cristina Gomila
Boyce Jill Macdonald
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Purvin Bibhas Pandit, Yeping Su, Peng Yin, Cristina Gomila, Boyce Jill Macdonald filed Critical Purvin Bibhas Pandit
Priority to US12/224,817 priority Critical patent/US20090207904A1/en
Assigned to THOMSON LICENSING reassignment THOMSON LICENSING ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOYCE, JILL MACDONALD, GOMILA, CRISTINA, PANDIT, PURVIN BIBHAS, YIN, PENG, SU, YEPING
Publication of US20090207904A1 publication Critical patent/US20090207904A1/en
Assigned to THOMSON LICENSING DTV reassignment THOMSON LICENSING DTV ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: THOMSON LICENSING
Assigned to INTERDIGITAL MADISON PATENT HOLDINGS reassignment INTERDIGITAL MADISON PATENT HOLDINGS ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: THOMSON LICENSING DTV
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/577Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/107Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/70Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present principles relate generally to video encoding and decoding and, more particularly, to methods and apparatus for use in a multi-view video coding system.
  • a method has been proposed to enable efficient random access in multi-view compressed bit streams.
  • a V-picture type and a View Dependency Supplemental Enhancement Information (SEI) message are defined.
  • a feature required in the proposed V-picture type is that V-pictures shall have no temporal dependence on other pictures in the same camera and may only be predicted from pictures in other cameras at the same time.
  • the proposed View Dependency Supplemental Enhancement Information message will describe exactly which views a V-picture, as well as the preceding and following sequences of pictures, may depend on. The following are the details of the proposed changes.
  • Regarding V-picture syntax and semantics, a particular syntax table relating to the MPEG-4 AVC standard is extended to include a Network Abstraction Layer (NAL) unit type of 14 corresponding to a V-picture. Also, the V-picture type is defined to have the following semantics:
  • V-picture: A coded picture in which all slices reference only slices with the same temporal index (i.e., only slices in other views and not slices in the current view). When a V-picture would be output or displayed, it also causes the decoding process to mark all pictures from the same view which are not IDR-pictures or V-pictures and which precede the V-picture in output order as “unused for reference”. Each V-picture shall be associated with a View Dependency SEI message occurring in the same NAL.
  • a View Dependency Supplemental Enhancement Information message is defined with the following syntax:
  • view_dependency( payloadSize ) {
        num_seq_reference_views        ue(v)
        seq_reference_view_0           ue(v)
        seq_reference_view_1           ue(v)
        ...
        seq_reference_view_N           ue(v)
        num_pic_reference_views        ue(v)
        pic_reference_view_0           ue(v)
        pic_reference_view_1           ue(v)
        ...
        pic_reference_view_N           ue(v)
    }
  • num_seq_reference_views/num_pic_reference_views denotes the number of potential views that can be used as a reference for the current sequence/picture
  • seq_reference_view_i/pic_reference_view_i denotes the view number for the i-th reference view.
  • the picture associated with a View Dependency Supplemental Enhancement Information message shall only reference the specified views described by pic_reference_view_i. Similarly, all subsequent pictures in output order of that view until the next View Dependency Supplemental Enhancement Information message in that view shall only reference the specified views described by seq_reference_view_i.
  • a View Dependency Supplemental Enhancement Information message shall be associated with each Instantaneous Decoding Refresh (IDR) picture and V-picture.
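  • The ue(v) descriptor in the syntax table above denotes the unsigned Exp-Golomb code of the MPEG-4 AVC standard. The following self-contained C sketch round-trips a few values through ue(v) coding; the bit buffer is a toy illustration, not a codec API.

```c
#include <stdio.h>
#include <stdint.h>
#include <assert.h>

/* Toy bit buffer: one bit per byte for clarity, capacity 256 bits. */
typedef struct {
    uint8_t bits[256];
    int     len;   /* bits written  */
    int     pos;   /* read position */
} BitBuf;

static void put_bit(BitBuf *b, int bit) { b->bits[b->len++] = (uint8_t)bit; }
static int  get_bit(BitBuf *b)          { return b->bits[b->pos++]; }

/* ue(v): codeNum is written as <n zeros> 1 <n-bit remainder of codeNum+1>. */
static void write_ue(BitBuf *b, unsigned codeNum)
{
    unsigned value = codeNum + 1;
    int nbits = 0;
    for (unsigned v = value; v > 1; v >>= 1) nbits++;              /* floor(log2(value)) */
    for (int i = 0; i < nbits; i++) put_bit(b, 0);                 /* zero prefix        */
    for (int i = nbits; i >= 0; i--) put_bit(b, (int)((value >> i) & 1u)); /* MSB first  */
}

static unsigned read_ue(BitBuf *b)
{
    int leading_zero_bits = 0;
    while (get_bit(b) == 0) leading_zero_bits++;
    unsigned value = 1;                                            /* the terminating '1' */
    for (int i = 0; i < leading_zero_bits; i++)
        value = (value << 1) | (unsigned)get_bit(b);
    return value - 1;
}

int main(void)
{
    unsigned samples[] = { 0, 1, 2, 7, 255 };                      /* e.g. view counts or ids */
    size_t n = sizeof samples / sizeof samples[0];
    BitBuf b = { {0}, 0, 0 };

    for (size_t i = 0; i < n; i++) write_ue(&b, samples[i]);
    for (size_t i = 0; i < n; i++) {
        unsigned v = read_ue(&b);
        printf("ue(v) round-trip: %u\n", v);
        assert(v == samples[i]);
    }
    return 0;
}
```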
  • At least one drawback of this method is the complexity introduced in the decoder due to the dependency being recursively obtained. Additionally, this method requires that every V-picture carry an SEI message (which is a non-normative part of the MPEG-4 AVC standard), resulting in the dependency being unable to be used for normative behavior such as reference picture selection.
  • an apparatus includes an encoder for encoding at least two views corresponding to multi-view video content into a resultant bitstream, wherein the resultant bitstream is encoded to include view specific information.
  • the view specific information indicates a decoding interdependency between at least some of the at least two views.
  • the method includes encoding at least two views corresponding to multi-view video content into a resultant bitstream, wherein the resultant bitstream is encoded to include view specific information.
  • the view specific information indicates a decoding interdependency between at least some of the at least two views.
  • an apparatus includes a decoder for decoding at least two views corresponding to multi-view video content from a bitstream, wherein the bitstream is decoded to determine view specific information included therein, the view specific information indicating a decoding interdependency between at least some of the at least two views.
  • the method includes decoding at least two views corresponding to multi-view video content from a bitstream, wherein the bitstream is decoded to determine view specific information included therein.
  • the view specific information indicates a decoding interdependency between at least some of the at least two views.
  • an apparatus includes an encoder for encoding at least two views corresponding to multi-view video content by defining as a base view any of the at least two views that, for a decoding thereof, is independent of any other of the at least two views.
  • the method includes encoding at least two views corresponding to multi-view video content by defining as a base view any of the at least two views that, for a decoding thereof, is independent of any other of the at least two views.
  • an apparatus includes a decoder for decoding at least two views corresponding to multi-view video content, wherein the decoder determines which, if any, of the at least two views is a base view that, for a decoding thereof, is independent of any other of the at least two views.
  • the method includes decoding at least two views corresponding to multi-view video content, wherein the decoding step determines which, if any, of the at least two views is a base view that, for a decoding thereof, is independent of any other of the at least two views.
  • an apparatus includes an encoder for encoding at least two views corresponding to multi-view video content by encoding at least one of the at least two views in a resultant bitstream that is syntax compliant with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation, for backwards compatibility therewith.
  • the method includes encoding at least two views corresponding to multi-view video content by encoding at least one of the at least two views in a resultant bitstream that is syntax compliant with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation, for backwards compatibility therewith.
  • an apparatus includes a decoder for decoding at least two views corresponding to multi-view video content, wherein at least one of the at least two views is included in a bitstream that is syntax compliant with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation, for backwards compatibility therewith.
  • the method includes decoding at least two views corresponding to multi-view video content, wherein at least one of the at least two views is included in a bitstream that is syntax compliant with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation, for backwards compatibility therewith.
  • an apparatus includes an encoder for encoding at least one of at least two views corresponding to multi-view video content by selecting one of two pre-defined slice types.
  • the method includes encoding at least one of at least two views corresponding to multi-view video content by selecting one of two pre-defined slice types.
  • an apparatus includes a decoder for decoding at least one of at least two views corresponding to multi-view video content by determining which of two pre-defined slice types is used.
  • the method includes decoding at least one of at least two views corresponding to multi-view video content by determining which of two pre-defined slice types is used.
  • an apparatus includes an encoder for encoding at least two views corresponding to multi-view content into a resultant bitstream, wherein the resultant bitstream is encoded to include at least one camera parameter corresponding to at least one of the at least two views.
  • the method includes encoding at least two views corresponding to multi-view content into a resultant bitstream, wherein the resultant bitstream is encoded to include at least one camera parameter corresponding to at least one of the at least two views.
  • an apparatus includes a decoder for decoding at least two views corresponding to multi-view content from a bitstream, wherein the bitstream is decoded to determine at least one camera parameter included therein.
  • the at least one camera parameter corresponds to at least one of the at least two views.
  • the method includes decoding at least two views corresponding to multi-view content from a bitstream, wherein the bitstream is decoded to determine at least one camera parameter included therein.
  • the at least one camera parameter corresponds to at least one of the at least two views.
  • an apparatus includes an encoder for encoding at least two views corresponding to multi-view video content into a resultant bitstream, wherein the resultant bitstream is encoded to include at least one syntax element related to at least one camera parameter for at least one of the at least two views.
  • the method includes encoding at least two views corresponding to multi-view video content into a resultant bitstream, wherein the resultant bitstream is encoded to include at least one syntax element related to at least one camera parameter for at least one of the at least two views.
  • an apparatus includes a decoder for decoding at least two views corresponding to multi-view video content from a bitstream, wherein the bitstream is decoded to determine at least one camera parameter for at least one of the at least two views based on at least one syntax element included in the bitstream.
  • the method includes decoding at least two views corresponding to multi-view video content from a bitstream, wherein the bitstream is decoded to determine at least one camera parameter for at least one of the at least two views based on at least one syntax element included in the bitstream.
  • FIG. 1 is a block diagram for an exemplary video encoder to which the present principles may be applied, in accordance with an embodiment of the present principles;
  • FIG. 2 is a block diagram for an exemplary video decoder to which the present principles may be applied, in accordance with an embodiment of the present principles;
  • FIG. 3 is a diagram for an inter-view-temporal prediction structure based on the MPEG-4 AVC standard, using hierarchical B pictures, in accordance with an embodiment of the present principles;
  • FIG. 4 is a flow diagram for an exemplary method for encoding multiple views of multi-view video content, in accordance with an embodiment of the present principles.
  • FIG. 5 is a flow diagram for an exemplary method for decoding multiple views of multi-view video content, in accordance with an embodiment of the present principles.
  • the present principles are directed to methods and apparatus for use in a multi-view video coding system.
  • processor or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage.
  • any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
  • any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function.
  • the present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
  • In FIG. 1, an exemplary video encoder to which the present principles may be applied is indicated generally by the reference numeral 100.
  • An input to the video encoder 100 is connected in signal communication with a non-inverting input of a combiner 110 .
  • the output of the combiner 110 is connected in signal communication with a transformer/quantizer 120 .
  • the output of the transformer/quantizer 120 is connected in signal communication with an entropy coder 140 .
  • An output of the entropy coder 140 is available as an output of the encoder 100 .
  • the output of the transformer/quantizer 120 is further connected in signal communication with an inverse transformer/quantizer 150 .
  • An output of the inverse transformer/quantizer 150 is connected in signal communication with an input of a deblock filter 160 .
  • An output of the deblock filter 160 is connected in signal communication with reference picture stores 170 .
  • a first output of the reference picture stores 170 is connected in signal communication with a first input of a motion estimator 180 .
  • the input to the encoder 100 is further connected in signal communication with a second input of the motion estimator 180 .
  • the output of the motion estimator 180 is connected in signal communication with a first input of a motion compensator 190 .
  • a second output of the reference picture stores 170 is connected in signal communication with a second input of the motion compensator 190 .
  • the output of the motion compensator 190 is connected in signal communication with an inverting input of the combiner 110 .
  • In FIG. 2, an exemplary video decoder to which the present principles may be applied is indicated generally by the reference numeral 200.
  • the video decoder 200 includes an entropy decoder 210 for receiving a video sequence.
  • a first output of the entropy decoder 210 is connected in signal communication with an input of an inverse quantizer/transformer 220 .
  • An output of the inverse quantizer/transformer 220 is connected in signal communication with a first non-inverting input of a combiner 240 .
  • the output of the combiner 240 is connected in signal communication with an input of a deblock filter 290 .
  • An output of the deblock filter 290 is connected in signal communication with an input of a reference picture stores 250 .
  • the output of the reference picture stores 250 is connected in signal communication with a first input of a motion compensator 260 .
  • An output of the motion compensator 260 is connected in signal communication with a second non-inverting input of the combiner 240 .
  • a second output of the entropy decoder 210 is connected in signal communication with a second input of the motion compensator 260 .
  • the output of the deblock filter 290 is available as an output of the video decoder 200 .
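  • The connections just described form the usual hybrid decoding loop: entropy decoding, inverse quantization/transform, motion-compensated prediction, reconstruction, and deblocking, with the deblocked picture feeding both the output and the reference picture store. The toy C sketch below only illustrates that connection order; every stage is a deliberately trivial stand-in rather than real H.264 processing.

```c
/* Toy illustration of the connection order of FIG. 2 only; every stage is a
 * deliberately trivial stand-in, not real H.264 decoding. */
#include <stdio.h>

#define N 8                                      /* one tiny "block" of samples */

typedef struct { double s[N]; } Block;

/* entropy decoder 210: stand-in that just forwards the "coded" values */
static Block entropy_decode(const double *coded) {
    Block b; for (int i = 0; i < N; i++) b.s[i] = coded[i]; return b;
}
/* inverse quantizer/transformer 220: stand-in that scales by a fixed step size */
static Block inv_quant_transform(Block q) {
    for (int i = 0; i < N; i++) q.s[i] *= 2.0; return q;
}
/* motion compensator 260: stand-in that copies the prediction from the
 * reference picture store 250 */
static Block motion_compensate(const Block *ref) { return *ref; }
/* combiner 240: residual plus prediction */
static Block combine(Block resid, Block pred) {
    for (int i = 0; i < N; i++) resid.s[i] += pred.s[i]; return resid;
}
/* deblock filter 290: stand-in (identity) */
static Block deblock(Block x) { return x; }

int main(void)
{
    Block  ref = { { 10, 10, 10, 10, 10, 10, 10, 10 } };   /* reference picture store 250 */
    double coded[N] = { 1, -1, 0, 2, 0, 0, -2, 1 };        /* entropy-coded input          */

    Block resid = inv_quant_transform(entropy_decode(coded));  /* 210 -> 220 */
    Block pred  = motion_compensate(&ref);                     /* 250 -> 260 */
    Block rec   = deblock(combine(resid, pred));               /* 240 -> 290 */

    ref = rec;                            /* deblocked output refreshes the reference store */
    for (int i = 0; i < N; i++) printf("%g ", rec.s[i]);       /* and is the decoder output */
    printf("\n");
    return 0;
}
```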
  • a high level syntax is proposed for efficient processing of a multi-view sequence.
  • the new NAL unit types include a view identifier (id) in the NAL header to identify to which view the slice belongs.
  • high level syntax refers to syntax present in the bitstream that resides hierarchically above the macroblock layer.
  • high level syntax may refer to, but is not limited to, syntax at the slice header level, Supplemental Enhancement Information (SEI) level, picture parameter set level, and sequence parameter set level.
  • a base view may or may not be compatible with the MPEG-4 AVC standard, but an MPEG-4 AVC compatible view is always a base view.
  • In FIG. 3, an inter-view-temporal prediction structure based on the MPEG-4 AVC standard, using hierarchical B pictures, is indicated generally by the reference numeral 300.
  • the variable I denotes an intra coded picture
  • the variable P denotes a predictively coded picture
  • the variable B denotes a bi-predictively coded picture
  • the variable T denotes a location of a particular picture
  • the variable S denotes a particular view to which a particular picture corresponds.
  • Anchor picture is defined as a picture the decoding of which does not involve any picture sampled at a different time instance.
  • An anchor picture is signaled by setting the nal_ref_idc to 3.
  • all pictures in locations T0, T8, . . . , T96, and T100 are examples of anchor pictures.
  • Non-anchor picture is defined as a picture which does not have the above constraint specified for an anchor picture.
  • pictures B2, B3, and B4 are non-anchor pictures.
  • Base view is a view which does not depend on any other view and can be independently decoded.
  • view S0 is an example of a base view.
  • a new parameter set is proposed called the View Parameter Set with its own NAL unit type and two new NAL unit types to support Multi-view Video Coding slices.
  • the MPEG-4 AVC standard includes the following two parameter sets: (1) Sequence Parameter Set (SPS), which includes information that is not expected to change over an entire sequence; and (2) Picture Parameter Set (PPS), which includes information that is not expected to change for each picture.
  • Since Multi-view Video Coding has additional information which is specific to each view, we have created a separate View Parameter Set (VPS) in order to transmit this information. All the information that is needed to determine the dependency between the different views is indicated in the View Parameter Set.
  • the syntax table for the proposed View Parameter Set is shown in TABLE 1 (View Parameter Set RBSP syntax). This View Parameter Set is included in a new NAL unit type, for example, type 14 as shown in TABLE 2 (NAL unit type codes).
  • view_parameter_set_id identifies the view parameter set that is referred to in the slice header.
  • the value of the view_parameter_set_id shall be in the range of 0 to 255.
  • number_of_views_minus_1 plus 1 identifies the total number of views in the bitstream.
  • the value of number_of_views_minus_1 shall be in the range of 0 to 255.
  • avc_compatible_view_id indicates the view id of the AVC compatible view.
  • the value of avc_compatible_view_id shall be in the range of 0 to 255.
  • is_base_view_flag[i] equal to 1 indicates that the view i is a base view and is independently decodable.
  • is_base_view_flag[i] equal to 0 indicates that the view i is not a base view.
  • the value of is_base_view_flag[i] shall be equal to 1 for an AVC compatible view i.
  • dependency_update_flag equal to 1 indicates that dependency information for this view is updated in the VPS.
  • dependency_update_flag equal to 0 indicates that the dependency information for this view is not updated and should not be changed.
  • anchor_picture_dependency_maps[i][j] equal to 1 indicates that the anchor pictures with view_id equal to j will depend on the anchor pictures with view_id equal to i.
  • non_anchor_picture_dependency_maps[i][j] equal to 1 indicates that the non-anchor pictures with view_id equal to j will depend on the non-anchor pictures with view_id equal to i.
  • non_anchor_picture_dependency_maps[i][j] is present only when anchor_picture_dependency_maps[i][j] equals 1. If anchor_picture_dependency_maps[i][j] is present and is equal to zero, non_anchor_picture_dependency_maps[i][j] shall be inferred as being equal to 0.
  • Optional parameters in the View Parameter Set include the following:
  • camera_parameters_present_flag equal to 1 indicates that a projection matrix is signaled, as follows.
  • camera_parameters: the camera parameter is conveyed in the form of a 3×4 projection matrix P, which can be used to map a point in the 3D world to the 2D image coordinate:
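  • In the standard formulation (the exact equation is not reproduced in this section, so the following is a conventional statement of it rather than a quotation), a 3×4 projection matrix maps homogeneous world coordinates to homogeneous image coordinates:

$$
\lambda \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= P \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},
\qquad P = K\,[\,R \mid t\,] \in \mathbb{R}^{3 \times 4},
$$

where $(X, Y, Z)$ is a point in the 3D world, $(u, v)$ its 2D image coordinate, $\lambda$ a non-zero scale factor, $K$ the intrinsic camera matrix, and $[R \mid t]$ the camera rotation and translation. The factorization into $K$, $R$ and $t$ is the conventional one and is not required by the syntax described here.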
  • Each element camera_parameters_*_* can be represented according to the IEEE single precision floating point (32 bits) standard.
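  • Collecting the semantics above, the content of a View Parameter Set can be modeled roughly as in the C sketch below. The field names follow the syntax elements just described, but the bounds, types, and per-view layout are illustrative assumptions, since the bit-level layout of TABLE 1 is not reproduced in this section.

```c
/* Illustrative model of the View Parameter Set described above.  Field names
 * follow the syntax elements in the text; bounds and types are assumptions,
 * since TABLE 1 itself is not reproduced in this section. */
#include <stdio.h>

#define MAX_VIEWS 256                    /* the syntax elements above are bounded by 0..255 */

typedef struct {
    unsigned view_parameter_set_id;      /* 0..255, referred to by the slice header         */
    unsigned number_of_views_minus_1;    /* total number of views, minus one                */
    unsigned avc_compatible_view_id;     /* view id of the MPEG-4 AVC compatible view       */
    unsigned char is_base_view_flag[MAX_VIEWS];      /* 1: view is independently decodable  */
    unsigned char dependency_update_flag[MAX_VIEWS]; /* 1: dependency info updated in VPS   */
    /* [i][j] == 1: pictures of view j depend on pictures of view i */
    unsigned char anchor_picture_dependency_maps[MAX_VIEWS][MAX_VIEWS];
    unsigned char non_anchor_picture_dependency_maps[MAX_VIEWS][MAX_VIEWS];
    /* optional camera parameters, one 3x4 projection matrix per view (IEEE 32-bit floats) */
    unsigned char camera_parameters_present_flag;
    float camera_parameters[MAX_VIEWS][3][4];
} ViewParameterSet;

/* Inference rule from the semantics above: the non-anchor entry is only
 * signalled when the corresponding anchor entry is 1; otherwise it is 0. */
static int non_anchor_dependency(const ViewParameterSet *vps, int i, int j)
{
    if (!vps->anchor_picture_dependency_maps[i][j])
        return 0;                                    /* inferred, never transmitted */
    return vps->non_anchor_picture_dependency_maps[i][j];
}

int main(void)
{
    static ViewParameterSet vps;                     /* static: the maps are large   */
    vps.number_of_views_minus_1 = 1;                 /* hypothetical two-view case   */
    vps.anchor_picture_dependency_maps[0][1] = 1;    /* anchors of view 1 use view 0 */
    /* the (0,1) non-anchor entry is left unsignalled, so it is inferred to be 0 */
    printf("non-anchor dependency of view 1 on view 0: %d\n",
           non_anchor_dependency(&vps, 0, 1));
    return 0;
}
```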
  • the decoder can create a map using all the dependency information once it receives the View Parameter Set. This enables it to know before it receives any slice which views are needed for decoding a particular view. As a result of this, we only need to parse the slice header to obtain the view_id and determine if this view is needed to decode a target view as indicated by a user. Thus, we do not need to buffer any frames or wait until a certain point to determine which frames are needed for decoding a particular view.
  • the dependency information and whether it is a base view is indicated in the View Parameter Set. Even an MPEG-4 AVC compatible base view has associated with it information that is specific to that view (e.g., camera parameters). This information may be used by other views for several purposes including view interpolation/synthesis.
  • By restricting it to just one such view, it is guaranteed that a non-Multi-view Video Coding decoder will be able to correctly decode the view, and a Multi-view Video Coding decoder can easily identify such a view from the View Parameter Set using the syntax avc_compatible_view_id. All other base views (non-MPEG-4 AVC compatible) can be identified using the is_base_view_flag.
  • a new slice header for Multi-view Video Coding slices is proposed.
  • the View Parameter Set is identified using the view_parameter_set_id.
  • the view_id information is needed for several Multi-view Video Coding requirements including view interpolation/synthesis, view random access, parallel processing, and so forth. This information can also be useful for special coding modes that only relate to cross-view prediction.
  • view_parameter_set_id specifies the view parameter set in use.
  • the value of the view_parameter_set_id shall be in the range 0 to 255.
  • view_id indicates the view id of the current view.
  • the value of view_id shall be in the range 0 to 255.
  • View random access is a Multi-view Video Coding requirement. The goal is to get access to any view with minimum decoding effort. Let us consider a simple example of view random access for the prediction structure shown in FIG. 3 .
  • view_parameter_set_id is set to 0.
  • number_of_views_minus_1 is set to 7.
  • avc_compatible_view_id could be set to 0.
  • for view S0, is_base_view_flag is set to 1, and for other views it is set to 0.
  • the dependency map for S0, S1, S2, S3, and S4 will look as shown in TABLE 4A (dependency table for S0 anchor_picture_dependency_map) and TABLE 4B (dependency table for S0 non_anchor_picture_dependency_map).
  • the dependency map for the other views can be written in a similar way.
  • the decoder can easily determine if a slice it receives is needed to decode a particular view.
  • the decoder only needs to parse the slice header to determine the view id of the current slice, and for the target view S3 it can look up the S3 columns in the two tables (TABLE 4A and TABLE 4B) to determine whether or not it should keep the current slice.
  • the decoder needs to distinguish between anchor pictures and non-anchor pictures since they may have different dependencies, as can be seen from TABLE 4A and TABLE 4B.
  • For the target view S3, we need to decode the anchor pictures of views S0, S2, and S4, but only need to decode the non-anchor pictures of views S2 and S4.
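  • Expressed as code, the slice-filtering decision just described only needs the two dependency maps. The C sketch below hard-codes the single column stated in the text for target view S3 (anchor dependencies S0, S2, S4; non-anchor dependencies S2, S4); the full contents of TABLE 4A and TABLE 4B are not reproduced here, so all other entries are left at 0.

```c
/* Illustrative view random access check for target view S3, using only the
 * dependencies stated in the text above. */
#include <stdio.h>

#define NUM_VIEWS 8   /* number_of_views_minus_1 == 7 in the example */

static int anchor_dep[NUM_VIEWS][NUM_VIEWS];       /* [i][j]: view j's anchors depend on view i     */
static int non_anchor_dep[NUM_VIEWS][NUM_VIEWS];   /* [i][j]: view j's non-anchors depend on view i */

/* Keep a slice if its view is the target itself, or if the target's pictures
 * of the corresponding type depend on that view. */
static int slice_needed(int slice_view_id, int target_view, int is_anchor_picture)
{
    if (slice_view_id == target_view) return 1;
    return is_anchor_picture ? anchor_dep[slice_view_id][target_view]
                             : non_anchor_dep[slice_view_id][target_view];
}

int main(void)
{
    /* dependencies of target view S3, as stated in the text */
    anchor_dep[0][3] = anchor_dep[2][3] = anchor_dep[4][3] = 1;
    non_anchor_dep[2][3] = non_anchor_dep[4][3] = 1;

    for (int v = 0; v < NUM_VIEWS; v++)
        printf("view S%d: keep anchor slices=%d, keep non-anchor slices=%d\n",
               v, slice_needed(v, 3, 1), slice_needed(v, 3, 0));
    return 0;
}
```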
  • In FIG. 4, an exemplary method for encoding multiple views of multi-view video content is indicated generally by the reference numeral 400.
  • the method 400 includes a start block 405 that passes control to a function block 410 .
  • the function block 410 reads a configuration file for the encoding parameters to be used to encode the multiple views, and passes control to a function block 415 .
  • the function block 415 sets N to be equal to the number of views to be encoded, and passes control to a function block 420.
  • the function block 420 sets number_of_views_minus_1 equal to N-1, sets avc_compatible_view_id equal to the view_id of the MPEG-4 AVC compatible view, and passes control to a function block 425.
  • the function block 425 sets view_parameter_set_id equal to a valid integer, initializes a variable i to be equal to zero, and passes control to a decision block 430 .
  • the decision block 430 determines whether or not i is greater than N. If so, then control is passed to a decision block 435 . Otherwise, control is passed to a function block 470 .
  • the decision block 435 determines whether or not the current view is a base view. If so, then control is passed to a function block 440 . Otherwise, control is passed to a function block 480 .
  • the function block 440 sets is_base_view_flag[i] equal to one, and passes control to a decision block 445 .
  • the decision block 445 determines whether or not the dependency is being updated. If so, then control is passed to a function block 450. Otherwise, control is passed to a function block 485.
  • the function block 450 sets dependency_update_flag equal to one, and passes control to a function block 455 .
  • the function block 455 sets a variable j equal to 0, and passes control to a decision block 460.
  • the decision block 460 determines whether or not j is less than N. If so, then control is passed to a function block 465 . Otherwise, control is passed to the function block 487 .
  • the function block 465 sets anchor_picture_dependency_maps[i][j] and non_anchor_picture_dependency_maps[i][j] to the values indicated by the configuration file, and passes control to a function block 467.
  • the function block 467 increments the variable j by one, and returns control to the decision block 460 .
  • the function block 470 sets camera_parameters_present_flag equal to one when camera parameters are present, sets camera_parameters_present_flag equal to zero otherwise, and passes control to a decision block 472 .
  • the decision block 472 determines whether or not camera_parameters_present_flag is equal to one. If so, then control is passed to a function block 432 . Otherwise, control is passed to a function block 434 .
  • the function block 432 writes the camera parameters, and passes control to the function block 434 .
  • the function block 434 writes the View Parameter Set (VPS) or the Sequence Parameter Set (SPS), and passes control to an end block 499 .
  • the function block 480 sets is_base_view_flag[i] equal to zero, and passes control to the decision block 445 .
  • the function block 485 sets dependency_update_flag equal to zero, and passes control to a function block 487 .
  • the function block 487 increments the variable i by 1, and returns control to the decision block 430 .
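  • As a control-flow summary of the encoding flow above (method 400), the C sketch below traces the order in which the syntax elements would be emitted. Printing stands in for actual entropy coding, the per-view values are a hypothetical configuration rather than data from the text, and the non-anchor map entry is emitted only when the corresponding anchor entry is 1, following the inference rule given earlier for the View Parameter Set.

```c
/* Trace of the syntax-element emission order suggested by the encoding flow
 * above (method 400).  printf() stands in for entropy coding, and the
 * per-view values are a hypothetical configuration, not data from the text. */
#include <stdio.h>

#define N 8                                   /* number of views from the configuration file */

int main(void)
{
    int is_base_view[N]      = { 1, 0, 0, 0, 0, 0, 0, 0 };   /* S0 is the AVC-compatible base view */
    int dependency_update[N] = { 1, 1, 1, 1, 1, 1, 1, 1 };
    int anchor_map[N][N]     = { { 0 } };     /* would be filled from the configuration file */
    int non_anchor_map[N][N] = { { 0 } };
    int camera_parameters_present_flag = 0;

    anchor_map[0][3] = 1;                     /* example entry: anchors of S3 depend on S0   */

    printf("view_parameter_set_id = %d\n", 0);
    printf("number_of_views_minus_1 = %d\n", N - 1);
    printf("avc_compatible_view_id = %d\n", 0);

    for (int i = 0; i < N; i++) {
        printf("is_base_view_flag[%d] = %d\n", i, is_base_view[i]);
        printf("dependency_update_flag = %d\n", dependency_update[i]);
        if (!dependency_update[i])
            continue;                         /* dependency info for this view not updated   */
        for (int j = 0; j < N; j++) {
            printf("anchor_picture_dependency_maps[%d][%d] = %d\n", i, j, anchor_map[i][j]);
            if (anchor_map[i][j])             /* non-anchor entry only when anchor entry is 1 */
                printf("non_anchor_picture_dependency_maps[%d][%d] = %d\n",
                       i, j, non_anchor_map[i][j]);
        }
    }

    printf("camera_parameters_present_flag = %d\n", camera_parameters_present_flag);
    /* if set, the 3x4 camera_parameters matrices would be written here */
    return 0;
}
```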
  • In FIG. 5, an exemplary method for decoding multiple views of multi-view video content is indicated generally by the reference numeral 500.
  • the method 500 includes a start block 505 that passes control to a function block 510 .
  • the function block 510 parses a Sequence Parameter Set (SPS) or View Parameter Set (VPS), view_parameter_set_id, number_of_views_minus_1, and avc_compatible_view_id, sets variables i and j equal to zero, sets N equal to number_of_views_minus_1, and passes control to a decision block 515.
  • the decision block 515 determines whether or not i is less than or equal to N. If so, then control is passed to a function block 570 . Otherwise, control is passed to a function block 525 .
  • the function block 570 parses camera_parameters_present_flag, and passes control to a decision block 572 .
  • the decision block 572 determines whether or not camera_parameters_present_flag is equal to one. If so, then control is passed to a function block 574. Otherwise, control is passed to a function block 576.
  • the function block 574 parses the camera parameters, and passes control to the function block 576 .
  • the function block 576 continues decoding, and passes control to an end block 599 .
  • the function block 525 parses is_base_view_flag[i] and dependency_update_flag, and passes control to a decision block 530 .
  • the decision block 530 determines whether or not dependency_update_flag is equal to zero. If so, then control is passed to a function block 532. Otherwise, control is passed to a decision block 535.
  • the function block 532 increments i by one, and returns control to the decision block 515 .
  • the decision block 535 determines whether or not j is less than or equal to N. If so, then control is passed to a function block 540. Otherwise, control is passed to a function block 537.
  • the function block 540 parses anchor_picture_dependency_maps[i][j], and passes control to a decision block 545 .
  • the decision block 545 determines whether or not anchor_picture_dependency_maps[i][j] is equal to one. If so, then control is passed to a function block 550. Otherwise, control is passed to a function block 547.
  • the function block 550 parses the non_anchor_picture_dependency_maps[i][j], and passes control to the function block 547 .
  • the function block 547 increments j by one, and returns control to the decision block 535 .
  • the function block 537 increments i by one, and returns control to the decision block 515.
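  • Mirroring the decoding flow above (method 500), the parsing side can be sketched as below. The "payload" is simply an array of already-decoded integer fields consumed in order, standing in for real ue(v)/flag parsing; the conditional presence of the non-anchor map entry follows the View Parameter Set semantics described earlier, and the payload itself is a hypothetical two-view example.

```c
/* Parse sketch mirroring the decoding flow above (method 500).  The payload
 * is an array of already-decoded integer fields consumed in order, standing
 * in for real ue(v)/flag parsing, and describes a hypothetical two-view
 * View Parameter Set. */
#include <stdio.h>

#define MAX_VIEWS 256

static unsigned char anchor_map[MAX_VIEWS][MAX_VIEWS];
static unsigned char non_anchor_map[MAX_VIEWS][MAX_VIEWS];

static const int payload[] = {
    0,       /* view_parameter_set_id                                             */
    1,       /* number_of_views_minus_1, i.e. N = 1 (two views)                   */
    0,       /* avc_compatible_view_id                                            */
    1, 1,    /* view 0: is_base_view_flag, dependency_update_flag                 */
    0,       /*   anchor_picture_dependency_maps[0][0] (non-anchor inferred as 0) */
    1, 1,    /*   anchor_picture_dependency_maps[0][1], non_anchor_..._maps[0][1] */
    0, 0,    /* view 1: is_base_view_flag, dependency_update_flag (no update)     */
    0        /* camera_parameters_present_flag                                    */
};
static int pos;
static int next_field(void) { return payload[pos++]; }

int main(void)
{
    int view_parameter_set_id   = next_field();
    int number_of_views_minus_1 = next_field();
    int avc_compatible_view_id  = next_field();
    int N = number_of_views_minus_1;

    for (int i = 0; i <= N; i++) {
        int is_base_view_flag      = next_field();
        int dependency_update_flag = next_field();
        printf("view %d: is_base_view_flag=%d dependency_update_flag=%d\n",
               i, is_base_view_flag, dependency_update_flag);
        if (!dependency_update_flag)
            continue;                     /* dependency info not updated for this view      */
        for (int j = 0; j <= N; j++) {
            anchor_map[i][j] = (unsigned char)next_field();
            if (anchor_map[i][j])         /* non-anchor entry is present only in this case  */
                non_anchor_map[i][j] = (unsigned char)next_field();
        }
    }

    if (next_field()) {                   /* camera_parameters_present_flag                 */
        /* the 3x4 camera_parameters matrices would be parsed here */
    }

    printf("VPS %d: %d views, AVC-compatible view %d, "
           "non-anchor pictures of view 1 depend on view 0: %d\n",
           view_parameter_set_id, N + 1, avc_compatible_view_id, non_anchor_map[0][1]);
    return 0;
}
```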
  • one advantage/feature is an apparatus that includes an encoder for encoding at least two views corresponding to multi-view video content into a resultant bitstream, wherein the resultant bitstream is encoded to include view specific information.
  • the view specific information indicates a decoding interdependency between at least some of the at least two views.
  • Another advantage/feature is the apparatus having the encoder as described above, wherein the decoding interdependency allows a corresponding decoding of at least one of the at least two views using only a subset of the at least two views for the corresponding decoding.
  • Yet another advantage/feature is the apparatus having the encoder as described above, wherein the decoding interdependency indicated in the view specific information is used for random access of at least one of the at least two views by dropping slices related to any other ones of the at least two views indicated as non-interdependent with respect to the at least one view. Still another advantage/feature is the apparatus having the encoder as described above, wherein the view specific information is included in a high level syntax. A further advantage/feature is the apparatus having the encoder as described above, wherein the view specific information is included in a parameter set compliant with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation.
  • a yet further advantage/feature is the apparatus having the encoder as described above, wherein the view specific information is included in a View Parameter Set.
  • a still further advantage/feature is the apparatus having the encoder wherein the view specific information is included in a View Parameter Set as described above, wherein the View Parameter Set is assigned a NAL unit type specifically for use only with the View Parameter Set.
  • An additional advantage/feature is the apparatus having the encoder wherein a NAL unit type is assigned specifically for use only with the View Parameter Set as described above, wherein the NAL unit type is 14.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the view specific information includes at least one syntax element for indicating a View Parameter Set id.
  • another advantage/feature is the apparatus having the encoder wherein the view specific information includes at least one syntax element for indicating a View Parameter Set id as described above, wherein the at least one syntax element is denoted by a view_parameter_set_id syntax element. Also, another advantage/feature is the apparatus having the encoder as described above, wherein the view specific information includes at least one syntax element for indicating a number of views. Additionally, another advantage/feature is the apparatus having the encoder wherein the view specific information includes at least one syntax element for indicating a number of views as described above, wherein the at least one syntax element is denoted by a number_of_views_minus_1 syntax element.
  • the apparatus having the encoder as described above, wherein the view specific information includes at least one syntax element for indicating a view id for a particular one of the at least two views, when the particular one of the at least two views is encoded in a resultant bitstream that is compliant with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation.
  • another advantage/feature is the apparatus having the encoder wherein the view specific information includes at least one syntax element for indicating a view id for a particular one of the at least two views as described above, wherein the at least one syntax element is denoted by an avc_compatible_view_id syntax element.
  • the view specific information includes at least one syntax element or is implicitly derivable from a high level syntax, the at least one syntax element and the high level syntax for indicating that a particular one of the at least two views is compatible with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation.
  • another advantage/feature is the apparatus having the encoder wherein the view specific information includes at least one syntax element or is implicitly derivable from a high level syntax as described above, wherein the at least one syntax element is denoted by an is_base_view_flag syntax element.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the view specific information includes at least one syntax element for indicating whether dependency information for at least one of the at least two views is present in the resultant bitstream.
  • another advantage/feature is the apparatus having the encoder wherein the view specific information includes at least one syntax element for indicating whether dependency information for at least one of the at least two views is present in the resultant bitstream as described above, wherein the at least one syntax element is denoted by a dependency_update_flag syntax element. Also, another advantage/feature is the apparatus having the encoder as described above, wherein the view specific information includes at least one syntax element for indicating whether at least one anchor picture in a current one of the at least two views is used for decoding any other ones of the at least two views.
  • another advantage/feature is the apparatus having the encoder wherein the view specific information includes at least one syntax element for indicating whether at least one anchor picture in a current one of the at least two views is used for decoding any other ones of the at least two views as described above, wherein the at least one syntax element is denoted by an anchor_picture_dependency_maps[i][j] syntax element. Also, another advantage/feature is the apparatus having the encoder as described above, wherein the view specific information includes at least one syntax element for indicating whether at least one non-anchor picture in a current one of the at least two views is used for decoding any other ones of the at least two views.
  • another advantage/feature is the apparatus having the encoder wherein the view specific information includes at least one syntax element for indicating whether at least one non-anchor picture in a current one of the at least two views is used for decoding any other ones of the at least two views as described above, wherein the at least one syntax element is denoted by a non_anchor_picture_dependency_maps[i][j] syntax element.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the resultant bitstream is encoded to include at least one syntax element related to at least one camera parameter for at least one of the at least two views.
  • another advantage/feature is the apparatus having the encoder wherein the resultant bitstream is encoded to include at least one syntax element related to at least one camera parameter for at least one of the at least two views as described above, wherein the at least one syntax element is included in a parameter set corresponding to the resultant bitstream.
  • another advantage/feature is an apparatus that includes an encoder for encoding at least two views corresponding to multi-view video content by defining as a base view any of the at least two views that, for a decoding thereof, is independent of any other of the at least two views.
  • another advantage/feature is an apparatus that includes an encoder for encoding at least two views corresponding to multi-view video content by encoding at least one of the at least two views in a resultant bitstream that is syntax compliant with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation, for backwards compatibility therewith.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the at least one view is a base view that, for a decoding thereof, is independent of any other of the at least two views.
  • an avc_compatible_view_id syntax element identifies the at least one view as being encoded in the resultant bitstream that is syntax compliant with the International Organization for Standardization/International Electrotechnical Commission Moving Picture Experts Group-4 Part 10 Advanced Video Coding standard/International Telecommunication Union, Telecommunication Sector H.264 recommendation for backwards compatibility.
  • another advantage/feature is an apparatus that includes an encoder for encoding at least one of at least two views corresponding to multi-view video content by selecting one of two pre-defined slice types. Further, another advantage/feature is the apparatus having the encoder as described above, wherein the two pre-defined slice types are an Instantaneous Decoding Refresh slice type and a non-Instantaneous Decoding Refresh slice type.
  • another advantage/feature is the apparatus having the encoder that selects between the Instantaneous Decoding Refresh slice type and the non-Instantaneous Decoding Refresh slice type as described above, wherein NAL unit type 22 is used for the Instantaneous Decoding Refresh slice type and NAL unit type 23 is used for the non-Instantaneous Decoding Refresh slices.
  • another advantage/feature is the apparatus having the encoder as described above, wherein slice headers for at least one of the at least two slices includes view specific syntax.
  • another advantage/feature is the apparatus having the encoder wherein slice headers for at least one of the at least two slices includes view specific syntax as described above, wherein the view specific syntax is conditioned on NAL unit type 23 and NAL unit type 24.
  • another advantage/feature is the apparatus having the encoder wherein slice headers for at least one of the at least two slices includes view specific syntax as described above, wherein the view specific syntax includes a view parameter set identifier and a view identifier. Also, another advantage/feature is the apparatus having the encoder wherein the view specific syntax includes a view parameter set identifier and a view identifier as described above, wherein the view parameter set identifier is denoted by a view_parameter_set_id syntax element and the view identifier is denoted by a view_id syntax element.
  • another advantage/feature is an apparatus that includes an encoder for encoding at least two views corresponding to multi-view content into a resultant bitstream, wherein the resultant bitstream is encoded to include at least one camera parameter corresponding to at least one of the at least two views.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the resultant bitstream is encoded to include a View Parameter Set, and the at least one camera parameter is included in the View Parameter Set.
  • another advantage/feature is the apparatus having the encoder as described above, wherein a presence of the at least one camera parameter is indicated by a syntax element.
  • another advantage/feature is the apparatus having the encoder wherein a presence of the at least one camera parameter is indicated by a syntax element as described above, wherein the syntax element is a camera_parameters_present_flag syntax element. Additionally, another advantage/feature is the apparatus having the encoder as described above, wherein the at least one camera parameter is denoted by a camera_parameters syntax element.
  • another advantage/feature is an apparatus that includes an encoder for encoding at least two views corresponding to multi-view video content into a resultant bitstream, wherein the resultant bitstream is encoded to include at least one syntax element related to at least one camera parameter for at least one of the at least two views.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the at least one syntax element is a high level syntax element.
  • another advantage/feature is the apparatus having the encoder as described above, wherein the at least one syntax element is included in a parameter set corresponding to the resultant bitstream.
  • the teachings of the present principles are implemented as a combination of hardware and software.
  • the software may be implemented as an application program tangibly embodied on a program storage unit.
  • the application program may be uploaded to, and executed by, a machine comprising any suitable architecture.
  • the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random access memory (“RAM”), and input/output (“I/O”) interfaces.
  • the computer platform may also include an operating system and microinstruction code.
  • the various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU.
  • various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
US12/224,817 2006-03-29 2007-02-27 Multi-View Video Coding Method and Device Abandoned US20090207904A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/224,817 US20090207904A1 (en) 2006-03-29 2007-02-27 Multi-View Video Coding Method and Device

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US78709206P 2006-03-29 2006-03-29
US12/224,817 US20090207904A1 (en) 2006-03-29 2007-02-27 Multi-View Video Coding Method and Device
PCT/US2007/004971 WO2007126508A2 (en) 2006-03-29 2007-02-27 Multi-view video coding method and device

Publications (1)

Publication Number Publication Date
US20090207904A1 true US20090207904A1 (en) 2009-08-20

Family

ID=38515387

Family Applications (3)

Application Number Title Priority Date Filing Date
US12/224,817 Abandoned US20090207904A1 (en) 2006-03-29 2007-02-27 Multi-View Video Coding Method and Device
US12/224,814 Expired - Fee Related US9100659B2 (en) 2006-03-29 2007-02-27 Multi-view video coding method and device using a base view
US12/224,816 Abandoned US20090225826A1 (en) 2006-03-29 2007-02-27 Multi-View Video Coding Method and Device

Family Applications After (2)

Application Number Title Priority Date Filing Date
US12/224,814 Expired - Fee Related US9100659B2 (en) 2006-03-29 2007-02-27 Multi-view video coding method and device using a base view
US12/224,816 Abandoned US20090225826A1 (en) 2006-03-29 2007-02-27 Multi-View Video Coding Method and Device

Country Status (11)

Country Link
US (3) US20090207904A1 (ru)
EP (3) EP1999968A2 (ru)
JP (8) JP5213064B2 (ru)
KR (3) KR101361896B1 (ru)
CN (3) CN101416518B (ru)
AU (2) AU2007243935A1 (ru)
BR (3) BRPI0709194A2 (ru)
MX (2) MX2008012382A (ru)
RU (2) RU2488973C2 (ru)
WO (3) WO2007126508A2 (ru)
ZA (2) ZA200807023B (ru)

Families Citing this family (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100091845A1 (en) * 2006-03-30 2010-04-15 Byeong Moon Jeon Method and apparatus for decoding/encoding a video signal
WO2007114612A1 (en) 2006-03-30 2007-10-11 Lg Electronics Inc. A method and apparatus for decoding/encoding a video signal
WO2007148909A1 (en) * 2006-06-19 2007-12-27 Lg Electronics, Inc. Method and apparatus for processing a video signal
KR101450921B1 (ko) * 2006-07-05 2014-10-15 Thomson Licensing Method and apparatus for multi-view video encoding and decoding
WO2008023968A1 (en) 2006-08-25 2008-02-28 Lg Electronics Inc A method and apparatus for decoding/encoding a video signal
JP5143829B2 (ja) * 2006-09-07 2013-02-13 LG Electronics Inc. Method and apparatus for decoding a scalable-video-coded bitstream
US7742524B2 (en) * 2006-11-17 2010-06-22 Lg Electronics Inc. Method and apparatus for decoding/encoding a video signal using inter-layer prediction
DK2103136T3 (en) * 2006-12-21 2017-12-04 Thomson Licensing Methods and apparatus for improved signaling using high level syntax for multi-view video coding and decoding
KR100801968B1 (ko) * 2007-02-06 2008-02-12 Gwangju Institute of Science and Technology Method for measuring disparity, method for synthesizing intermediate pictures, and multi-view video encoding method, decoding method, encoder, and decoder using the same
EP4210330A1 (en) 2007-04-12 2023-07-12 Dolby International AB Tiling in video decoding and encoding
EP2143278B1 (en) * 2007-04-25 2017-03-22 Thomson Licensing Inter-view prediction with downsampled reference pictures
BR122012021796A2 (pt) * 2007-10-05 2015-08-04 Thomson Licensing Method for incorporating video usability information (VUI) in a multi-view video coding (MVC) system
CN101911700A (zh) * 2008-01-11 2010-12-08 Thomson Licensing Video and depth coding
CN101562745B (zh) * 2008-04-18 2012-07-04 Huawei Technologies Co., Ltd. Method and device for encoding and decoding multi-view video images
BRPI0911672A2 (pt) 2008-04-25 2018-03-13 Thomson Licensing Inter-view skip modes with depth
AU2011250757B2 (en) * 2009-01-19 2012-09-06 Panasonic Intellectual Property Corporation Of America Coding method, decoding method, coding apparatus, decoding apparatus, program, and integrated circuit
WO2010085361A2 (en) 2009-01-26 2010-07-29 Thomson Licensing Frame packing for video coding
ES2439316T3 (es) * 2009-02-19 2014-01-22 Panasonic Corporation Recording medium and playback device
JP4962525B2 (ja) * 2009-04-08 2012-06-27 Sony Corporation Playback device, playback method, and program
CA2718447C (en) 2009-04-28 2014-10-21 Panasonic Corporation Image decoding method, image coding method, image decoding apparatus, and image coding apparatus
EP2425626A2 (en) 2009-05-01 2012-03-07 Thomson Licensing Inter-layer dependency information for 3dv
US8411746B2 (en) * 2009-06-12 2013-04-02 Qualcomm Incorporated Multiview video coding over MPEG-2 systems
US8780999B2 (en) 2009-06-12 2014-07-15 Qualcomm Incorporated Assembling multiview video coding sub-BITSTREAMS in MPEG-2 systems
US8948241B2 (en) * 2009-08-07 2015-02-03 Qualcomm Incorporated Signaling characteristics of an MVC operation point
JP5722349B2 (ja) 2010-01-29 2015-05-20 Thomson Licensing Block-based interleaving
US20110216827A1 (en) * 2010-02-23 2011-09-08 Jiancong Luo Method and apparatus for efficient encoding of multi-view coded video data
US9226045B2 (en) * 2010-08-05 2015-12-29 Qualcomm Incorporated Signaling attributes for network-streamed video data
WO2012036903A1 (en) 2010-09-14 2012-03-22 Thomson Licensing Compression methods and apparatus for occlusion data
KR20120038385A (ko) * 2010-10-13 2012-04-23 Electronics and Telecommunications Research Institute Method and apparatus for transmitting stereoscopic image information
EP2642758A4 (en) * 2010-11-15 2015-07-29 Lg Electronics Inc METHOD FOR CONVERTING A FRAME FORMAT AND APPARATUS USING THE SAME
CN103339946B (zh) * 2010-12-03 2015-10-07 LG Electronics Inc. Receiving device and method for receiving a multi-view three-dimensional broadcast signal
CN103190153B (zh) * 2010-12-13 2015-11-25 Electronics and Telecommunications Research Institute Signaling method for stereoscopic video service and device using the same
US9635355B2 (en) 2011-07-28 2017-04-25 Qualcomm Incorporated Multiview video coding
US9674525B2 (en) 2011-07-28 2017-06-06 Qualcomm Incorporated Multiview video coding
BR112014003165A2 (pt) * 2011-08-09 2017-03-01 Samsung Electronics Co Ltd Method for encoding a depth map of multi-view video data, apparatus for encoding a depth map of multi-view video data, method for decoding a depth map of multi-view video data, and apparatus for decoding a depth map of multi-view video data
WO2013030458A1 (en) * 2011-08-31 2013-03-07 Nokia Corporation Multiview video coding and decoding
US9258559B2 (en) 2011-12-20 2016-02-09 Qualcomm Incorporated Reference picture list construction for multi-view and three-dimensional video coding
US9451252B2 (en) 2012-01-14 2016-09-20 Qualcomm Incorporated Coding parameter sets and NAL unit headers for video coding
WO2013115562A1 (ko) * 2012-01-30 2013-08-08 Samsung Electronics Co., Ltd. Method and apparatus for multi-view video encoding based on a prediction structure for view switching, and method and apparatus for multi-view video decoding based on a prediction structure for view switching
TW201342884A (zh) 2012-01-31 2013-10-16 Sony Corp Encoding device and encoding method, and decoding device and decoding method
KR20130116782A (ko) * 2012-04-16 2013-10-24 Electronics and Telecommunications Research Institute Scheme for representing layer information in hierarchical video coding
US10205961B2 (en) 2012-04-23 2019-02-12 Qualcomm Incorporated View dependency in multi-view coding and 3D coding
CN103379333B (zh) * 2012-04-25 2018-12-04 Zhejiang University Encoding/decoding method, encoding/decoding method for a video sequence bitstream, and corresponding device
US9473771B2 (en) 2013-04-08 2016-10-18 Qualcomm Incorporated Coding video data for an output layer set
KR20160003070A (ko) * 2013-07-19 2016-01-08 MediaTek Inc. Method and apparatus for camera parameter signaling in 3D video coding
CN104980763B (zh) * 2014-04-05 2020-01-17 Zhejiang University Video bitstream, and video encoding/decoding method and device

Family Cites Families (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5557331A (en) * 1993-03-11 1996-09-17 Matsushita Electric Industrial Co., Ltd. Image encoding method, an image encoding circuit, an image encoding apparatus, and an optical disk
DE4331376C1 (de) * 1993-09-15 1994-11-10 Fraunhofer Ges Forschung Method for determining the coding type to be selected for coding at least two signals
US5771081A (en) * 1994-02-28 1998-06-23 Korea Telecommunication Authority Bit system for transmitting digital video data
US5619256A (en) * 1995-05-26 1997-04-08 Lucent Technologies Inc. Digital 3D/stereoscopic video compression technique utilizing disparity and motion compensated predictions
US6055012A (en) * 1995-12-29 2000-04-25 Lucent Technologies Inc. Digital multi-view video compression with complexity and compatibility constraints
US5763943A (en) * 1996-01-29 1998-06-09 International Business Machines Corporation Electronic modules with integral sensor arrays
JPH09261653 (ja) 1996-03-18 1997-10-03 Sharp Corp Multi-view image encoding device
KR980007751 (ko) * 1996-06-26 1998-03-30 Koo Ja-hong Parallel processing apparatus and method for an MPEG-2 variable-length decoder
US6055274A (en) * 1997-12-30 2000-04-25 Intel Corporation Method and apparatus for compressing multi-view video
BR9906523A (pt) 1998-06-11 2000-07-25 Koninkl Philips Electonics N V Apparatus and process for recording a digital video information signal on a record carrier, and record carrier
US6151362A (en) * 1998-10-30 2000-11-21 Motorola, Inc. Joint rate control for stereoscopic video coding
KR100433516B1 (ko) * 2000-12-08 2004-05-31 Samsung Electronics Co., Ltd. Transcoding method
KR100433625B1 (ko) 2001-11-17 2004-06-02 Pohang University of Science and Technology Multi-view image synthesis apparatus using two images from a stereo camera and a binocular disparity map
KR100446635B1 (ko) * 2001-11-27 2004-09-04 Samsung Electronics Co., Ltd. Apparatus and method for depth-image-based representation of three-dimensional objects
RU2237283C2 (ru) * 2001-11-27 2004-09-27 Samsung Electronics Co., Ltd. Device and method for representing a three-dimensional object based on depth images
US7292691B2 (en) * 2002-01-02 2007-11-06 Sony Corporation Progressive video refresh slice detection
KR100481732B1 (ko) * 2002-04-20 2005-04-11 Korea Electronics Technology Institute Multi-view video encoding apparatus
KR100475060B1 (ko) 2002-08-07 2005-03-10 Electronics and Telecommunications Research Institute Multiplexing apparatus and method reflecting user requests for multi-view 3D video
JP4045913B2 (ja) 2002-09-27 2008-02-13 Mitsubishi Electric Corporation Image encoding device, image encoding method, and image processing device
TWI249356B (en) * 2002-11-06 2006-02-11 Nokia Corp Picture buffering for prediction references and display
DE602004029551D1 (de) * 2003-01-28 2010-11-25 Thomson Licensing Staggercasting in robust mode
US7778328B2 (en) * 2003-08-07 2010-08-17 Sony Corporation Semantics-based motion estimation for multi-view video coding
US7961786B2 (en) 2003-09-07 2011-06-14 Microsoft Corporation Signaling field type information
KR100965881B1 (ko) * 2003-10-10 2010-06-24 Samsung Electronics Co., Ltd. Video data encoding system and decoding system
KR100987775B1 (ko) * 2004-01-20 2010-10-13 Samsung Electronics Co., Ltd. Method for three-dimensional encoding of video
EP1727091A1 (en) 2004-02-27 2006-11-29 Tdvision Corporation S.A. DE C.V. Method and system for digital coding 3d stereoscopic video images
KR100679740B1 (ko) * 2004-06-25 2007-02-07 Yonsei University Method for encoding/decoding multi-view video with view selection
US7515759B2 (en) * 2004-07-14 2009-04-07 Sharp Laboratories Of America, Inc. 3D video coding using sub-sequences
US7444664B2 (en) * 2004-07-27 2008-10-28 Microsoft Corp. Multi-view video format
US20060028846A1 (en) * 2004-08-06 2006-02-09 Hsiao-Chung Yang Connection device for solar panels in a solar powered lantern to enable the solar panels to extend horizontally to the solar powered lantern
US8155186B2 (en) 2004-08-11 2012-04-10 Hitachi, Ltd. Bit stream recording medium, video encoder, and video decoder
US8369406B2 (en) * 2005-07-18 2013-02-05 Electronics And Telecommunications Research Institute Apparatus of predictive coding/decoding using view-temporal reference picture buffers and method using the same

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6161382A (en) * 1992-07-30 2000-12-19 Brotz; Gregory R. Thermoelectric actuator
US6192186B1 (en) * 1997-11-06 2001-02-20 Sanyo Electric Co. Ltd Method and apparatus for providing/reproducing MPEG data
US6056012A (en) * 1999-02-25 2000-05-02 Ecolab Inc. Inline check valve
US20020012315A1 (en) * 2000-02-25 2002-01-31 Sony Corporation Recording medium, recording apparatus, and reading apparatus
US20060165169A1 (en) * 2005-01-21 2006-07-27 Stmicroelectronics, Inc. Spatio-temporal graph-segmentation encoding for multiple video streams
US20070110150A1 (en) * 2005-10-11 2007-05-17 Nokia Corporation System and method for efficient scalable stream adaptation
US7903737B2 (en) * 2005-11-30 2011-03-08 Mitsubishi Electric Research Laboratories, Inc. Method and system for randomly accessing multiview videos with known prediction dependency
US20090026645A1 (en) * 2006-03-02 2009-01-29 Daisen Industry Co., Ltd. Foamed Resin Molding Machine and Method of Operating the Same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang US 2007/0110150 *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8891612B2 (en) * 2005-03-31 2014-11-18 Samsung Electronics Co., Ltd. Encoding and decoding multi-view video while accommodating absent or unreliable camera parameters
US20100053863A1 (en) * 2006-04-27 2010-03-04 Research In Motion Limited Handheld electronic device having hidden sound openings offset from an audio source
US9521420B2 (en) 2006-11-13 2016-12-13 Tech 5 Managing splice points for non-seamless concatenated bitstreams
US9716883B2 (en) 2006-11-13 2017-07-25 Cisco Technology, Inc. Tracking and determining pictures in successive interdependency levels
US8416859B2 (en) * 2006-11-13 2013-04-09 Cisco Technology, Inc. Signalling and extraction in compressed video of pictures belonging to interdependency tiers
US8875199B2 (en) 2006-11-13 2014-10-28 Cisco Technology, Inc. Indicating picture usefulness for playback optimization
US8958486B2 (en) 2007-07-31 2015-02-17 Cisco Technology, Inc. Simultaneous processing of media and redundancy streams for mitigating impairments
US8804845B2 (en) 2007-07-31 2014-08-12 Cisco Technology, Inc. Non-enhancing media redundancy coding for mitigating transmission impairments
US8718388B2 (en) 2007-12-11 2014-05-06 Cisco Technology, Inc. Video processing with tiered interdependencies of pictures
US8873932B2 (en) 2007-12-11 2014-10-28 Cisco Technology, Inc. Inferential processing to ascertain plural levels of picture interdependencies
US8804843B2 (en) 2008-01-09 2014-08-12 Cisco Technology, Inc. Processing and managing splice points for the concatenation of two video streams
US8416858B2 (en) 2008-02-29 2013-04-09 Cisco Technology, Inc. Signalling picture encoding schemes and associated picture properties
US9819899B2 (en) 2008-06-12 2017-11-14 Cisco Technology, Inc. Signaling tier information to assist MMCO stream manipulation
US8886022B2 (en) 2008-06-12 2014-11-11 Cisco Technology, Inc. Picture interdependencies signals in context of MMCO to assist stream manipulation
US8971402B2 (en) 2008-06-17 2015-03-03 Cisco Technology, Inc. Processing of impaired and incomplete multi-latticed video streams
US8705631B2 (en) 2008-06-17 2014-04-22 Cisco Technology, Inc. Time-shifted transport of multi-latticed video for resiliency from burst-error effects
US8699578B2 (en) 2008-06-17 2014-04-15 Cisco Technology, Inc. Methods and systems for processing multi-latticed video streams
US9723333B2 (en) 2008-06-17 2017-08-01 Cisco Technology, Inc. Output of a video signal from decoded and derived picture information
US9407935B2 (en) 2008-06-17 2016-08-02 Cisco Technology, Inc. Reconstructing a multi-latticed video signal
US9350999B2 (en) 2008-06-17 2016-05-24 Tech 5 Methods and systems for processing latticed time-skewed video streams
US8259814B2 (en) 2008-11-12 2012-09-04 Cisco Technology, Inc. Processing of a video program having plural processed representations of a single video signal for reconstruction and output
US8320465B2 (en) 2008-11-12 2012-11-27 Cisco Technology, Inc. Error concealment of plural processed representations of a single video signal received in a video program
US8259817B2 (en) 2008-11-12 2012-09-04 Cisco Technology, Inc. Facilitating fast channel changes through promotion of pictures
US8681876B2 (en) 2008-11-12 2014-03-25 Cisco Technology, Inc. Targeted bit appropriations based on picture importance
US20100118978A1 (en) * 2008-11-12 2010-05-13 Rodriguez Arturo A Facilitating fast channel changes through promotion of pictures
US8761266B2 (en) 2008-11-12 2014-06-24 Cisco Technology, Inc. Processing latticed and non-latticed pictures of a video program
US8548040B2 (en) 2009-01-19 2013-10-01 Panasonic Corporation Coding method, decoding method, coding apparatus, decoding apparatus, program, and integrated circuit
US8451890B2 (en) 2009-01-19 2013-05-28 Panasonic Corporation Coding method, decoding method, coding apparatus, decoding apparatus, program, and integrated circuit
US8553761B2 (en) 2009-01-19 2013-10-08 Panasonic Corporation Coding method, decoding method, coding apparatus, decoding apparatus, program, and integrated circuit
US20100266010A1 (en) * 2009-01-19 2010-10-21 Chong Soon Lim Coding method, decoding method, coding apparatus, decoding apparatus, program, and integrated circuit
US8326131B2 (en) 2009-02-20 2012-12-04 Cisco Technology, Inc. Signalling of decodable sub-sequences
US8782261B1 (en) 2009-04-03 2014-07-15 Cisco Technology, Inc. System and method for authorization of segment boundary notifications
US8949883B2 (en) 2009-05-12 2015-02-03 Cisco Technology, Inc. Signalling buffer characteristics for splicing operations of video streams
US9609039B2 (en) 2009-05-12 2017-03-28 Cisco Technology, Inc. Splice signalling buffer characteristics
US9467696B2 (en) 2009-06-18 2016-10-11 Tech 5 Dynamic streaming plural lattice video coding representations of video
US20110222837A1 (en) * 2010-03-11 2011-09-15 Cisco Technology, Inc. Management of picture referencing in video streams for plural playback modes
US11496760B2 (en) 2011-07-22 2022-11-08 Qualcomm Incorporated Slice header prediction for depth maps in three-dimensional video codecs

Also Published As

Publication number Publication date
ZA200807142B (en) 2010-02-24
WO2007126508A2 (en) 2007-11-08
RU2008142774A (ru) 2010-05-10
CN101416518A (zh) 2009-04-22
JP2009531968A (ja) 2009-09-03
KR101353193B1 (ko) 2014-01-21
BRPI0709167A2 (pt) 2011-06-28
BRPI0708305A2 (pt) 2011-05-24
WO2007126509A2 (en) 2007-11-08
EP1999967A2 (en) 2008-12-10
WO2007126511A2 (en) 2007-11-08
US20090185616A1 (en) 2009-07-23
JP2016054526A (ja) 2016-04-14
AU2007243933B2 (en) 2012-09-13
CN101416518B (zh) 2013-07-10
KR20080108448A (ko) 2008-12-15
RU2488973C2 (ru) 2013-07-27
US9100659B2 (en) 2015-08-04
JP5845299B2 (ja) 2016-01-20
ZA200807023B (en) 2009-11-25
WO2007126509A3 (en) 2008-06-19
KR20090007293A (ko) 2009-01-16
JP5669273B2 (ja) 2015-02-12
CN101416519A (zh) 2009-04-22
RU2529881C2 (ru) 2014-10-10
MX2008011652A (es) 2008-09-22
CN101416519B (zh) 2012-01-11
JP2009531966A (ja) 2009-09-03
JP5213088B2 (ja) 2013-06-19
EP1999968A2 (en) 2008-12-10
AU2007243935A1 (en) 2007-11-08
JP2013017215A (ja) 2013-01-24
JP5213064B2 (ja) 2013-06-19
WO2007126511A3 (en) 2008-01-03
JP2009531967A (ja) 2009-09-03
KR101383735B1 (ko) 2014-04-08
JP2014131348A (ja) 2014-07-10
KR101361896B1 (ko) 2014-02-12
US20090225826A1 (en) 2009-09-10
EP1999966A2 (en) 2008-12-10
CN101416517A (zh) 2009-04-22
JP2012235478A (ja) 2012-11-29
JP5255558B2 (ja) 2013-08-07
RU2008142771A (ru) 2010-05-10
AU2007243933A1 (en) 2007-11-08
KR20080108449A (ko) 2008-12-15
BRPI0709194A2 (pt) 2011-06-28
WO2007126508A3 (en) 2008-03-27
MX2008012382A (es) 2008-11-18
JP2013118671A (ja) 2013-06-13

Similar Documents

Publication Publication Date Title
US9100659B2 (en) Multi-view video coding method and device using a base view
US20090323824A1 (en) Methods and Apparatus for Use in Multi-View Video Coding
JP6395667B2 (ja) Method and apparatus for improved signaling using high-level syntax for multi-view video encoding and decoding
KR101450921B1 (ko) Method and apparatus for multi-view video encoding and decoding
KR101558627B1 (ko) Method and apparatus for incorporating video usability information (VUI) in a multi-view video coding system
US20090147860A1 (en) Method and apparatus for signaling view scalability in multi-view video coding
AU2012203039B2 (en) Methods and apparatus for use in a multi-view video coding system
AU2012261656A1 (en) Methods and apparatus for use in a multi-view video coding system

Legal Events

Date Code Title Description
AS Assignment

Owner name: THOMSON LICENSING, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PANDIT, PURVIN BIBHAS;SU, YEPING;YIN, PENG;AND OTHERS;REEL/FRAME:021518/0450;SIGNING DATES FROM 20071029 TO 20071112

AS Assignment

Owner name: THOMSON LICENSING DTV, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING;REEL/FRAME:041370/0433

Effective date: 20170113

AS Assignment

Owner name: THOMSON LICENSING DTV, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING;REEL/FRAME:041378/0630

Effective date: 20170113

AS Assignment

Owner name: INTERDIGITAL MADISON PATENT HOLDINGS, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING DTV;REEL/FRAME:046763/0001

Effective date: 20180723

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION