WO2013157826A1 - Image information decoding method, image decoding method, and apparatus using same - Google Patents
Image information decoding method, image decoding method, and apparatus using same
- Publication number
- WO2013157826A1 (PCT/KR2013/003204)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- parameter set
- layer
- picture
- reference picture
- Prior art date
Classifications
- H04N19/187—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/124—Quantisation
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
- H04N19/188—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a video data packet, e.g. a network abstraction layer [NAL] unit
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/31—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the temporal domain
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
- H04N19/593—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
- H04N19/33—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
Definitions
- the present invention relates to video encoding and decoding processing, and more particularly, to a method and apparatus for decoding information of a video in a bitstream.
- Examples of image compression techniques include an inter prediction technique of predicting pixel values included in a current picture from temporally previous and/or subsequent pictures, an intra prediction technique of predicting pixel values included in a current picture using pixel information within the current picture, and an entropy encoding technique of allocating a short code to a symbol with a high frequency of appearance and a long code to a symbol with a low frequency of appearance.
- Conventional video compression technology is premised on a constant network bandwidth within a limited hardware operating environment and does not take a fluctuating network environment into account.
- Therefore, a new compression technique is required to compress image data used in network environments in which the bandwidth changes frequently, and a scalable video encoding / decoding method may be used for this purpose.
- An object of the present invention is to provide a method and apparatus for describing extraction and scalability information in a hierarchical bitstream.
- Another technical problem of the present invention is to provide a method and apparatus for representing scalability information of various types of bitstreams in a flexible manner.
- Another technical problem of the present invention is to provide a method and apparatus for adaptively converting extraction and scalability information in a hierarchical bitstream at a packet level.
- a method of decoding image information includes receiving a bitstream including a network abstraction layer (NAL) unit that contains information related to an encoded image, and parsing a NAL unit header of the NAL unit, wherein the NAL unit header does not include 1-bit flag information indicating whether the NAL unit is a non-reference picture or a reference picture in the entire bitstream at the time of encoding.
- An image decoding method includes decoding a received picture, marking the decoded picture as a reference picture in a decoded picture buffer (DPB), parsing a slice header of a picture following the decoded picture, and indicating whether the decoded picture is a reference picture or a non-reference picture based on reference picture information included in the slice header.
- the method may further include receiving a Supplemental Enhancement Information (SEI) message including information on a parameter set being activated, and parsing information on the parameter set.
- a method and apparatus for describing extraction and scalability information in a hierarchical bitstream may be provided.
- a method and apparatus for representing scalability information of various kinds of bitstreams in a flexible manner are provided.
- a method and apparatus for adaptively converting extraction and scalability information in a hierarchical bitstream at the packet level are provided.
- FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment.
- FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment.
- FIG. 3 is a conceptual diagram schematically illustrating an embodiment of a scalable video coding structure using multiple layers to which the present invention can be applied.
- FIG. 4 is a control flowchart illustrating a method of encoding video information according to the present invention.
- FIG. 5 is a control flowchart illustrating a decoding method of image information according to the present invention.
- first and second may be used to describe various components, but the components should not be limited by the terms. The terms are used only for the purpose of distinguishing one component from another.
- the first component may be referred to as the second component, and similarly, the second component may also be referred to as the first component.
- each component shown in the embodiments of the present invention is illustrated independently to represent different characteristic functions, which does not mean that each component is implemented as separate hardware or as a single software unit.
- that is, the components are listed separately for convenience of description, and at least two of the components may be combined into one component, or one component may be divided into a plurality of components each performing part of the function.
- Integrated and separate embodiments of the components are also included within the scope of the present invention without departing from the spirit of the invention.
- some of the components may not be essential components for performing the essential functions of the present invention but may be optional components merely for improving performance.
- the present invention may be implemented with only the components essential for realizing the essence of the invention, excluding components used merely for improving performance, and a structure including only such essential components, excluding the optional components used for improving performance, is also included within the scope of the present invention.
- FIG. 1 is a block diagram illustrating a configuration of an image encoding apparatus according to an embodiment.
- a scalable video encoding / decoding method or apparatus may be implemented by an extension of a general video encoding / decoding method or apparatus that does not provide scalability, and the block diagram of FIG. 1 illustrates an embodiment of an image encoding apparatus that may be the basis of a scalable video encoding apparatus.
- the image encoding apparatus 100 may include a motion predictor 111, a motion compensator 112, an intra predictor 120, a switch 115, a subtractor 125, a transform unit 130, a quantization unit 140, an entropy encoding unit 150, an inverse quantization unit 160, an inverse transform unit 170, an adder 175, a filter unit 180, and a reference image buffer 190.
- the image encoding apparatus 100 may perform encoding in an intra mode or an inter mode on an input image and output a bit stream.
- Intra prediction means intra-picture prediction, and inter prediction means inter-picture prediction.
- In the intra mode, the switch 115 is switched to intra, and in the inter mode, the switch 115 is switched to inter.
- the image encoding apparatus 100 may generate a prediction block for an input block of an input image and then encode a difference between the input block and the prediction block.
- the intra predictor 120 may generate a predictive block by performing spatial prediction using pixel values of blocks that are already encoded around the current block.
- the motion predictor 111 may obtain a motion vector by searching for a region that best matches an input block in the reference image stored in the reference image buffer 190 during the motion prediction process.
- the motion compensator 112 may generate a prediction block by performing motion compensation using the motion vector and the reference image stored in the reference image buffer 190.
- the subtractor 125 may generate a residual block by the difference between the input block and the generated prediction block.
- the transform unit 130 may output a transform coefficient by performing transform on the residual block.
- the quantization unit 140 may output the quantized coefficient by quantizing the input transform coefficient according to the quantization parameter.
- the entropy encoding unit 150 may entropy encode symbols according to a probability distribution, based on the values calculated by the quantization unit 140 or the encoding parameter values calculated in the encoding process, and thereby output a bitstream.
- the entropy encoding method is a method of receiving symbols having various values and expressing them as a decodable binary string while removing statistical redundancy.
- Encoding parameters are parameters necessary for encoding and decoding, and refer to information required when encoding or decoding an image; they may include information that can be inferred during encoding or decoding, as well as information, such as syntax elements, that is encoded by an encoder and transmitted to a decoder. Encoding parameters may include, for example, values or statistics such as intra/inter prediction modes, motion vectors, reference picture indexes, coded block patterns, presence or absence of residual signals, transform coefficients, quantized transform coefficients, quantization parameters, block sizes, and block partition information.
- the residual signal may mean the difference between the original signal and the prediction signal, a signal obtained by transforming that difference, or a signal obtained by transforming and quantizing that difference.
- the residual signal may be referred to as a residual block in block units.
- for entropy encoding, the entropy encoder 150 may store a table for performing entropy encoding, such as a variable length coding (VLC) table, and may perform entropy encoding using the stored VLC table. In addition, the entropy encoder 150 may derive a binarization method for a target symbol and a probability model for a target symbol/bin and then perform entropy encoding, such as context-adaptive binary arithmetic coding (CABAC), using the derived binarization method or probability model.
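- As a concrete illustration of this kind of variable length code, the sketch below implements unsigned Exp-Golomb coding, the ue(v) descriptor commonly used for header syntax elements in HEVC; it is a minimal illustrative example, not part of the disclosed encoder, and the function names are chosen here for convenience.

```python
def ue_encode(value: int) -> str:
    """Encode a non-negative integer as an unsigned Exp-Golomb (ue(v)) bit string."""
    assert value >= 0
    code_num = value + 1
    bits = bin(code_num)[2:]          # binary representation without the '0b' prefix
    prefix = '0' * (len(bits) - 1)    # leading zeros, one fewer than the length of bits
    return prefix + bits

def ue_decode(bitstring: str, pos: int = 0):
    """Decode one ue(v) value starting at bit position pos; return (value, new_pos)."""
    zeros = 0
    while bitstring[pos + zeros] == '0':
        zeros += 1
    code_num = int(bitstring[pos + zeros:pos + 2 * zeros + 1], 2)
    return code_num - 1, pos + 2 * zeros + 1

if __name__ == "__main__":
    for v in range(6):
        enc = ue_encode(v)
        dec, _ = ue_decode(enc)
        assert dec == v
        print(v, enc)
```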
- the quantized coefficients may be inversely quantized by the inverse quantizer 160 and inversely transformed by the inverse transformer 170.
- the inverse quantized and inverse transformed coefficients are added to the prediction block through the adder 175 and a reconstruction block can be generated.
- the reconstructed block passes through the filter unit 180, and the filter unit 180 may apply at least one of a deblocking filter, a sample adaptive offset (SAO), and an adaptive loop filter (ALF) to the reconstructed block or the reconstructed picture.
- the reconstructed block that has passed through the filter unit 180 may be stored in the reference image buffer 190.
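- To make the data flow among the modules of FIG. 1 concrete, the following is a minimal numerical sketch of one hybrid coding iteration (prediction, residual, transform, quantization, and the reconstruction path that feeds the reference buffer); the 8x8 block size, the orthonormal DCT-II, and the flat quantizer are assumptions chosen only for illustration and do not reproduce the actual syntax or tools of the disclosed apparatus.

```python
import numpy as np

def dct_matrix(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    m = np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def encode_block(block, prediction, qstep=8.0):
    """One hybrid-coding iteration: residual -> 2-D transform -> flat quantization."""
    d = dct_matrix(block.shape[0])
    residual = block.astype(float) - prediction
    coeffs = d @ residual @ d.T            # 2-D separable transform of the residual
    return np.round(coeffs / qstep)        # quantized transform levels (illustrative)

def reconstruct_block(levels, prediction, qstep=8.0):
    """Inverse quantization / inverse transform, then add the prediction (reference path)."""
    d = dct_matrix(levels.shape[0])
    residual = d.T @ (levels * qstep) @ d
    return np.clip(np.round(residual + prediction), 0, 255)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original = rng.integers(0, 256, size=(8, 8))
    prediction = np.full((8, 8), 128.0)    # e.g. a flat intra prediction
    levels = encode_block(original, prediction)
    recon = reconstruct_block(levels, prediction)
    print("mean absolute error:", np.abs(recon - original).mean())
```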
- FIG. 2 is a block diagram illustrating a configuration of an image decoding apparatus according to an embodiment.
- a scalable video encoding / decoding method or apparatus may be implemented by an extension of a general video encoding / decoding method or apparatus that does not provide scalability, and the block diagram of FIG. 2 illustrates an embodiment of an image decoding apparatus that may be the basis of a scalable video decoding apparatus.
- the image decoding apparatus 200 may include an entropy decoder 210, an inverse quantizer 220, an inverse transformer 230, an intra predictor 240, a motion compensator 250, a filter unit 260, and a reference picture buffer 270.
- the image decoding apparatus 200 may receive a bitstream output from the encoder, perform decoding in an intra mode or an inter mode, and output a reconstructed image, that is, a restored image.
- In the intra mode, the switch may be switched to intra, and in the inter mode, the switch may be switched to inter.
- the image decoding apparatus 200 may generate a reconstructed block, that is, a restored block, by obtaining a reconstructed residual block from the received bitstream, generating a prediction block, and adding the reconstructed residual block to the prediction block.
- the entropy decoder 210 may entropy decode the input bitstream according to a probability distribution to generate symbols including symbols in the form of quantized coefficients.
- the entropy decoding method is a method of generating each symbol by receiving a binary string.
- the entropy decoding method is similar to the entropy coding method described above.
- the quantized coefficients are inversely quantized by the inverse quantizer 220 and inversely transformed by the inverse transformer 230, and as a result of the inverse quantization / inverse transformation of the quantized coefficients, a reconstructed residual block may be generated.
- the intra predictor 240 may generate a predictive block by performing spatial prediction using pixel values of blocks that are already encoded around the current block.
- the motion compensator 250 may generate a prediction block by performing motion compensation using the motion vector and the reference image stored in the reference image buffer 270.
- the reconstructed residual block and the prediction block are added through the adder 255, and the added block passes through the filter unit 260.
- the filter unit 260 may apply at least one or more of the deblocking filter, SAO, and ALF to the reconstructed block or the reconstructed picture.
- the filter unit 260 outputs the reconstructed image, that is, the reconstructed image.
- the reconstructed picture may be stored in the reference picture buffer 270 to be used for inter prediction.
- components directly related to the decoding of an image, for example, the entropy decoder 210, the inverse quantizer 220, the inverse transformer 230, the intra predictor 240, the motion compensator 250, and the filter unit 260, may be distinguished from other components such as the reference picture buffer 270 and expressed as a decoder or a decoding unit.
- the image decoding apparatus 200 may further include a parsing unit (not shown) which parses information related to an encoded image included in a bitstream.
- the parser may include the entropy decoder 210 or may be included in the entropy decoder 210. Such a parser may also be implemented as one component of the decoder.
- FIG. 3 is a conceptual diagram schematically illustrating an embodiment of a scalable video coding structure using multiple layers to which the present invention can be applied.
- In FIG. 3, a GOP (Group of Pictures) represents a picture group, that is, a group of pictures.
- In order to transmit image data, a transmission medium is required, and its performance differs for each transmission medium according to various network environments.
- a scalable video coding method may be provided for application to such various transmission media or network environments.
- the scalable video coding method is a coding method that improves encoding / decoding performance by removing redundancy between layers by using texture information, motion information, and residual signals between layers.
- the scalable video coding method may provide various scalability in terms of spatial, temporal, and image quality according to ambient conditions such as a transmission bit rate, a transmission error rate, and a system resource.
- Scalable video coding may be performed using a multiple-layer structure to provide a bitstream applicable to various network situations.
- the scalable video coding structure may include a base layer in which image data is compressed and processed using a general image encoding method, and an enhancement layer in which image data is compressed and processed using both the encoding information of the base layer and a general image encoding method.
- Here, a layer means a set of images and bitstreams classified on the basis of spatial properties (e.g., image size), temporal properties (e.g., coding order, image output order, frame rate), image quality, complexity, and the like.
- the base layer may also be referred to as a reference layer, and the enhancement layer may also be referred to as an enhanced layer.
- the plurality of layers may have a dependency between each other.
- For example, the base layer may be defined by standard definition (SD), a frame rate of 15 Hz, and a bit rate of 1 Mbps; the first enhancement layer by high definition (HD), a frame rate of 30 Hz, and a bit rate of 3.9 Mbps; and the second enhancement layer by ultra high definition (4K-UHD), a frame rate of 60 Hz, and a bit rate of 27.2 Mbps.
- the format, frame rate, bit rate, etc. are exemplary and may be determined differently as necessary.
- the number of layers used is not limited to this embodiment and may be determined differently according to circumstances.
- the frame rate of the first enhancement layer HD may be reduced and transmitted at 15 Hz or less.
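- As a rough illustration of how an extractor or server might use a layer description like the one above, the sketch below selects the highest layer whose rate fits the available bandwidth; the layer table and the treatment of each bit rate as the total rate of its operation point are illustrative assumptions only.

```python
# Illustrative layer table loosely based on the example above
# (each bit rate is treated as the total rate of decoding up to that layer).
LAYERS = [
    {"name": "base (SD, 15 Hz)",      "bitrate_mbps": 1.0},
    {"name": "enh 1 (HD, 30 Hz)",     "bitrate_mbps": 3.9},
    {"name": "enh 2 (4K-UHD, 60 Hz)", "bitrate_mbps": 27.2},
]

def select_operation_point(available_mbps: float) -> int:
    """Return the index of the highest layer whose total rate fits the bandwidth.
    The base layer is always kept, even if it exceeds the budget (illustrative choice)."""
    best = 0
    for idx, layer in enumerate(LAYERS):
        if layer["bitrate_mbps"] <= available_mbps:
            best = idx
    return best

if __name__ == "__main__":
    for bw in (0.8, 2.0, 5.0, 40.0):
        idx = select_operation_point(bw)
        print(f"{bw:5.1f} Mbps -> decode up to layer {idx}: {LAYERS[idx]['name']}")
```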
- the scalable video coding method can provide temporal, spatial and image quality scalability by the method described above in the embodiment of FIG. 3.
- Scalable video coding has the same meaning as scalable video coding from a coding point of view and scalable video decoding from a decoding point of view.
- Scalable video coding (SVC), an extension of Advanced Video Coding (AVC), was developed to generate bitstreams having a wide range of bit rates while maintaining the highest possible compression efficiency.
- SVC bitstreams can be easily extracted in various ways to meet the characteristics and changes of various devices and networks.
- the SVC standard provides spatial, temporal, image quality (SNR) scalability.
- the bitstream including a plurality of layers is composed of Network Abstraction Layer (NAL) units that facilitate the adaptive transmission of video through a packet-switching network.
- In multi-view video, the relationship among the plurality of views is similar to the relationship among spatial layers in video supporting a plurality of layers.
- the scalability information of the bitstream is very important to effectively and efficiently convert the bitstream at all nodes in the content delivery path.
- temporal_id having a length of 3 bits indicates a temporal layer of the video bitstream
- reserved_one_5bits corresponds to an area for later indicating other layer information.
- the temporal layer refers to a layer of a temporally scalable bitstream composed of video coding layer (VCL) NAL units, and the temporal layer has a specific temporal_id value.
- the present invention relates to a method for effectively describing extraction information and scalability information of an image in a bitstream supporting a plurality of layers, and to signaling the same, and an apparatus for implementing the same.
- In the present invention, bitstreams are described as being divided into two types: a base type supporting only temporal scalability, and an extended type capable of having spatial/quality/view scalability in addition to temporal scalability.
- the first type of bitstream is for the bitstream supporting single layer video
- the second type is for the enhancement layer in HEVC based hierarchical video coding.
- an improvement scheme for representing scalability information of two bitstream types is proposed.
- 5 bits of reserved_one_5bits in the extended type may be used as a layer_id indicating an identifier of the scalable layer.
- nal_ref_flag is used to indicate a non-reference picture. This information indicates an approximate priority between the non-reference picture and the reference picture, but the use of nal_ref_flag for transmission is somewhat limited.
- a reference picture refers to a picture including samples that can be used for inter prediction when decoding subsequent pictures that follow in decoding order.
- a non-reference picture refers to a picture including samples that are not used for inter prediction when decoding a subsequent picture in decoding order.
- nal_ref_flag is a flag indicating whether the corresponding NAL unit is a non-reference picture or a reference picture in the entire bitstream at the time of encoding.
- If nal_ref_flag is 1, it means that the NALU includes a sequence parameter set (SPS), a picture parameter set (PPS), an adaptation parameter set (APS), or a slice of a reference picture; if nal_ref_flag is 0, it means that the NALU includes a slice containing some or all of a non-reference picture.
- a NALU having a nal_ref_flag value of 1 may include a slice of a reference picture, and nal_ref_flag may have a value of 1 for NALUs of a video parameter set (VPS), a sequence parameter set (SPS), and a picture parameter set (PPS).
- In the case of a non-reference picture, nal_ref_flag has a value of 0 for all VCL NALUs of the picture.
- nal_ref_flag of all pictures remaining after extraction becomes 1.
- temporal_id may be more effective in supporting adaptive transformation (extraction). That is, a bitstream including a desired temporal layer may be extracted using the total number of temporal layers included in the bitstream and the temporal_id value of the NALU header.
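- A minimal sketch of such temporal_id-based extraction is given below: non-VCL NALUs such as parameter sets are kept, and VCL NALUs are kept only when their temporal_id does not exceed the requested temporal layer. The NALU representation, the example nal_unit_type values, and the keep rule are assumptions made for illustration, not the normative extraction process.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Nalu:
    nal_unit_type: int
    temporal_id: int
    is_vcl: bool
    payload: bytes = b""

def extract_temporal_subset(nalus: List[Nalu], target_tid: int) -> List[Nalu]:
    """Keep non-VCL NALUs (e.g. parameter sets) and VCL NALUs with temporal_id <= target_tid."""
    return [n for n in nalus if not n.is_vcl or n.temporal_id <= target_tid]

if __name__ == "__main__":
    stream = [
        Nalu(nal_unit_type=33, temporal_id=0, is_vcl=False),   # e.g. a parameter set
        Nalu(nal_unit_type=1,  temporal_id=0, is_vcl=True),
        Nalu(nal_unit_type=1,  temporal_id=1, is_vcl=True),
        Nalu(nal_unit_type=1,  temporal_id=2, is_vcl=True),
    ]
    subset = extract_temporal_subset(stream, target_tid=1)
    print([n.temporal_id for n in subset])   # -> [0, 0, 1]
```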
- In addition, when a picture composed of NALUs including nal_ref_flag is decoded (reconstructed) and stored in a memory such as a decoded picture buffer (DPB), nal_ref_flag may also be used to indicate whether the picture is subsequently used as a reference picture. If nal_ref_flag is 1, it may indicate that the picture is subsequently used as a reference picture; if nal_ref_flag is 0, it may indicate that the picture is not subsequently used as a reference picture.
- Alternatively, the decoded picture may be marked as a reference picture when it is stored in the DPB, without determining on the basis of nal_ref_flag whether the corresponding NALU is a non-reference picture or a reference picture. In this case, even if the decoded picture is a non-reference picture but is marked as a reference picture, no problem arises, because when the next picture in decoding order is decoded, the picture will not be included in the reference picture list delivered in the slice header of that next picture.
- That is, since the reference picture list included in the slice header indicates whether a previously decoded picture is a reference picture or a non-reference picture, there is no problem in determining whether the decoded picture is a reference picture or a non-reference picture even if it is marked as a reference picture without a determination through nal_ref_flag.
- the present invention proposes to delete nal_ref_flag or change the semantics of nal_ref_flag in the NALU header.
- An embodiment related to deleting nal_ref_flag is as follows.
- a value of slice_ref_flag of 1 indicates that the slice is part of the reference picture, and 0 indicates that the slice is part of the non-reference picture.
- the syntax of the access unit delimiter may be as shown in Table 2.
- Similarly, a value of au_ref_flag of 1 may indicate that the access unit contains a reference picture, and 0 may indicate that it contains a non-reference picture.
- When nal_ref_flag, the 1-bit flag information indicating whether a picture is a non-reference picture or a reference picture in the entire bitstream at the time of encoding, is deleted, the determination as to whether a picture is a reference picture, which was performed through nal_ref_flag, may be performed through another process.
- For example, the decoded picture may be unconditionally marked as a reference picture in the decoded picture buffer (DPB); that is, it may be marked as a reference picture without determining whether the decoded picture is actually a reference picture.
- the slice header for the next picture of the decoded picture may be parsed, and it may be indicated whether the decoded picture is a reference picture or a non-reference picture based on reference picture information included in the slice header.
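- A minimal sketch of this marking behaviour, assuming a plain set of picture order counts as the reference picture information parsed from the slice header, is given below; the class and method names are illustrative, not part of the disclosed apparatus.

```python
class DecodedPicture:
    def __init__(self, poc: int):
        self.poc = poc                 # picture order count
        self.is_reference = False

class DecodedPictureBuffer:
    def __init__(self):
        self.pictures = []

    def store(self, picture: DecodedPicture):
        # Step 1: unconditionally mark the decoded picture as a reference picture.
        picture.is_reference = True
        self.pictures.append(picture)

    def update_from_slice_header(self, referenced_pocs: set):
        # Step 2: after parsing the next picture's slice header, re-mark stored
        # pictures that are not listed in its reference picture information.
        for picture in self.pictures:
            picture.is_reference = picture.poc in referenced_pocs

if __name__ == "__main__":
    dpb = DecodedPictureBuffer()
    for poc in (0, 1, 2):
        dpb.store(DecodedPicture(poc))
    # Suppose the next picture's slice header lists only POC 0 and 2 as references.
    dpb.update_from_slice_header({0, 2})
    print([(p.poc, p.is_reference) for p in dpb.pictures])  # [(0, True), (1, False), (2, True)]
```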
- Alternatively, nal_ref_flag may be deleted from the NALU header, and temporal_id may be used to indicate that the NALU belongs to a non-reference picture.
- For this purpose, the temporal_id may be "7", the maximum number of temporal layers included in the bitstream minus 1 (i.e., max_temporal_layers_minus1), or a predetermined value other than "0".
- Nal_ref_flag may be deleted from the NALU header, and reserved_one_5bits may be used as the priority_id component to indicate the NALU information of the non-reference picture.
- priority_id is an identifier indicating the priority of the corresponding NALU and is used to provide a bitstream extraction function based on priority, independently of the different spatial, temporal, and quality dimensions.
- Here, temporal_id Ta denotes the identifier of the highest temporal layer.
- The one bit used to signal nal_ref_flag may instead be used as one of the following.
- nal_unit_type may be a 7-bit signal, and the number of NALU types may be doubled.
- the temporal_id may be a 4-bit signal, and the maximum number of temporal layers may be doubled.
- the layer_id means an identifier of the scalable layer of the hierarchical bitstream and may be signaled by the reserved_one_5bits syntax element.
- If the one bit used for signaling nal_ref_flag is added to the 5 bits of reserved_one_5bits used to identify the scalable layer, the layer_id may be a 6-bit signal; using 6 bits, 64 scalable layers can be identified.
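- As an illustration of this option, the sketch below parses a two-byte NAL unit header laid out as a 1-bit forbidden_zero_bit, a 6-bit nal_unit_type, a 6-bit layer_id, and a 3-bit temporal_id; this bit layout is an assumption chosen to match the 6-bit layer_id option described here, not a normative header definition.

```python
def parse_nal_unit_header(header: bytes) -> dict:
    """Parse a hypothetical 16-bit NAL unit header without nal_ref_flag."""
    if len(header) < 2:
        raise ValueError("the assumed NAL unit header is two bytes long")
    value = (header[0] << 8) | header[1]
    return {
        "forbidden_zero_bit": (value >> 15) & 0x1,
        "nal_unit_type":      (value >> 9) & 0x3F,   # 6 bits
        "layer_id":           (value >> 3) & 0x3F,   # 6 bits -> up to 64 layers
        "temporal_id":        value & 0x7,           # 3 bits
    }

if __name__ == "__main__":
    # 0 | 000001 | 000010 | 011 -> nal_unit_type=1, layer_id=2, temporal_id=3
    example = bytes([0b00000010, 0b00010011])
    print(parse_nal_unit_header(example))
```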
- If nal_ref_flag is not deleted from the NALU header, the semantics of nal_ref_flag may be modified as follows.
- If nal_ref_flag is 0, it indicates that the NALU contains only slices of non-reference pictures; if nal_ref_flag is 1, it indicates that the NALU may include slices of either reference pictures or non-reference pictures.
- the video parameter set (VPS) includes the most basic information required for decoding an image and may include contents present in the existing SPS.
- the video parameter set may include information about sub-layers, which are temporal layers for temporal scalability, and information about the number of layers for spatial, quality, and view scalability. That is, the video parameter set may include information about a plurality of layers, i.e., syntax for the HEVC extension.
- video_parameter_set_id means an identifier of a video parameter set, and can be referenced in a sequence parameter set (SPS), supplemental enhancement information (SEI), and access unit delimiters.
- When priority_id_flag is 1, reserved_one_5bits may be used in the same manner as priority_id of the SVC standard; when priority_id_flag is 0, reserved_one_5bits is used as layer_id.
- If extension_info_flag is 0, it indicates that the bitstream conforms to the single-layer standard of HEVC; if it is 1, it indicates an enhancement layer for scalability support (when the HEVC extension is supported), and information related to the layer is provided.
- vps_id syntax element may be added to the SPS.
- the added SPS syntax of vps_id is shown in Table 4.
- the syntax deleted in Table 4 is indicated by a strikethrough line through the middle of the syntax.
- vps_id indicates an identifier for identifying a video parameter set referred to by the SPS, and vps_id may have a range of 0 to X.
- the slice header includes index information of a picture parameter set referred to by the slice, and the picture parameter set includes index information of a sequence parameter set referenced by the picture.
- the sequence parameter set includes information about the video parameter set referenced by the sequence. In this way, parsing information about a parameter set and referring to the parsed corresponding parameter set information is called activation.
- When extracting a part of a sub-layer (temporal layer) from a bitstream including a single layer, an extractor needs to analyze (parse) the NALU header and a plurality of parameter sets.
- the extractor must parse the upper parameter set sequentially from the slice header. This means that the extractor must understand all the syntax elements of the parameter sets and slice header.
- Signaling the activation of a parameter set means informing the extractor of which parameter set is activated, so that the extractor does not need to analyze the slice header and the associated picture parameter set (PPS).
- the extractor can reduce the burden of analyzing all slice headers and associated PPS.
- the video parameter set may be updated.
- One of the following methods may be used so that the extractor knows the currently active VPS and its associated SPS or PPS without analyzing the slice header.
- vps_id, sps_id, and pps_id may be included in an access unit delimiter.
- vps_id, sps_id, and pps_id respectively indicate identifiers of a video parameter set, a sequence parameter set, and a picture parameter set used for NALUs in an associated AU.
- In order to indicate whether each identifier is present in the access unit delimiter, vps_id_present_flag, sps_id_present_flag, and pps_id_present_flag are used.
- the syntax of the proposed access unit delimiter is shown in Table 5.
- vps_id may be included in an access unit delimiter except for sps_id and pps_id as shown in Table 6.
- the SEI message includes syntax for indicating the presence or absence of vps_id, sps_id, pps_id indicating identifiers of the video parameter set, sequence parameter set, picture parameter set used for NALUs in the associated AU.
- vps_id_present_flag, sps_id_present_flag, pps_id_present_flag syntax may be used to indicate the existence of each identifier, and the SEI syntax is shown in Table 7 below.
- sps_id and vps_id may be included in an SEI message to inform activation.
- the sps_id and vps_id included in the SEI message may include sps_id and vps_id referenced by the video coding layer NALU of an access unit associated with the corresponding SEI message. Accordingly, sps_id and vps_id may represent information of a parameter set that may be activated.
- vps_id represents a video_parameter_set_id of a video parameter set currently activated.
- the vps_id value may have a value of 0 to 15.
- When sps_id_present_flag has a value of 1, it indicates that the sequence_parameter_set_id of the currently activated sequence parameter set is included in the corresponding SEI message; when sps_id_present_flag has a value of 0, it indicates that the sequence_parameter_set_id of the activated sequence parameter set is not included in the corresponding SEI message.
- sps_id represents the sequence_parameter_set_id of the currently activated sequence parameter set.
- sps_id may have a value of 0 to 31, more specifically 0 to 15.
- A psr_extension_flag of 0 indicates that the parameter set reference SEI message extension syntax elements are not included in the parameter set reference SEI message; a psr_extension_flag of 1 indicates that the parameter set reference SEI message extension syntax elements are included in the parameter set reference SEI message and are used.
- psr_extension_length represents the length of psr_extension_data.
- psr_extension_length may have a value in the range of 0 to 256, and psr_extension_data_byte may have any value.
- In another method, pps_id is excluded, and one vps_id together with one or more sps_ids may be included in the SEI message and signaled.
- vps_id represents a video_parameter_set_id of a video parameter set currently activated.
- vps_id may have a value of 0 to 15.
- num_reference_sps represents the number of sequence parameter sets referring to the currently active vps_id.
- sps_id (i) represents the sequence_parameter_set_id of the currently activated sequence parameter set, and sps_id may have a value of 0 to 31, more specifically 0 to 15.
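- A rough sketch of how a parser could read such a parameter set reference SEI payload is given below; the payload layout (a ue(v)-coded vps_id, the number of referenced SPSs, and the list of sps_id values) is an assumption consistent with the description above, since the exact table syntax is not reproduced here.

```python
class BitReader:
    """Minimal MSB-first bit reader with an Exp-Golomb ue(v) helper."""
    def __init__(self, data: bytes):
        self.bits = ''.join(f'{b:08b}' for b in data)
        self.pos = 0

    def ue(self) -> int:
        zeros = 0
        while self.bits[self.pos + zeros] == '0':
            zeros += 1
        value = int(self.bits[self.pos + zeros:self.pos + 2 * zeros + 1], 2) - 1
        self.pos += 2 * zeros + 1
        return value

def parse_parameter_set_reference_sei(payload: bytes) -> dict:
    """Assumed layout: vps_id, num_reference_sps, then num_reference_sps sps_id values."""
    r = BitReader(payload)
    info = {"vps_id": r.ue(), "sps_ids": []}
    num_reference_sps = r.ue()
    for _ in range(num_reference_sps):
        info["sps_ids"].append(r.ue())
    return info

if __name__ == "__main__":
    # vps_id=0, num_reference_sps=2, sps_ids=[0, 1] coded as ue(v): 1 011 1 010 -> 0xBA
    print(parse_parameter_set_reference_sei(bytes([0xBA])))
```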
- Another method for activation signaling of a video parameter set is to include information indicating vps_id, sps_id, and pps_id in a buffering period SEI message.
- Table 11 shows a syntax including vps_id_present_flag, sps_id_present_flag, and pps_id_present_flag for indicating whether or not the identifier vps_id, sps_id, or pps_id is present.
- vps_id may be included in the buffering period SEI message to signal activation of a parameter set.
- Another method for activation signaling of a parameter set is to include information indicating vps_id, sps_id, and pps_id in a recovery point SEI message.
- Table 13 shows a syntax including vps_id_present_flag, sps_id_present_flag, and pps_id_present_flag for indicating whether the identifier of vps_id, sps_id, and pps_id is present.
- vps_id or sps_id may be included in an intra random access point (IRAP) access unit.
- Through the signaling methods described above, the extractor can find the values of vps_id, sps_id, and pps_id and manage one or more VPSs/SPSs/PPSs in order to extract a bitstream.
- the decoding apparatus or the decoding unit that performs decoding may find out the vps_id, sps_id, and pps_id values through the signaling method, and activate the corresponding parameter set to decode AUs associated with the parameter set.
- extension_info () of the VPS and a new SEI message are proposed to represent and signal information about the scalable layers.
- the following information may be signaled.
- Whether layer_id conveys a priority value of the layer may be signaled.
- the spatial layer (identified by the dependency_id value), the image quality layer (identified by the quality_id value), and the viewpoints (identified by the view_id value), etc. may be signaled corresponding to each layer_id value
- the temporal layer may be identified by the temporal_id of the NALU header.
- the region of the video associated with the layer_id may be signaled by the region_id.
- dependency information among scalable layers, bitrate information of each scalable layer, and quality information of each scalable layer may be signaled.
- extension_info () syntax is shown in Table 15.
- num_frame_sizes_minus1 plus 1 indicates the maximum number of size information entries for the different types of pictures included in the encoded video sequence (for example, pic_width_in_luma_samples [i], pic_height_in_luma_samples [i], pic_cropping_flag [i], pic_crop_left_offset [i], pic_crop_top_offset [i], pic_crop_bottom_offset [i]).
- the num_frame_sizes_minus1 value may have a range of 0 to X.
- Different types of pictures may include, for example, pictures having different resolutions.
- num_rep_formats_minus1 plus 1 represents the maximum number of different types of bit depths and chroma formats (e.g., bit_depth_luma_minus8 [i], bit_depth_chroma_minus8 [i], and chroma_format_idc [i] values) included in the encoded video sequence.
- the num_rep_formats_minus1 value has a range of 0 to X.
- bit_depth_luma_minus8 [i], bit_depth_chroma_minus8 [i], and chroma_format_idc [i] represent the i-th bit_depth_luma_minus8, bit_depth_chroma_minus8, and chroma_format_idc values of the encoded video sequence.
- num_layers_minus1 indicates the number of scalable layers possible in the bitstream.
- When dependency_id_flag is 1, it indicates that there is at least one dependency_id value associated with the layer_id value.
- When quality_id_flag is 1, it indicates that there is at least one quality_id value associated with the layer_id value.
- When view_id_flag is 1, it indicates that there is at least one view_id value associated with the layer_id value.
- When region_id_flag is 1, it indicates that there is at least one region_id value associated with the layer_id value.
- When layer_dependency_info_flag is 1, it indicates that dependency information of the scalable layer is provided.
- frame_size_idx [i] indicates an index to a set of frame sizes applied to the layer whose layer_id value is i.
- frame_size_idx [i] has a value ranging from 0 to X.
- rep_format_idx [i] indicates an index into a set of bit depths and chroma formats applied to a layer having a layer_id value of i.
- rep_format_idx [i] has a value in the range of 0 to X.
- When one_dependency_id_flag [i] is 1, it indicates that only one dependency_id value is associated with the layer_id value i; when one_dependency_id_flag [i] is 0, it indicates that two or more dependency_id values are associated with the layer_id value i.
- dependency_id [i] indicates the dependency_id value associated with the layer_id value i.
- dependency_id_min [i] and dependency_id_max [i] indicate the minimum dependency_id value and the maximum dependency_id value associated with the layer_id value i, respectively.
- When one_quality_id_flag [i] is 1, it indicates that only one quality_id value is associated with the layer_id value i; when one_quality_id_flag [i] is 0, it indicates that two or more quality_id values are associated with the layer_id value i.
- quality_id [i] indicates the quality_id value associated with the layer_id value i.
- quality_id_min [i] and quality_id_max [i] indicate the minimum quality_id value and the maximum quality_id value associated with the layer_id value i, respectively.
- When one_view_id_flag [i] is 1, it indicates that only one view_id value is associated with the layer_id value i; when it is 0, it indicates that two or more view_id values are associated with the layer_id value i.
- view_id [i] indicates the view_id value associated with the layer_id value i.
- When depth_flag [i] is 1, it indicates that the current scalable layer having the layer_id value i includes depth information of a 3D video bitstream.
- view_id_min [i] and view_id_max [i] indicate the minimum view_id value and the maximum view_id value associated with the layer_id value i, respectively.
- num_regions_minus1 plus 1 indicates the number of regions associated with the layer_id value i.
- region_id [j] indicates the identifier of the region j associated with the layer_id value i.
- num_directly_dependent_layers [i] indicates the number of scalable layers (layers necessary for forming a prediction signal during decoding) to which the current scalable layer i is directly associated.
- directly_dependent_layer_id_delta_minus1 [i] [j] plus 1 represents the difference between layer_id [i], the layer identifier of the current scalable layer, and the layer identifier of the j-th scalable layer to which the current scalable layer is directly related.
- the layer identifier of the j-th directly related scalable layer is equal to (layer_id [i] - directly_dependent_layer_id_delta_minus1 [i] [j] - 1).
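- Written as code, the relationship above can be used to recover the list of directly dependent layer identifiers from the signaled deltas; the function name and the example values below are illustrative.

```python
def directly_dependent_layer_ids(layer_id_i: int, deltas_minus1: list) -> list:
    """layer id of the j-th directly dependent layer = layer_id[i] - (delta_minus1[j] + 1)."""
    return [layer_id_i - (d + 1) for d in deltas_minus1]

if __name__ == "__main__":
    # A layer with layer_id 5 signaling deltas_minus1 = [0, 2] directly depends on layers 4 and 2.
    print(directly_dependent_layer_ids(5, [0, 2]))   # -> [4, 2]
```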
- extension_info () syntax according to another embodiment is shown in Table 16.
- pic_width_in_luma_samples [i], pic_height_in_luma_samples [i], bit_depth_luma_minus8 [i], bit_depth_chroma_minus8 [i], and chroma_format_idc [i] may be signaled with information about different images, that is, pictures having different resolutions.
- num_layers_minus1 indicates the number of scalable layers that can be provided in the bitstream.
- A bitrate_info_flag of 1 indicates that bit rate information for each scalable layer is provided.
- A quality_info_flag of 1 indicates that quality value information for each scalable layer is provided.
- A quality_type_flag of 1 indicates that information on the type of the quality values is provided.
- max_bitrate [i] indicates the maximum bit rate of the scalable layer whose layer_id value is i
- average_bitrate [i] indicates the average bit rate of the scalable layer whose layer_id value is i.
- quality_value [i] represents the quality value of scalable layer i.
- QualityTypeUriIdx indicates the QualityTypeUriIdx-th byte of a null-terminated string encoded in UTF-8 characters, the string representing a URI (universal resource identifier) that includes a representation of the type of the quality values.
- As methods of indicating the relationship between layer_id and the scalability dimension ID in a bitstream supporting a plurality of layers, a first method of mapping layer_id to the scalability dimension ID is described below; a second method, described afterwards, allocates the bits of layer_id to scalability dimension types.
- Here, a dimension type refers to a type of scalability, such as spatial scalability or quality scalability, and a dimension ID refers to an index to a specific layer that a given dimension type may have.
- In general, a specific layer of a particular dimension (for example, in the case of temporal scalability in a single-layer bitstream, temporal sub-layer 3) directly refers to the next lower layer of that dimension (for example, the next lower sub-layer); for example, spatial layer 2 directly refers to the next lower spatial layer 1.
- Table 18 shows the syntax that maps the layer_id and the scalability dimension ID using the first method.
- the meaning of the syntax of Table 18 is as follows.
- If all_default_dependency_flag is 0, it indicates that not all layer dimensions have default dependency; in that case, the following num_default_dim_minus1 is signaled.
- num_default_dim_minus1 indicates the number of dimensions having a default dependency.
- dimension [j] specifies the type of the layer dimension having the default dependency; that is, the type of each such dimension is signaled one by one while incrementing j up to the number of dimensions having a default dependency.
- the direct reference of layer C to layer B means that in order to decode layer C, the decoder must use the information of layer B (decoded or not decoded). However, if layer B directly uses layer A's information, layer C is not considered to refer directly to layer A.
- Table 19 shows a syntax for allocating bits of layer_id to a scalability dimension type using a second method and signaling a length of the allocated dimension type.
- num_dimensions_minus1 included in Table 19 indicates the number of layer dimensions present in the NALU header; that is, the number of layer dimensions present in the NALU header is identified, and the dimension type present for each layer dimension and the number of bits allocated to that dimension type are determined.
- Tables 20 and 21 show syntax different from Tables 18 and 19.
- Table 20 shows another syntax indicating a default reference when using the first method
- Table 21 shows another syntax indicating a default reference when using the second method.
- the new syntax default_dependency_flag [i] included in Table 20 and Table 21 indicates whether dimension type i uses a default reference.
- That is, after num_dimensions_minus1 and dimension_type [i] are signaled, whether the corresponding dimension type uses a default dependency is signaled; if it does not, information about the layer directly referenced by the corresponding layer is signaled.
- Table 22 shows the dimension types according to the present invention.
- Compared with the existing dimension types, types representing dimension types 4 and 5, namely a priority ID and a region ID, have been added.
- dimension_type [i] [j] can basically have a value between 0 and 5. Other values can be defined later, and the decoder can ignore the value of dimension_type [i] [j] if it is not between 0 and 5.
- If dimension_type is the priority ID, the corresponding dimension_id indicates the ID of a priority layer of the bitstream, as in the SVC standard.
- If dimension_type is the region ID, the corresponding dimension_id indicates the ID of a specific region of the bitstream.
- the particular region may be one or more spatial-temporal segments in the bitstream.
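- The second method described above, in which the bits of layer_id are divided among the signaled dimension types, can be sketched as follows; the particular dimension names, bit widths, and packing order are assumptions chosen only for illustration.

```python
def split_layer_id(layer_id: int, dims):
    """Split layer_id bits (MSB first) into per-dimension IDs.

    dims: list of (dimension_type, bit_length) pairs as signaled, e.g. in a table like Table 19.
    """
    total = sum(length for _, length in dims)
    ids = {}
    shift = total
    for dim_type, length in dims:
        shift -= length
        ids[dim_type] = (layer_id >> shift) & ((1 << length) - 1)
    return ids

if __name__ == "__main__":
    # Assume 6 bits of layer_id split into 3 bits of dependency_id, 2 bits of quality_id,
    # and 1 bit of view_id (purely illustrative widths).
    dims = [("dependency_id", 3), ("quality_id", 2), ("view_id", 1)]
    print(split_layer_id(0b101101, dims))  # {'dependency_id': 5, 'quality_id': 2, 'view_id': 1}
```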
- FIG. 4 is a control flowchart illustrating a method of encoding video information according to the present invention.
- the encoding apparatus encodes a network abstraction layer (NAL) unit including information related to an image (S401).
- the NAL unit header of the NAL unit does not include information indicating whether the NAL unit includes a slice including at least some or all of the non-reference picture.
- the NAL unit header includes layer identification information for identifying the scalable layer in the bitstream supporting the scalable layer.
- the bit that was used to signal the information, now not included in the NAL unit header, indicating whether the NAL unit includes a slice containing at least some or all of a non-reference picture may be used to signal the layer identification information.
- the NAL unit may include information on various parameter sets necessary for decoding an image.
- the encoding apparatus may encode a Supplemental Enhancement Information (SEI) message including information on an activated parameter set into an independent NAL unit.
- the information about the parameter set to be activated may include at least one of information indexing the video parameter set to be activated and information indexing the sequence parameter set to be activated.
- the information on the parameter set to be activated may include information indexing the video parameter set to be activated, information indicating the number of sequence parameter sets referring to the video parameter set to be activated, and information indexing the sequence parameter sets.
- the information about this parameter set may be used when the decoding apparatus extracts a sub layer providing temporal scalability.
- the decoding apparatus or the decoding unit that performs the decoding may use the information on the parameter set when activating the parameter set necessary for decoding the video coding layer NALU.
- the encoding apparatus transmits the NAL unit including the information related to the encoded image in the bitstream (S402).
- FIG. 5 is a control flowchart illustrating a decoding method of image information according to the present invention.
- the decoding apparatus receives a NAL unit including information related to an image encoded through a bitstream (S501).
- the decoding apparatus parses the header of the NAL unit and the NAL payload (S502). Parsing of the image information may be performed by an entropy decoding unit or a separate parsing unit.
- the decoding apparatus may obtain various information included in the NAL unit header and the NAL payload through parsing.
- the NAL unit header may include layer identification information for identifying a scalable layer in a bitstream supporting the scalable layer, and may not include 1-bit flag information indicating whether the NAL unit is a non-reference picture or a reference picture in the entire bitstream at the time of encoding.
- the bit that was used to signal the information, now not included in the NAL unit header, indicating whether the NAL unit includes a slice containing at least some or all of a non-reference picture may be used to signal the layer identification information.
- the decoding apparatus may obtain information about a parameter set necessary for decoding the NALU associated with the corresponding SEI message included in the SEI message through parsing.
- the information on the parameter set to be activated may include at least one of information indexing the video parameter set to be activated and information indexing the sequence parameter set to be activated.
- the information on the parameter set to be activated may include information indexing the video parameter set to be activated, information indicating the number of sequence parameter sets referring to the video parameter set to be activated, and information indexing the sequence parameter sets.
- the information about this parameter set may be used when the decoding apparatus extracts a sub layer providing temporal scalability.
- the information on the parameter set may be used when decoding the bitstream or during session negotiation (eg, session negotiation during streaming on an IP network).
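- To tie these steps together, the sketch below shows how a decoding apparatus or extractor could keep the received parameter sets and activate them from the identifiers carried in a parameter set reference SEI message, without parsing slice headers; the container class and method names are illustrative assumptions, not the disclosed apparatus.

```python
class ParameterSetManager:
    """Illustrative bookkeeping of stored and active parameter sets."""
    def __init__(self):
        self.vps = {}          # vps_id -> parsed VPS payload
        self.sps = {}          # sps_id -> parsed SPS payload
        self.active = {"vps_id": None, "sps_ids": []}

    def store_vps(self, vps_id, payload):
        self.vps[vps_id] = payload

    def store_sps(self, sps_id, payload):
        self.sps[sps_id] = payload

    def activate_from_sei(self, sei_info):
        """sei_info: e.g. {'vps_id': 0, 'sps_ids': [0, 1]} parsed from the SEI message."""
        if sei_info["vps_id"] not in self.vps:
            raise KeyError("referenced VPS has not been received")
        missing = [s for s in sei_info["sps_ids"] if s not in self.sps]
        if missing:
            raise KeyError(f"referenced SPS {missing} have not been received")
        self.active = {"vps_id": sei_info["vps_id"], "sps_ids": list(sei_info["sps_ids"])}

if __name__ == "__main__":
    psm = ParameterSetManager()
    psm.store_vps(0, b"...")
    psm.store_sps(0, b"...")
    psm.store_sps(1, b"...")
    psm.activate_from_sei({"vps_id": 0, "sps_ids": [0, 1]})
    print(psm.active)
```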
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Control Of El Displays (AREA)
Claims (18)
- An image information decoding method comprising: receiving a bitstream including a Network Abstraction Layer (NAL) unit that includes information related to an encoded image; and parsing a NAL unit header of the NAL unit, wherein the NAL unit header does not include 1-bit flag information indicating whether the NAL unit is a non-reference picture or a reference picture in the entire bitstream at the time of encoding.
- The image information decoding method of claim 1, wherein the NAL unit header includes layer identification information for identifying a scalable layer in a bitstream supporting the scalable layer.
- The image information decoding method of claim 2, wherein the 1 bit that was used to signal the 1-bit flag information indicating whether the NAL unit is a non-reference picture or a reference picture in the entire bitstream at the time of encoding is used to signal the layer identification information.
- An image decoding method comprising: decoding a received picture; marking the decoded picture as a reference picture in a decoded picture buffer (DPB); parsing a slice header for a next picture of the decoded picture; and marking whether the decoded picture is a reference picture or a non-reference picture based on reference picture information included in the slice header.
- An image information decoding method comprising: receiving a Supplemental Enhancement Information (SEI) message including information on a parameter set to be activated; and parsing the information on the parameter set.
- The image information decoding method of claim 5, wherein the information on the parameter set to be activated includes at least one of information indexing a video parameter set to be activated and information indexing a sequence parameter set to be activated.
- The image information decoding method of claim 5, wherein the information on the parameter set to be activated includes information indexing a video parameter set to be activated, information indicating the number of sequence parameter sets referring to the video parameter set to be activated, and information indexing the sequence parameter sets.
- The image information decoding method of claim 5, wherein the information on the parameter set is used when extracting a sub-layer providing temporal scalability.
- The image information decoding method of claim 5, wherein the information on the parameter set is used to reference (activate) parameter sets for decoding a video coding layer NALU.
- An image information decoding apparatus comprising a parsing unit that receives a Network Abstraction Layer (NAL) unit including information related to an encoded image and parses a NAL unit header of the NAL unit, wherein the NAL unit header does not include 1-bit flag information indicating whether the NAL unit is a non-reference picture or a reference picture in the entire bitstream at the time of encoding.
- The image information decoding apparatus of claim 10, wherein the NAL unit header includes layer identification information for identifying a scalable layer in a bitstream supporting the scalable layer.
- The image information decoding apparatus of claim 10, wherein the bit that was used to signal 1-bit flag information indicating whether the NAL unit is a non-reference picture or a reference picture in the entire bitstream at the time of encoding is used to signal the layer identification information.
- An image decoding apparatus comprising: a parsing unit that parses a slice header of a received picture; a decoding unit that decodes the received picture; and a decoded picture buffer (DPB) that stores the decoded picture, wherein the decoded picture is marked as a reference picture in the DPB, and whether the decoded picture is a reference picture or a non-reference picture is re-marked based on reference picture information included in a slice header for a next picture of the decoded picture.
- An image information decoding apparatus comprising a parsing unit that receives a Supplemental Enhancement Information (SEI) message including information on a parameter set to be activated and parses the information on the parameter set.
- The image information decoding apparatus of claim 14, wherein the information on the parameter set to be activated includes at least one of information indexing a video parameter set to be activated and information indexing a sequence parameter set to be activated.
- The image information decoding apparatus of claim 14, wherein the information on the parameter set to be activated includes information indexing a video parameter set to be activated, information indicating the number of sequence parameter sets referring to the video parameter set to be activated, and information indexing the sequence parameter sets.
- The image information decoding apparatus of claim 14, wherein the information on the parameter set is used when extracting a sub-layer providing temporal scalability.
- The image information decoding apparatus of claim 14, wherein the information on the parameter set is used to reference (activate) parameter sets for decoding a video coding layer NALU.
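The marking behaviour recited in claims 4 and 13 can be sketched as follows; the class and field names (DecodedPictureBuffer, reference_poc_list) are assumptions made for this illustration, not terms used by the application.

```python
class DecodedPictureBuffer:
    def __init__(self):
        self.pictures = {}          # poc -> marking string

    def store(self, poc):
        # Initially every decoded picture is marked as a reference picture.
        self.pictures[poc] = "reference"

    def remark_from_slice_header(self, slice_header):
        # Reference picture information in the next picture's slice header tells us
        # which previously decoded pictures are actually used for reference.
        referenced = set(slice_header["reference_poc_list"])
        for poc in self.pictures:
            self.pictures[poc] = "reference" if poc in referenced else "non-reference"

dpb = DecodedPictureBuffer()
dpb.store(poc=0)                                   # decode picture 0, mark as reference
next_slice_header = {"reference_poc_list": []}     # picture 1 does not reference picture 0
dpb.remark_from_slice_header(next_slice_header)
print(dpb.pictures)                                # {0: 'non-reference'}
```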
Priority Applications (23)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810384111.5A CN108769705B (zh) | 2012-04-16 | 2013-04-16 | 视频解码方法和设备、视频编码方法和设备 |
PL13777676T PL2840788T3 (pl) | 2012-04-16 | 2013-04-16 | Urządzenie do dekodowania wideo |
EP21167414.8A EP3866472A1 (en) | 2012-04-16 | 2013-04-16 | Video decoding apparatus and video encoding apparatus |
CN201810384705.6A CN108769687B (zh) | 2012-04-16 | 2013-04-16 | 视频解码方法和设备、视频编码方法和设备 |
US14/391,061 US10602160B2 (en) | 2012-04-16 | 2013-04-16 | Image information decoding method, image decoding method, and device using same |
JP2015506892A JP5933815B2 (ja) | 2012-04-16 | 2013-04-16 | 映像情報デコーディング方法、映像デコーディング方法及びそれを利用する装置 |
DK13777676.1T DK2840788T3 (da) | 2012-04-16 | 2013-04-16 | Videoafkodningsapparat |
SI201331581T SI2840788T1 (sl) | 2012-04-16 | 2013-04-16 | Naprava za dekodiranje videa |
LT13777676T LT2840788T (lt) | 2012-04-16 | 2013-04-16 | Vaizdo dekodavimo aparatas |
EP13777676.1A EP2840788B1 (en) | 2012-04-16 | 2013-04-16 | Video decoding apparatus |
CN201380025824.8A CN104303503B (zh) | 2012-04-16 | 2013-04-16 | 图像信息解码方法、图像解码方法和使用所述方法的装置 |
EP19183291.4A EP3570546A1 (en) | 2012-04-16 | 2013-04-16 | Video coding and decoding with marking of a picture as non-reference picture or reference picture |
ES13777676T ES2748463T3 (es) | 2012-04-16 | 2013-04-16 | Aparato de decodificación de vídeo |
CN201810384113.4A CN108769707B (zh) | 2012-04-16 | 2013-04-16 | 视频编码和解码方法、存储和生成位流的方法 |
CN201810384112.XA CN108769706B (zh) | 2012-04-16 | 2013-04-16 | 视频解码方法和设备、视频编码方法和设备 |
CN201810384281.3A CN108769713B (zh) | 2012-04-16 | 2013-04-16 | 视频解码方法和设备、视频编码方法和设备 |
RS20191215A RS59596B1 (sr) | 2012-04-16 | 2013-04-16 | Uređaj za video dekodiranje |
HRP20191726TT HRP20191726T1 (hr) | 2012-04-16 | 2019-09-24 | Aparat za dekodiranje video zapisa |
CY20191101019T CY1122257T1 (el) | 2012-04-16 | 2019-09-27 | Συσκευη αποκωδικοποιησης βιντεο |
US16/784,714 US10958919B2 (en) | 2012-04-16 | 2020-02-07 | Image information decoding method, image decoding method, and device using same |
US17/144,409 US11483578B2 (en) | 2012-04-16 | 2021-01-08 | Image information decoding method, image decoding method, and device using same |
US17/950,248 US12028538B2 (en) | 2012-04-16 | 2022-09-22 | Image information decoding method, image decoding method, and device using same |
US18/417,415 US20240155140A1 (en) | 2012-04-16 | 2024-01-19 | Image information decoding method, image decoding method, and device using same |
Applications Claiming Priority (16)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2012-0038870 | 2012-04-16 | ||
KR20120038870 | 2012-04-16 | ||
KR20120066606 | 2012-06-21 | ||
KR10-2012-0066606 | 2012-06-21 | ||
KR10-2012-0067925 | 2012-06-25 | ||
KR20120067925 | 2012-06-25 | ||
KR20120071933 | 2012-07-02 | ||
KR10-2012-0071933 | 2012-07-02 | ||
KR10-2012-0077012 | 2012-07-16 | ||
KR20120077012 | 2012-07-16 | ||
KR1020120108925A KR20130116782A (ko) | 2012-04-16 | 2012-09-28 | 계층적 비디오 부호화에서의 계층정보 표현방식 |
KR10-2012-0108925 | 2012-09-28 | ||
KR10-2012-0112598 | 2012-10-10 | ||
KR1020120112598A KR20130116783A (ko) | 2012-04-16 | 2012-10-10 | 계층적 비디오 부호화에서의 계층정보 표현방식 |
KR1020130041862A KR101378861B1 (ko) | 2012-04-16 | 2013-04-16 | 영상 정보 디코딩 방법, 영상 디코딩 방법 및 이를 이용하는 장치 |
KR10-2013-0041862 | 2013-04-16 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/391,061 A-371-Of-International US10602160B2 (en) | 2012-04-16 | 2013-04-16 | Image information decoding method, image decoding method, and device using same |
US16/784,714 Continuation US10958919B2 (en) | 2012-04-16 | 2020-02-07 | Image information decoding method, image decoding method, and device using same |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013157826A1 true WO2013157826A1 (ko) | 2013-10-24 |
Family
ID=49635785
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2013/003204 WO2013157826A1 (ko) | 2012-04-16 | 2013-04-16 | 영상 정보 디코딩 방법, 영상 디코딩 방법 및 이를 이용하는 장치 |
PCT/KR2013/003206 WO2013157828A1 (ko) | 2012-04-16 | 2013-04-16 | 복수의 계층을 지원하는 비트스트림의 디코딩 방법 및 이를 이용하는 장치 |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2013/003206 WO2013157828A1 (ko) | 2012-04-16 | 2013-04-16 | 복수의 계층을 지원하는 비트스트림의 디코딩 방법 및 이를 이용하는 장치 |
Country Status (16)
Country | Link |
---|---|
US (10) | US10595026B2 (ko) |
EP (7) | EP3340630B1 (ko) |
JP (30) | JP5933815B2 (ko) |
KR (24) | KR20130116782A (ko) |
CN (12) | CN108769710B (ko) |
CY (1) | CY1122257T1 (ko) |
DK (1) | DK2840788T3 (ko) |
ES (1) | ES2748463T3 (ko) |
HR (1) | HRP20191726T1 (ko) |
HU (1) | HUE045980T2 (ko) |
LT (1) | LT2840788T (ko) |
PL (2) | PL3340630T3 (ko) |
PT (1) | PT2840788T (ko) |
RS (1) | RS59596B1 (ko) |
SI (1) | SI2840788T1 (ko) |
WO (2) | WO2013157826A1 (ko) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015093811A1 (ko) * | 2013-12-16 | 2015-06-25 | 엘지전자 주식회사 | 트릭 플레이 서비스 제공을 위한 신호 송수신 장치 및 신호 송수신 방법 |
JP2017055438A (ja) * | 2016-11-16 | 2017-03-16 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
US10455243B2 (en) | 2014-03-07 | 2019-10-22 | Sony Corporation | Transmission device, transmission method, reception device, and reception method for a first stream having encoded image data of pictures on a low-level side and a second stream having encoded image data of pictures on a high-level side |
Families Citing this family (49)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130116782A (ko) * | 2012-04-16 | 2013-10-24 | 한국전자통신연구원 | 계층적 비디오 부호화에서의 계층정보 표현방식 |
CN104620585A (zh) * | 2012-09-09 | 2015-05-13 | Lg电子株式会社 | 图像解码方法和使用其的装置 |
US10805605B2 (en) * | 2012-12-21 | 2020-10-13 | Telefonaktiebolaget Lm Ericsson (Publ) | Multi-layer video stream encoding and decoding |
US10129550B2 (en) * | 2013-02-01 | 2018-11-13 | Qualcomm Incorporated | Inter-layer syntax prediction control |
CN109451320B (zh) * | 2013-06-05 | 2023-06-02 | 太阳专利托管公司 | 图像编码方法、图像解码方法、图像编码装置以及图像解码装置 |
WO2015054634A2 (en) * | 2013-10-11 | 2015-04-16 | Vid Scale, Inc. | High level syntax for hevc extensions |
KR102248848B1 (ko) | 2013-10-26 | 2021-05-06 | 삼성전자주식회사 | 멀티 레이어 비디오 부호화 방법 및 장치, 멀티 레이어 비디오 복호화 방법 및 장치 |
KR20150064676A (ko) * | 2013-12-03 | 2015-06-11 | 주식회사 케이티 | 멀티 레이어 비디오 신호 인코딩/디코딩 방법 및 장치 |
KR20150064678A (ko) * | 2013-12-03 | 2015-06-11 | 주식회사 케이티 | 멀티 레이어 비디오 신호 인코딩/디코딩 방법 및 장치 |
KR102266902B1 (ko) * | 2014-01-13 | 2021-06-18 | 삼성전자주식회사 | 멀티 레이어 비디오 부호화 방법 및 장치, 멀티 레이어 비디오 복호화 방법 및 장치 |
JP6150134B2 (ja) | 2014-03-24 | 2017-06-21 | ソニー株式会社 | 画像符号化装置および方法、画像復号装置および方法、プログラム、並びに記録媒体 |
WO2016098056A1 (en) | 2014-12-18 | 2016-06-23 | Nokia Technologies Oy | An apparatus, a method and a computer program for video coding and decoding |
EP3313079B1 (en) * | 2015-06-18 | 2021-09-01 | LG Electronics Inc. | Image filtering method in image coding system |
KR102602690B1 (ko) * | 2015-10-08 | 2023-11-16 | 한국전자통신연구원 | 화질에 기반한 적응적 부호화 및 복호화를 위한 방법 및 장치 |
WO2018037985A1 (ja) * | 2016-08-22 | 2018-03-01 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
US10692262B2 (en) | 2017-01-12 | 2020-06-23 | Electronics And Telecommunications Research Institute | Apparatus and method for processing information of multiple cameras |
US11496761B2 (en) * | 2018-06-30 | 2022-11-08 | Sharp Kabushiki Kaisha | Systems and methods for signaling picture types of pictures included in coded video |
US10904545B2 (en) * | 2018-12-26 | 2021-01-26 | Tencent America LLC | Method for syntax controlled decoded picture buffer management |
EP3939318A1 (en) * | 2019-03-11 | 2022-01-19 | VID SCALE, Inc. | Sub-picture bitstream extraction and reposition |
WO2020184673A1 (ja) * | 2019-03-12 | 2020-09-17 | ソニー株式会社 | 画像復号装置、画像復号方法、画像符号化装置、および画像符号化方法 |
US11310560B2 (en) | 2019-05-17 | 2022-04-19 | Samsung Electronics Co., Ltd. | Bitstream merger and extractor |
WO2020235552A1 (en) * | 2019-05-19 | 2020-11-26 | Sharp Kabushiki Kaisha | Systems and methods for signaling picture property information in video coding |
WO2020242229A1 (ko) * | 2019-05-28 | 2020-12-03 | 삼성전자주식회사 | 작은 크기의 인트라 블록을 방지하기 위한 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치 |
CN113950842A (zh) * | 2019-06-20 | 2022-01-18 | 索尼半导体解决方案公司 | 图像处理装置和方法 |
US11032548B2 (en) * | 2019-06-24 | 2021-06-08 | Tencent America LLC | Signaling for reference picture resampling |
US11457242B2 (en) * | 2019-06-24 | 2022-09-27 | Qualcomm Incorporated | Gradual random access (GRA) signalling in video coding |
KR20220027207A (ko) | 2019-07-08 | 2022-03-07 | 후아웨이 테크놀러지 컴퍼니 리미티드 | 비디오 코딩에서의 혼합된 nal 유닛 픽처 제약 조건 |
CN110446047A (zh) * | 2019-08-16 | 2019-11-12 | 苏州浪潮智能科技有限公司 | 视频码流的解码方法及装置 |
CA3156854C (en) * | 2019-10-07 | 2024-05-21 | Huawei Technologies Co., Ltd. | An encoder, a decoder and corresponding methods |
CN114503587A (zh) * | 2019-10-07 | 2022-05-13 | Lg电子株式会社 | 点云数据发送装置、点云数据发送方法、点云数据接收装置和点云数据接收方法 |
JP7526268B2 (ja) * | 2019-12-23 | 2024-07-31 | エルジー エレクトロニクス インコーポレイティド | Nalユニット関連情報に基づく映像又はビデオコーディング |
CN115104316A (zh) | 2019-12-23 | 2022-09-23 | Lg电子株式会社 | 用于切片或图片的基于nal单元类型的图像或视频编码 |
WO2021132962A1 (ko) | 2019-12-23 | 2021-07-01 | 엘지전자 주식회사 | Nal 유닛 타입 기반 영상 또는 비디오 코딩 |
CN116743997A (zh) * | 2019-12-27 | 2023-09-12 | 阿里巴巴(中国)有限公司 | 用信号通知子图像划分信息的方法和装置 |
WO2021137590A1 (ko) | 2020-01-02 | 2021-07-08 | 엘지전자 주식회사 | Ph nal 유닛 코딩 관련 영상 디코딩 방법 및 그 장치 |
AU2020418309B2 (en) * | 2020-01-02 | 2024-04-04 | Lg Electronics Inc. | Image decoding method and apparatus for coding image information including picture header |
CN115280781A (zh) * | 2020-01-02 | 2022-11-01 | Lg电子株式会社 | 图像解码方法及其装置 |
BR112022013931A2 (pt) * | 2020-01-14 | 2022-10-04 | Lg Electronics Inc | Método e dispositivo de codificação/decodificação de imagem para sinalizar informação relacionada à subimagem e cabeçalho de imagem e método para transmissão de fluxo de bits |
WO2021188451A1 (en) * | 2020-03-16 | 2021-09-23 | Bytedance Inc. | Random access point access unit in scalable video coding |
CN115299070A (zh) * | 2020-03-17 | 2022-11-04 | 华为技术有限公司 | 编码器、解码器及对应的方法 |
CN113453006B (zh) * | 2020-03-25 | 2024-04-16 | 腾讯美国有限责任公司 | 一种图片封装方法、设备以及存储介质 |
EP4124032A4 (en) * | 2020-04-12 | 2023-05-31 | LG Electronics, Inc. | POINT CLOUD DATA TRANSMITTING DEVICE, POINT CLOUD DATA TRANSMITTING METHOD, POINT CLOUD DATA RECEIVING DEVICE, AND POINT CLOUD DATA RECEIVING METHOD |
CN115552903A (zh) * | 2020-05-12 | 2022-12-30 | Lg电子株式会社 | 处理图像/视频编码系统中的单层比特流内的参数集的参考的方法和装置 |
US11223841B2 (en) | 2020-05-29 | 2022-01-11 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence encoding and artificial intelligence decoding on image |
KR102421720B1 (ko) * | 2020-05-29 | 2022-07-18 | 삼성전자주식회사 | 영상의 ai 부호화 및 ai 복호화를 위한 장치, 및 방법 |
WO2021251611A1 (en) | 2020-06-11 | 2021-12-16 | Samsung Electronics Co., Ltd. | Apparatus and method for performing artificial intelligence encoding and decoding on image by using low-complexity neural network |
BR112023019239A2 (pt) * | 2021-04-02 | 2023-10-17 | Qualcomm Inc | Mensagem de informação de aprimoramento suplemental de métricas de qualidade e orientação de imagem para codificação de vídeo |
US11895336B2 (en) | 2021-04-02 | 2024-02-06 | Qualcomm Incorporated | Picture orientation and quality metrics supplemental enhancement information message for video coding |
WO2024167266A1 (ko) * | 2023-02-09 | 2024-08-15 | 삼성전자 주식회사 | 전자 장치 및 전자 장치에서 스케일러블 코덱을 처리하는 방법 |
Family Cites Families (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1578136A3 (en) * | 1998-01-27 | 2005-10-19 | AT&T Corp. | Method and apparatus for encoding video shape and texture information |
US6895048B2 (en) * | 1998-03-20 | 2005-05-17 | International Business Machines Corporation | Adaptive encoding of a sequence of still frames or partially still frames within motion video |
EP1500002A1 (en) * | 2002-04-29 | 2005-01-26 | Sony Electronics Inc. | Supporting advanced coding formats in media files |
US8752197B2 (en) * | 2002-06-18 | 2014-06-10 | International Business Machines Corporation | Application independent system, method, and architecture for privacy protection, enhancement, control, and accountability in imaging service systems |
CN100423581C (zh) * | 2002-12-30 | 2008-10-01 | Nxp股份有限公司 | 动态图形的编码/解码方法及其设备 |
JP4479160B2 (ja) * | 2003-03-11 | 2010-06-09 | チッソ株式会社 | シルセスキオキサン誘導体を用いて得られる重合体 |
KR100596706B1 (ko) | 2003-12-01 | 2006-07-04 | 삼성전자주식회사 | 스케일러블 비디오 코딩 및 디코딩 방법, 이를 위한 장치 |
WO2005055608A1 (en) | 2003-12-01 | 2005-06-16 | Samsung Electronics Co., Ltd. | Method and apparatus for scalable video encoding and decoding |
US7415069B2 (en) * | 2003-12-09 | 2008-08-19 | Lsi Corporation | Method for activation and deactivation of infrequently changing sequence and picture parameter sets |
US7586924B2 (en) * | 2004-02-27 | 2009-09-08 | Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. | Apparatus and method for coding an information signal into a data stream, converting the data stream and decoding the data stream |
KR101132062B1 (ko) * | 2004-06-02 | 2012-04-02 | 파나소닉 주식회사 | 화상 부호화 장치 및 화상 복호화 장치 |
JP4575129B2 (ja) * | 2004-12-02 | 2010-11-04 | ソニー株式会社 | データ処理装置およびデータ処理方法、並びにプログラムおよびプログラム記録媒体 |
JP2006203661A (ja) * | 2005-01-21 | 2006-08-03 | Toshiba Corp | 動画像符号化装置、動画像復号装置及び符号化ストリーム生成方法 |
JP2006211274A (ja) * | 2005-01-27 | 2006-08-10 | Toshiba Corp | 記録媒体、この記録媒体を再生する方法並びにその再生装置及び記録媒体に映像データを記録する記録装置並びにその記録方法 |
EP1869888B1 (en) * | 2005-04-13 | 2016-07-06 | Nokia Technologies Oy | Method, device and system for effectively coding and decoding of video data |
CN101120593A (zh) | 2005-04-13 | 2008-02-06 | 诺基亚公司 | 可扩展性信息的编码、存储和信号发送 |
EP1897377A4 (en) | 2005-05-26 | 2010-12-22 | Lg Electronics Inc | METHOD FOR PROVIDING AND USING INTERLOCK PREDICTION INFORMATION FOR VIDEO SIGNAL |
EP1773063A1 (en) * | 2005-06-14 | 2007-04-11 | Thomson Licensing | Method and apparatus for encoding video data, and method and apparatus for decoding video data |
FR2888424A1 (fr) * | 2005-07-07 | 2007-01-12 | Thomson Licensing Sas | Dispositif et procede de codage et de decodage de donnees video et train de donnees |
EP1949701A1 (en) * | 2005-10-11 | 2008-07-30 | Nokia Corporation | Efficient decoded picture buffer management for scalable video coding |
US20100158133A1 (en) * | 2005-10-12 | 2010-06-24 | Peng Yin | Method and Apparatus for Using High-Level Syntax in Scalable Video Encoding and Decoding |
MY159176A (en) * | 2005-10-19 | 2016-12-30 | Thomson Licensing | Multi-view video coding using scalable video coding |
KR100889745B1 (ko) | 2006-01-09 | 2009-03-24 | 한국전자통신연구원 | 날 유닛 타입 표시방법 및 그에 따른 비트스트림 전달장치및 리던던트 슬라이스 부호화 장치 |
JP4731343B2 (ja) * | 2006-02-06 | 2011-07-20 | 富士通東芝モバイルコミュニケーションズ株式会社 | 復号装置 |
EP1827023A1 (en) * | 2006-02-27 | 2007-08-29 | THOMSON Licensing | Method and apparatus for packet loss detection and virtual packet generation at SVC decoders |
US8767836B2 (en) * | 2006-03-27 | 2014-07-01 | Nokia Corporation | Picture delimiter in scalable video coding |
MX2008011652A (es) * | 2006-03-29 | 2008-09-22 | Thomson Licensing | Metodos y aparatos para usarse en un sistema de codificacion de video de multiples vistas. |
CN101455084A (zh) * | 2006-03-30 | 2009-06-10 | Lg电子株式会社 | 用于解码/编码视频信号的方法和装置 |
EP2039168A2 (en) * | 2006-07-05 | 2009-03-25 | Thomson Licensing | Methods and apparatus for multi-view video encoding and decoding |
KR20080007086A (ko) * | 2006-07-14 | 2008-01-17 | 엘지전자 주식회사 | 비디오 신호의 디코딩/인코딩 방법 및 장치 |
US8532178B2 (en) | 2006-08-25 | 2013-09-10 | Lg Electronics Inc. | Method and apparatus for decoding/encoding a video signal with inter-view reference picture list construction |
CN102158697B (zh) | 2006-09-07 | 2013-10-09 | Lg电子株式会社 | 用于解码/编码视频信号的方法及装置 |
CN101395925B (zh) * | 2006-09-07 | 2013-01-02 | Lg电子株式会社 | 用于解码/编码视频信号的方法及装置 |
JP5087627B2 (ja) * | 2006-09-28 | 2012-12-05 | トムソン ライセンシング | 効果的なレート制御および拡張したビデオ符号化品質のためのρ領域フレームレベルビット割り当てのための方法 |
BRPI0719536A2 (pt) * | 2006-10-16 | 2014-01-14 | Thomson Licensing | Método para utilização de uma unidade de camada genérica na rede de trabalho sinalizando uma reposição instantânea de decodificação durante uma operação em vídeo. |
BRPI0718421A2 (pt) * | 2006-10-24 | 2013-11-12 | Thomson Licensing | Gerenciamento de quadro para codificação de vídeo de multivistas |
KR100896289B1 (ko) * | 2006-11-17 | 2009-05-07 | 엘지전자 주식회사 | 비디오 신호의 디코딩/인코딩 방법 및 장치 |
JP5157140B2 (ja) * | 2006-11-29 | 2013-03-06 | ソニー株式会社 | 記録装置、記録方法、情報処理装置、情報処理方法、撮像装置およびビデオシステム |
US10291863B2 (en) * | 2006-12-21 | 2019-05-14 | InterDigital VC Holdings Inc. | Method for indicating coding order in multi-view video coded content |
MY162367A (en) | 2007-01-05 | 2017-06-15 | Thomson Licensing | Hypothetical reference decoder for scalable video coding |
CN101543018B (zh) | 2007-01-12 | 2012-12-26 | 庆熙大学校产学协力团 | 网络提取层单元的分组格式、使用该格式的视频编解码算法和装置以及使用该格式进行IPv6标签交换的QoS控制算法和装置 |
JP5023739B2 (ja) | 2007-02-28 | 2012-09-12 | ソニー株式会社 | 画像情報符号化装置及び符号化方法 |
CN101641954B (zh) * | 2007-03-23 | 2011-09-14 | Lg电子株式会社 | 用于解码/编码视频信号的方法和装置 |
JP5686594B2 (ja) * | 2007-04-12 | 2015-03-18 | トムソン ライセンシングThomson Licensing | スケーラブル・ビデオ符号化のためのビデオ・ユーザビリティ情報(vui)用の方法及び装置 |
US20100142613A1 (en) | 2007-04-18 | 2010-06-10 | Lihua Zhu | Method for encoding video data in a scalable manner |
MX2010000684A (es) | 2007-08-24 | 2010-03-30 | Lg Electronics Inc | Sistema de difusion digital y metodo para procesar datos en el sistema de difusion digital. |
BRPI0817420A2 (pt) | 2007-10-05 | 2013-06-18 | Thomson Licensing | mÉtodos e aparelho para incorporar informaÇço de usabilidade de vÍdeo (vui) em um sistema de codificaÇço de vÍdeo de méltiplas visualizaÇÕes (mvc) |
KR101345287B1 (ko) * | 2007-10-12 | 2013-12-27 | 삼성전자주식회사 | 스케일러블 영상 부호화 방법 및 장치와 그 영상 복호화방법 및 장치 |
EP2304940A4 (en) | 2008-07-22 | 2011-07-06 | Thomson Licensing | METHODS OF ERROR DISSIMULATION DUE TO ENHANCEMENT LAYER PACKET LOSS IN SCALEABLE VIDEO ENCODING DECODING (SVC) |
JP2012504925A (ja) * | 2008-10-06 | 2012-02-23 | エルジー エレクトロニクス インコーポレイティド | ビデオ信号の処理方法及び装置 |
US20100226227A1 (en) * | 2009-03-09 | 2010-09-09 | Chih-Ching Yu | Methods and apparatuses of processing readback signal generated from reading optical storage medium |
JP5332773B2 (ja) * | 2009-03-18 | 2013-11-06 | ソニー株式会社 | 画像処理装置および方法 |
US8566393B2 (en) | 2009-08-10 | 2013-10-22 | Seawell Networks Inc. | Methods and systems for scalable video chunking |
KR101124723B1 (ko) | 2009-08-21 | 2012-03-23 | 에스케이플래닛 주식회사 | 해상도 시그널링을 이용한 스케일러블 비디오 재생 시스템 및 방법 |
US8976871B2 (en) * | 2009-09-16 | 2015-03-10 | Qualcomm Incorporated | Media extractor tracks for file format track selection |
EP2346261A1 (en) * | 2009-11-18 | 2011-07-20 | Tektronix International Sales GmbH | Method and apparatus for multiplexing H.264 elementary streams without timing information coded |
CN102103651B (zh) | 2009-12-21 | 2012-11-14 | 中国移动通信集团公司 | 一种一卡通系统的实现方法和系统以及一种智能卡 |
US9185439B2 (en) | 2010-07-15 | 2015-11-10 | Qualcomm Incorporated | Signaling data for multiplexing video components |
KR20120015260A (ko) | 2010-07-20 | 2012-02-21 | 한국전자통신연구원 | 스케일러빌리티 및 뷰 정보를 제공하는 스트리밍 서비스를 위한 방법 및 장치 |
KR20120038870A (ko) | 2010-10-14 | 2012-04-24 | 정태길 | 클라우드 컴퓨팅 기반의 모바일 오피스 프린팅 부가 서비스 방법 |
KR101158244B1 (ko) | 2010-12-14 | 2012-07-20 | 주식회사 동호 | 하천 친환경 생태 조성 구조체 및 시스템 |
JP2012142551A (ja) | 2010-12-16 | 2012-07-26 | Nisshin:Kk | 加熱処理方法およびその装置 |
KR101740425B1 (ko) | 2010-12-23 | 2017-05-26 | 에스케이텔레콤 주식회사 | 중계기 및 상기 중계기의 신호 중계 방법 |
KR101214465B1 (ko) | 2010-12-30 | 2012-12-21 | 주식회사 신한엘리베이타 | 가볍고 방수성이 우수한 방수발판부재가 구비된 에스컬레이터 장치 |
JP5738434B2 (ja) * | 2011-01-14 | 2015-06-24 | ヴィディオ・インコーポレーテッド | 改善されたnalユニットヘッダ |
US20120230409A1 (en) | 2011-03-07 | 2012-09-13 | Qualcomm Incorporated | Decoded picture buffer management |
JP5833682B2 (ja) * | 2011-03-10 | 2015-12-16 | ヴィディオ・インコーポレーテッド | スケーラブルなビデオ符号化のための依存性パラメータセット |
JP5708124B2 (ja) | 2011-03-25 | 2015-04-30 | 三菱電機株式会社 | 半導体装置 |
MX2013014857A (es) | 2011-06-30 | 2014-03-26 | Ericsson Telefon Ab L M | Señalizacion de imagenes de referencia. |
RU2014105292A (ru) | 2011-07-13 | 2015-08-20 | Телефонактиеболагет Л М Эрикссон (Пабл) | Кодер, декодер и способы их работы для управления опорными изображениями |
US9237356B2 (en) | 2011-09-23 | 2016-01-12 | Qualcomm Incorporated | Reference picture list construction for video coding |
US9473752B2 (en) * | 2011-11-30 | 2016-10-18 | Qualcomm Incorporated | Activation of parameter sets for multiview video coding (MVC) compatible three-dimensional video coding (3DVC) |
US9451252B2 (en) * | 2012-01-14 | 2016-09-20 | Qualcomm Incorporated | Coding parameter sets and NAL unit headers for video coding |
DK3481068T3 (da) | 2012-04-13 | 2020-11-16 | Ge Video Compression Llc | Billedkodning med lav forsinkelse |
KR20130116782A (ko) | 2012-04-16 | 2013-10-24 | 한국전자통신연구원 | 계층적 비디오 부호화에서의 계층정보 표현방식 |
US9554146B2 (en) * | 2012-09-21 | 2017-01-24 | Qualcomm Incorporated | Indication and activation of parameter sets for video coding |
ES2648970T3 (es) * | 2012-12-21 | 2018-01-09 | Telefonaktiebolaget Lm Ericsson (Publ) | Codificación y decodificación de flujo de video multicapa |
CN105122816A (zh) | 2013-04-05 | 2015-12-02 | 夏普株式会社 | 层间参考图像集的解码和参考图像列表构建 |
US20140307803A1 (en) | 2013-04-08 | 2014-10-16 | Qualcomm Incorporated | Non-entropy encoded layer dependency information |
-
2012
- 2012-09-28 KR KR1020120108925A patent/KR20130116782A/ko unknown
- 2012-10-10 KR KR1020120112598A patent/KR20130116783A/ko unknown
-
2013
- 2013-04-16 PL PL18150626T patent/PL3340630T3/pl unknown
- 2013-04-16 KR KR1020130041862A patent/KR101378861B1/ko active IP Right Grant
- 2013-04-16 EP EP18150626.2A patent/EP3340630B1/en not_active Revoked
- 2013-04-16 CN CN201810384124.2A patent/CN108769710B/zh active Active
- 2013-04-16 PT PT137776761T patent/PT2840788T/pt unknown
- 2013-04-16 EP EP13777635.7A patent/EP2840787A4/en not_active Ceased
- 2013-04-16 CN CN201380025824.8A patent/CN104303503B/zh active Active
- 2013-04-16 WO PCT/KR2013/003204 patent/WO2013157826A1/ko active Application Filing
- 2013-04-16 EP EP21177418.7A patent/EP3893511A1/en active Pending
- 2013-04-16 EP EP13777676.1A patent/EP2840788B1/en active Active
- 2013-04-16 CN CN201810384111.5A patent/CN108769705B/zh active Active
- 2013-04-16 EP EP19183291.4A patent/EP3570546A1/en active Pending
- 2013-04-16 WO PCT/KR2013/003206 patent/WO2013157828A1/ko active Application Filing
- 2013-04-16 EP EP21167414.8A patent/EP3866472A1/en active Pending
- 2013-04-16 CN CN201810384113.4A patent/CN108769707B/zh active Active
- 2013-04-16 EP EP16163783.0A patent/EP3086556A1/en not_active Ceased
- 2013-04-16 CN CN201810384265.4A patent/CN108769712B/zh active Active
- 2013-04-16 CN CN201810384705.6A patent/CN108769687B/zh active Active
- 2013-04-16 CN CN201810384264.XA patent/CN108769711B/zh active Active
- 2013-04-16 RS RS20191215A patent/RS59596B1/sr unknown
- 2013-04-16 DK DK13777676.1T patent/DK2840788T3/da active
- 2013-04-16 US US14/391,151 patent/US10595026B2/en active Active
- 2013-04-16 PL PL13777676T patent/PL2840788T3/pl unknown
- 2013-04-16 US US14/391,061 patent/US10602160B2/en active Active
- 2013-04-16 HU HUE13777676A patent/HUE045980T2/hu unknown
- 2013-04-16 KR KR1020130041863A patent/KR101953703B1/ko active IP Right Grant
- 2013-04-16 CN CN201810384112.XA patent/CN108769706B/zh active Active
- 2013-04-16 CN CN201810384114.9A patent/CN108769708B/zh active Active
- 2013-04-16 JP JP2015506892A patent/JP5933815B2/ja active Active
- 2013-04-16 CN CN201810384281.3A patent/CN108769713B/zh active Active
- 2013-04-16 SI SI201331581T patent/SI2840788T1/sl unknown
- 2013-04-16 CN CN201810384123.8A patent/CN108769709B/zh active Active
- 2013-04-16 CN CN201810384148.8A patent/CN108769686B/zh active Active
- 2013-04-16 LT LT13777676T patent/LT2840788T/lt unknown
- 2013-04-16 ES ES13777676T patent/ES2748463T3/es active Active
- 2013-10-22 KR KR1020130125869A patent/KR101640583B1/ko active IP Right Grant
-
2014
- 2014-05-07 KR KR20140054447A patent/KR101488494B1/ko active IP Right Grant
- 2014-05-07 KR KR20140054446A patent/KR101488495B1/ko active IP Right Grant
- 2014-05-07 KR KR20140054448A patent/KR101488493B1/ko active IP Right Grant
- 2014-05-07 KR KR20140054445A patent/KR101488496B1/ko active IP Right Grant
- 2014-05-07 KR KR1020140054449A patent/KR101673291B1/ko active IP Right Grant
-
2016
- 2016-02-22 JP JP2016031335A patent/JP6186026B2/ja active Active
- 2016-05-02 JP JP2016092745A patent/JP6163229B2/ja active Active
- 2016-05-02 JP JP2016092752A patent/JP6224163B2/ja active Active
- 2016-05-02 JP JP2016092743A patent/JP6224162B2/ja active Active
- 2016-05-02 JP JP2016092749A patent/JP6163230B2/ja active Active
- 2016-07-12 KR KR1020160088203A patent/KR101719344B1/ko active IP Right Grant
- 2016-07-12 KR KR1020160088205A patent/KR101739748B1/ko active IP Right Grant
- 2016-07-12 KR KR1020160088204A patent/KR101719345B1/ko active IP Right Grant
-
2017
- 2017-02-02 KR KR1020170015044A patent/KR101843566B1/ko active IP Right Grant
- 2017-07-28 JP JP2017146933A patent/JP6549189B2/ja active Active
- 2017-09-28 KR KR1020170126210A patent/KR101843565B1/ko active IP Right Grant
-
2018
- 2018-03-23 KR KR1020180034089A patent/KR101904242B1/ko active IP Right Grant
- 2018-03-23 KR KR1020180034113A patent/KR101904237B1/ko active IP Right Grant
- 2018-03-23 KR KR1020180034128A patent/KR101904264B1/ko active IP Right Grant
- 2018-03-23 KR KR1020180034040A patent/KR101904258B1/ko active IP Right Grant
- 2018-03-23 KR KR1020180034107A patent/KR101904234B1/ko active IP Right Grant
- 2018-03-23 KR KR1020180034120A patent/KR101904247B1/ko active IP Right Grant
- 2018-03-23 KR KR1020180034131A patent/KR101931719B1/ko active IP Right Grant
- 2018-03-23 KR KR1020180034057A patent/KR101904255B1/ko active IP Right Grant
- 2018-04-26 JP JP2018085588A patent/JP6556904B2/ja active Active
- 2018-04-26 JP JP2018085608A patent/JP6556905B2/ja active Active
- 2018-04-26 JP JP2018085624A patent/JP6556906B2/ja active Active
- 2018-04-26 JP JP2018085634A patent/JP6556907B2/ja active Active
- 2018-04-26 JP JP2018085564A patent/JP6556903B2/ja active Active
- 2018-04-27 JP JP2018087416A patent/JP6549282B2/ja active Active
- 2018-04-27 JP JP2018087426A patent/JP6549283B2/ja active Active
- 2018-04-27 JP JP2018087408A patent/JP6553246B2/ja active Active
- 2018-04-27 JP JP2018087395A patent/JP6553245B2/ja active Active
-
2019
- 2019-02-25 KR KR1020190021762A patent/KR102062329B1/ko active IP Right Grant
- 2019-06-28 JP JP2019122109A patent/JP6871312B2/ja active Active
- 2019-06-28 JP JP2019122072A patent/JP6841869B2/ja active Active
- 2019-09-24 HR HRP20191726TT patent/HRP20191726T1/hr unknown
- 2019-09-27 CY CY20191101019T patent/CY1122257T1/el unknown
-
2020
- 2020-01-31 US US16/778,313 patent/US10958918B2/en active Active
- 2020-02-07 US US16/784,714 patent/US10958919B2/en active Active
-
2021
- 2021-01-08 US US17/144,409 patent/US11483578B2/en active Active
- 2021-02-12 US US17/174,843 patent/US11490100B2/en active Active
- 2021-02-18 JP JP2021024625A patent/JP7041294B2/ja active Active
- 2021-04-15 JP JP2021069283A patent/JP7123210B2/ja active Active
-
2022
- 2022-03-10 JP JP2022037465A patent/JP7305831B2/ja active Active
- 2022-08-09 JP JP2022127432A patent/JP7367144B2/ja active Active
- 2022-08-09 JP JP2022127436A patent/JP7367145B2/ja active Active
- 2022-08-09 JP JP2022127407A patent/JP7367143B2/ja active Active
- 2022-08-09 JP JP2022127399A patent/JP7367142B2/ja active Active
- 2022-08-09 JP JP2022127390A patent/JP7367141B2/ja active Active
- 2022-08-10 JP JP2022128503A patent/JP7431290B2/ja active Active
- 2022-08-10 JP JP2022128498A patent/JP7432668B2/ja active Active
- 2022-09-22 US US17/950,248 patent/US12028538B2/en active Active
- 2022-09-23 US US17/951,797 patent/US11949890B2/en active Active
-
2023
- 2023-10-17 JP JP2023179054A patent/JP2023179726A/ja active Pending
-
2024
- 2024-01-19 US US18/417,415 patent/US20240155140A1/en active Pending
- 2024-02-02 JP JP2024015010A patent/JP2024050775A/ja active Pending
- 2024-02-27 US US18/588,746 patent/US20240205428A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20060068254A (ko) * | 2004-12-16 | 2006-06-21 | 엘지전자 주식회사 | 비디오 부호화 방법, 복호화 방법 그리고, 복호화 장치 |
KR20060122664A (ko) * | 2005-05-26 | 2006-11-30 | 엘지전자 주식회사 | 레이어간 예측방식를 사용해 엔코딩된 영상신호를디코딩하는 방법 |
KR100763196B1 (ko) * | 2005-10-19 | 2007-10-04 | 삼성전자주식회사 | 어떤 계층의 플래그를 계층간의 연관성을 이용하여부호화하는 방법, 상기 부호화된 플래그를 복호화하는방법, 및 장치 |
KR20090079932A (ko) * | 2006-10-16 | 2009-07-22 | 노키아 코포레이션 | 멀티뷰 비디오 코딩에서 효율적인 디코딩된 버퍼 관리를 구현하기 위한 시스템 및 방법 |
WO2011052215A1 (ja) * | 2009-10-30 | 2011-05-05 | パナソニック株式会社 | 復号方法、復号装置、符号化方法、および符号化装置 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015093811A1 (ko) * | 2013-12-16 | 2015-06-25 | 엘지전자 주식회사 | 트릭 플레이 서비스 제공을 위한 신호 송수신 장치 및 신호 송수신 방법 |
US10230999B2 (en) | 2013-12-16 | 2019-03-12 | Lg Electronics Inc. | Signal transmission/reception device and signal transmission/reception method for providing trick play service |
US10455243B2 (en) | 2014-03-07 | 2019-10-22 | Sony Corporation | Transmission device, transmission method, reception device, and reception method for a first stream having encoded image data of pictures on a low-level side and a second stream having encoded image data of pictures on a high-level side |
US11122280B2 (en) | 2014-03-07 | 2021-09-14 | Sony Corporation | Transmission device, transmission method, reception device, and reception method using hierarchical encoding to allow decoding based on device capability |
US11394984B2 (en) | 2014-03-07 | 2022-07-19 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
US11758160B2 (en) | 2014-03-07 | 2023-09-12 | Sony Group Corporation | Transmission device, transmission method, reception device, and reception method |
JP2017055438A (ja) * | 2016-11-16 | 2017-03-16 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
Also Published As
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013157826A1 (ko) | 영상 정보 디코딩 방법, 영상 디코딩 방법 및 이를 이용하는 장치 | |
WO2014092407A1 (ko) | 영상의 디코딩 방법 및 이를 이용하는 장치 | |
WO2014003379A1 (ko) | 영상 디코딩 방법 및 이를 이용하는 장치 | |
WO2014038906A1 (ko) | 영상 복호화 방법 및 이를 이용하는 장치 | |
WO2015056941A1 (ko) | 다계층 기반의 영상 부호화/복호화 방법 및 장치 | |
WO2015009036A1 (ko) | 시간적 서브 레이어 정보에 기반한 인터 레이어 예측 방법 및 장치 | |
WO2014042460A1 (ko) | 영상 부호화/복호화 방법 및 장치 | |
WO2021177794A1 (ko) | 혼성 nal 유닛 타입에 기반하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2014107069A1 (ko) | 영상 부호화/복호화 방법 및 장치 | |
WO2021132964A1 (ko) | Nal 유닛 관련 정보 기반 영상 또는 비디오 코딩 | |
WO2021177791A1 (ko) | 혼성 nal 유닛 타입에 기반하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 전송하는 방법 | |
WO2021132963A1 (ko) | 슬라이스 또는 픽처에 대한 nal 유닛 타입 기반 영상 또는 비디오 코딩 | |
WO2021066618A1 (ko) | 변환 스킵 및 팔레트 코딩 관련 정보의 시그널링 기반 영상 또는 비디오 코딩 | |
WO2022131870A1 (ko) | Nal 유닛 어레이 정보를 포함하는 미디어 파일 생성/수신 방법, 장치 및 미디어 파일 전송 방법 | |
WO2022060113A1 (ko) | 미디어 파일 처리 방법 및 그 장치 | |
WO2021241963A1 (ko) | 비디오 또는 영상 코딩 시스템에서의 poc 정보 및 비-참조 픽처 플래그에 기반한 영상 코딩 방법 | |
WO2021201513A1 (ko) | Sps 내 ptl, dpb 및 hrd 관련 정보를 시그널링하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 컴퓨터 판독 가능한 기록 매체 | |
WO2021066609A1 (ko) | 변환 스킵 및 팔레트 코딩 관련 고급 문법 요소 기반 영상 또는 비디오 코딩 | |
WO2021125701A1 (ko) | 인터 예측 기반 영상/비디오 코딩 방법 및 장치 | |
WO2020185036A1 (ko) | 비디오 신호를 처리하기 위한 방법 및 장치 | |
WO2021251718A1 (ko) | 서브레이어 레벨 정보에 기반한 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장하는 기록 매체 | |
WO2021251752A1 (ko) | 최대 시간 식별자에 기반하여 서브 비트스트림 추출과정을 수행하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 컴퓨터 판독가능한 기록매체 | |
WO2024049269A1 (ko) | Hrd 파라미터를 시그널링하는 영상 부호화/복호화 방법, 장치 및 비트스트림을 저장한 컴퓨터 판독 가능한 기록 매체 | |
WO2022131845A1 (ko) | Nal 유닛 정보를 포함하는 미디어 파일 생성/수신 방법, 장치 및 미디어 파일 전송 방법 | |
WO2021066610A1 (ko) | 변환 스킵 및 팔레트 코딩 관련 정보 기반 영상 또는 비디오 코딩 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13777676 Country of ref document: EP Kind code of ref document: A1 |
|
DPE2 | Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101) | ||
WWE | Wipo information: entry into national phase |
Ref document number: 14391061 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013777676 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2015506892 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |