WO2015098563A1 - Image encoding device and method, and image decoding device and method
Image encoding device and method, and image decoding device and method
- Publication number
- WO2015098563A1 (PCT/JP2014/082924)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- layer
- unit
- inter
- information
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/132—Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/174—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/187—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
Definitions
- The present disclosure relates to an image encoding device and method, and an image decoding device and method, and in particular to an image encoding device and method, and an image decoding device and method, that make it possible to smoothly perform processing related to layers.
- H.264 / AVC High Efficiency Video Coding
- JCTVC Joint Collaboration Team-Video Coding
- Conventional image coding schemes such as MPEG-2 and AVC have a scalability function in which an image is layered into a plurality of layers and encoded.
- For a terminal with low processing capability, such as a mobile phone, image compression information of only the base layer is transmitted, and a moving image with low spatio-temporal resolution or low image quality is reproduced.
- For a terminal with high processing capability, image compression information of the enhancement layer is also transmitted, and a moving image with high spatio-temporal resolution and high image quality is reproduced.
- Non-Patent Document 2 defines a scalable extension related to HEVC.
- In the scalable extension, layer_id is specified in NAL_unit_header.
- In the VPS (Video Parameter Set), a layer set is defined by layer_id_included_flag.
- In VPS_extension, information regarding whether there is a direct dependency relationship between layers is transmitted by direct_dependency_flag.
- In Non-Patent Document 3, a skip picture is proposed. That is, when scalable coding processing is performed, if a skip picture is specified in the enhancement layer, the upsampled image of the base layer is output as it is, and no decoding processing is performed for that picture.
- Thereby, in the enhancement layer, when the CPU load is high, the amount of computation can be reduced to enable real-time operation, or, when a buffer overflow is likely to occur, the overflow can be avoided by not transmitting information for that picture.
- JCTVC-L1003_v4 Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO / IEC JTC 1 / SC 29 / WG 11 12th Meeting: Geneva, CH, 14-23 Jan. 2013 Jianle Chen, Jill Boyce, Yan Ye, Miska M.
- However, an image that has been upsampled two or more times may be output in the enhancement layer. That is, an image with extremely low resolution compared with the other pictures of that layer may be output as a decoded image. As described above, it may be difficult to smoothly perform processing related to layers.
- The present disclosure has been made in view of such a situation, and makes it possible to smoothly perform processing related to layers.
- The image encoding device according to the first aspect of the present disclosure includes: an acquisition unit that, when encoding processing is performed on an image having three or more layers, acquires inter-layer information indicating whether the image of the reference layer referred to by the target image to be encoded is in skip mode; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and, when the image of the reference layer is in skip mode, prohibits encoding the target image in skip mode.
- In the image encoding method according to the first aspect of the present disclosure, when an image encoding device performs encoding processing on an image having three or more layers, inter-layer information indicating whether the image of the reference layer referred to by the target image to be encoded is in skip mode is acquired, and, with reference to the acquired inter-layer information, when the image of the reference layer is in skip mode, encoding the target image in skip mode is prohibited.
- The image decoding device according to the first aspect of the present disclosure includes: an acquisition unit that, when decoding a bitstream in which an image having three or more layers is encoded, acquires inter-layer information indicating whether the image of the reference layer referred to by the target image to be decoded is in skip mode; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and, when the image of the reference layer is in skip mode, prohibits decoding the target image in skip mode.
- In the image decoding method according to the first aspect of the present disclosure, when an image decoding device decodes a bitstream in which an image having three or more layers is encoded, inter-layer information indicating whether the image of the reference layer referred to by the target image to be decoded is in skip mode is acquired, and, with reference to the acquired inter-layer information, when the image of the reference layer is in skip mode, decoding the target image in skip mode is prohibited.
- The image encoding device according to another aspect of the present disclosure includes: an acquisition unit that, when encoding processing is performed on an image having 64 or more layers, acquires inter-layer information indicating the number of layers of the image; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and sets information regarding the expanded number of layers in VPS_extension.
- In the image encoding method according to this aspect, when an image encoding device performs encoding processing on an image having 64 or more layers, inter-layer information indicating the number of layers of the image is acquired, and, with reference to the acquired inter-layer information, information regarding the expanded number of layers is set in VPS_extension.
- The image decoding device according to another aspect of the present disclosure includes: a receiving unit that receives, from a bitstream in which an image having 64 or more layers is encoded, information regarding the expanded number of layers set in VPS_extension; and a decoding unit that performs decoding processing with reference to the information regarding the expanded number of layers received by the receiving unit.
- In the image decoding method according to this aspect, the image decoding device receives, from a bitstream in which an image having 64 or more layers is encoded, information regarding the expanded number of layers set in VPS_extension, and performs decoding processing with reference to the received information regarding the expanded number of layers.
- In one aspect of the present disclosure, inter-layer information indicating whether the image of the reference layer referred to by the target image to be encoded is in skip mode is acquired. Then, with reference to the acquired inter-layer information, when the image of the reference layer is in skip mode, encoding the target image in skip mode is prohibited.
- Similarly, on the decoding side, inter-layer information indicating whether the image of the reference layer referred to by the target image to be decoded is in skip mode is acquired. Then, with reference to the acquired inter-layer information, when the image of the reference layer is in skip mode, decoding the target image in skip mode is prohibited.
- inter-layer information indicating the number of layers of the image is acquired when an image having 64 or more layers is encoded. Then, referring to the acquired inter-layer information, information regarding the expanded number of layers is set in VPS_extension.
- information on the extended number of layers set in VPS_extension is received from a bitstream in which an image having 64 or more layers is encoded. Then, the decoding process is performed with reference to the received information on the expanded number of layers.
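- As a purely illustrative sketch of the second idea (information regarding the expanded number of layers placed in VPS_extension), the following shows one possible way such signalling could be written; the flag, the field, and their bit widths are hypothetical and are not taken from this disclosure.

```cpp
#include <functional>

// Hypothetical signalling only: one possible way to carry an expanded layer count in
// vps_extension when more layers are needed than the 63 expressible in the base syntax.
void writeVpsExtensionLayerInfo(int totalLayers,
                                const std::function<void(unsigned value, int bits)>& writeBits) {
    const int kBaseMaxLayers = 63;               // maximum layers without the extension (per the text)
    bool extended = totalLayers > kBaseMaxLayers;
    writeBits(extended ? 1u : 0u, 1);            // hypothetical "extended layer count present" flag
    if (extended)
        writeBits(static_cast<unsigned>(totalLayers - 1), 16);  // hypothetical extended layer count field
}
```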
- the above-described image encoding device may be an independent device, or may be an internal block constituting one image processing device or an image encoding device.
- the above-described image decoding device may be an independent device, or may be an internal block constituting one image processing device or image decoding device.
- an image can be encoded.
- processing related to the hierarchy can be performed smoothly.
- an image can be decoded.
- processing related to the hierarchy can be performed smoothly.
- FIG. 20 is a block diagram illustrating a main configuration example of a computer. Further figures are block diagrams showing schematic configuration examples of a television apparatus and a mobile telephone.
- <Coding unit> In the AVC (Advanced Video Coding) method, a hierarchical structure is defined by macroblocks and sub-macroblocks. However, a macroblock of 16 pixels × 16 pixels is not optimal for a large image frame such as UHD (Ultra High Definition; 4000 pixels × 2000 pixels), which is a target of the next generation encoding method.
- UHD Ultra High Definition
- On the other hand, in the HEVC method, a coding unit (CU (Coding Unit)) is defined.
- CU is also called Coding Tree Block (CTB) and is a partial area of a picture unit image that plays the same role as a macroblock in the AVC method.
- CTB Coding Tree Block
- the latter is fixed to a size of 16 ⁇ 16 pixels, whereas the size of the former is not fixed, and is specified in the image compression information in each sequence.
- the maximum size (LCU (Largest Coding Unit)) and the minimum size (SCU (Smallest Coding Unit)) are specified.
- the LCU size is 128 and the maximum hierarchical depth is 5.
- For example, when the value of split_flag is “1”, a CU of 2N × 2N size is divided into CUs of N × N size one level below.
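- As an informal illustration of this recursive division (a minimal sketch, not taken from any reference software), the following C++ fragment walks a CU quad-tree from the LCU down to the SCU; readSplitFlag and onLeafCU are hypothetical callbacks standing in for entropy decoding and per-CU processing.

```cpp
#include <cstdio>
#include <functional>

// A CU quad-tree walker: a 2N x 2N CU whose split_flag is 1 is divided into four
// N x N CUs one level below; splitting stops at the SCU size or when split_flag is 0.
void parseCU(int x, int y, int size, int minCuSize,
             const std::function<bool(int)>& readSplitFlag,     // stand-in for entropy decoding
             const std::function<void(int, int, int)>& onLeafCU) {
    bool split = (size > minCuSize) && readSplitFlag(size);
    if (!split) { onLeafCU(x, y, size); return; }               // leaf CU: later divided into PUs/TUs
    int half = size / 2;
    parseCU(x,        y,        half, minCuSize, readSplitFlag, onLeafCU);
    parseCU(x + half, y,        half, minCuSize, readSplitFlag, onLeafCU);
    parseCU(x,        y + half, half, minCuSize, readSplitFlag, onLeafCU);
    parseCU(x + half, y + half, half, minCuSize, readSplitFlag, onLeafCU);
}

int main() {
    // Toy example: split every CU larger than 32, starting from a 64x64 LCU with an 8x8 SCU.
    parseCU(0, 0, 64, 8,
            [](int size) { return size > 32; },
            [](int x, int y, int size) { std::printf("leaf CU at (%d,%d), size %d\n", x, y, size); });
    return 0;
}
```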
- Furthermore, a CU is divided into prediction units (PUs (Prediction Units)), which are regions (partial areas of the image in picture units) serving as processing units of intra or inter prediction, and into transform units (TUs (Transform Units)), which are regions (partial areas of the image in picture units) serving as processing units of orthogonal transform.
- PU (Prediction Unit)
- TU (Transform Unit)
- In the case of the HEVC method, it can be considered that a macroblock in the AVC method corresponds to the LCU and a block (sub-block) corresponds to the CU. In addition, a motion compensation block in the AVC method can be considered to correspond to a PU.
- However, since the CU has a hierarchical structure, the size of the LCU at the highest level is generally set larger than a macroblock of the AVC method, for example, 128 × 128 pixels. Therefore, in the following, the LCU is assumed to also include the macroblock of the AVC method, and the CU is assumed to also include the block (sub-block) of the AVC method.
- “block” used in the following description indicates an arbitrary partial area in the picture, and its size, shape, characteristics, and the like are not limited. That is, the “block” includes an arbitrary area (processing unit) such as a TU, PU, SCU, CU, LCU, sub-block, macroblock, or slice. Of course, other partial areas (processing units) are also included. When it is necessary to limit the size, processing unit, etc., it will be described as appropriate.
- CTU Coding Tree Unit
- CTB Coding Tree Block
- CU (Coding Unit)
- CB (Coding Block)
- JM (Joint Model), the reference software of the AVC method
- In JM, it is possible to select the two mode determination methods described below: High Complexity Mode and Low Complexity Mode.
- In either mode, a cost function value is calculated for each prediction mode, and the prediction mode that minimizes the cost function value is selected as the optimum mode for the block or macroblock.
- The cost function in High Complexity Mode is expressed as Cost(Mode ∈ Ω) = D + λ × R. Here, Ω is the entire set of candidate modes for encoding the block or macroblock, D is the difference energy between the decoded image and the input image when encoding is performed in the prediction mode, λ is a Lagrange undetermined multiplier given as a function of the quantization parameter, and R is the total code amount when encoding is performed in this mode, including the orthogonal transform coefficients.
- The cost function in Low Complexity Mode is expressed as Cost(Mode ∈ Ω) = D + QP2Quant(QP) × HeaderBit. Here, D is the difference energy between the predicted image and the input image, unlike the case of High Complexity Mode; QP2Quant(QP) is given as a function of the quantization parameter QP; and HeaderBit is the code amount related to information belonging to the header, such as the motion vector and the mode, which does not include the orthogonal transform coefficients.
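- The following is a small C++ sketch of such a cost-based mode decision, under assumed helper formulas: lagrangeLambda and qp2Quant are illustrative stand-ins rather than the exact functions used by JM, and ModeCandidate simply bundles the quantities D, R, and HeaderBit described above.

```cpp
#include <cmath>
#include <limits>
#include <vector>

// One candidate prediction mode with the quantities defined in the text:
// dHigh     : difference energy between decoded image and input image (High Complexity Mode)
// dLow      : difference energy between predicted image and input image (Low Complexity Mode)
// rateBits  : R, total code amount including orthogonal transform coefficients
// headerBit : HeaderBit, code amount of header information (motion vector, mode), no transform coefficients
struct ModeCandidate {
    double dHigh;
    double dLow;
    double rateBits;
    double headerBit;
};

double lagrangeLambda(int qp) { return 0.85 * std::pow(2.0, (qp - 12) / 3.0); }  // illustrative lambda(QP)
double qp2Quant(int qp)       { return std::pow(2.0, (qp - 12) / 6.0); }          // illustrative stand-in for QP2Quant(QP)

// High Complexity Mode:  Cost(Mode) = D + lambda * R
// Low Complexity Mode:   Cost(Mode) = D + QP2Quant(QP) * HeaderBit
int selectBestMode(const std::vector<ModeCandidate>& modes, int qp, bool highComplexity) {
    int best = -1;
    double bestCost = std::numeric_limits<double>::max();
    for (size_t i = 0; i < modes.size(); ++i) {
        double cost = highComplexity
            ? modes[i].dHigh + lagrangeLambda(qp) * modes[i].rateBits
            : modes[i].dLow  + qp2Quant(qp) * modes[i].headerBit;
        if (cost < bestCost) { bestCost = cost; best = static_cast<int>(i); }
    }
    return best;  // index of the prediction mode with the minimum cost function value
}
```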
- Scalable encoding is a scheme in which an image is divided into a plurality of layers (hierarchical) and encoded for each layer.
- one image is divided into a plurality of images (layers) based on a predetermined parameter.
- each layer is composed of difference data so that redundancy is reduced.
- For example, when an image is hierarchized into two layers, a base layer and an enhancement layer, an image of lower quality than the original image is obtained from the base layer data alone, and the original image (that is, a high quality image) is obtained by combining the base layer data and the enhancement layer data.
- For example, for a terminal with low processing capability, such as a mobile phone, image compression information of only the base layer is transmitted, and a moving image with low spatio-temporal resolution or low image quality is reproduced; for a terminal with high processing capability, image compression information of the enhancement layer is also transmitted, and a moving image with high spatio-temporal resolution and high image quality is reproduced. In this way, image compression information according to the capabilities of the terminal and the network can be transmitted from the server without performing transcoding processing.
- As a parameter having such scalability, there is, for example, spatial resolution as shown in FIG. 2 (spatial scalability).
- In the case of this spatial scalability, the resolution is different for each layer. That is, as shown in FIG. 2, each picture is hierarchized into two layers: a base layer with a spatially lower resolution than the original image, and an enhancement layer which, when combined with the base layer image, yields the original image (original spatial resolution).
- Of course, this number of layers is an example, and the image can be hierarchized into an arbitrary number of layers.
- As another parameter providing such scalability, there is, for example, temporal resolution as shown in FIG. 3 (temporal scalability).
- In the case of this temporal scalability, the frame rate is different for each layer. That is, in this case, as shown in FIG. 3, the image is hierarchized into layers with different frame rates; a moving image with a higher frame rate is obtained by adding a high-frame-rate layer to a low-frame-rate layer, and the original moving image (original frame rate) is obtained by adding all the layers.
- This number of layers is an example, and the image can be hierarchized into an arbitrary number of layers.
- As another parameter providing such scalability, there is, for example, a signal-to-noise ratio (SNR scalability). In the case of this SNR scalability, the SNR is different for each layer. That is, each picture is hierarchized into two layers: a base layer with a lower SNR than the original image, and an enhancement layer which, when combined with the base layer image, yields the original image (original SNR).
- That is, in the base layer image compression information, information related to a low-PSNR image is transmitted, and by adding the enhancement layer image compression information to this, it is possible to reconstruct a high-SNR image.
- Of course, this number of layers is an example, and the image can be hierarchized into an arbitrary number of layers.
- the parameters for providing scalability may be other than the examples described above.
- For example, there is bit-depth scalability, in which the base layer consists of an 8-bit image and an image with a higher bit depth is obtained by adding the enhancement layer to it.
- There is also chroma scalability, in which the base layer consists of component images in 4:2:0 format and component images in a higher chroma format are obtained by adding the enhancement layer to it.
- Further, there is view scalability, in which the image is hierarchized into layers of different views (viewpoints).
- The layers described in the present embodiment include the spatial, temporal, SNR, bit-depth, color, and view scalabilities of the scalable coding described above.
- Note that the term "layer" used in this specification includes each layer of the above-described scalable (hierarchical) coding and each view when multi-view images of a plurality of viewpoints are considered.
- the term “layer” used in this specification includes a main (corresponding to sub) layer and a sublayer.
- the main layer is a spatial scalability layer and the sublayer is composed of a temporal scalability layer.
- the hierarchy is described as a layer as appropriate.
- Non-Patent Document 2 defines the scalable extension in HEVC.
- layer_id is specified in NAL_unit_header as shown in FIG. 5
- VPS Video_Parameter_Set
- FIG. 5 is a diagram illustrating an example of NAL_unit_header syntax. The numbers on the left are provided for convenience of explanation. In the example of FIG. 5, nuh_layer_id for specifying the layer id is described in the fourth line.
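- For orientation, the HEVC nal_unit_header() is two bytes, and nuh_layer_id is a 6-bit field inside it; the sketch below shows one way to read it (the example bytes in main() are arbitrary).

```cpp
#include <cstdint>
#include <cstdio>

// Fields of the two-byte HEVC nal_unit_header(); nuh_layer_id identifies the layer.
struct NalUnitHeader {
    unsigned forbiddenZeroBit;    // 1 bit, must be 0
    unsigned nalUnitType;         // 6 bits
    unsigned nuhLayerId;          // 6 bits, the layer id discussed in the text
    unsigned nuhTemporalIdPlus1;  // 3 bits
};

NalUnitHeader parseNalUnitHeader(const uint8_t* p) {
    NalUnitHeader h;
    h.forbiddenZeroBit   = (p[0] >> 7) & 0x1;
    h.nalUnitType        = (p[0] >> 1) & 0x3F;
    h.nuhLayerId         = ((p[0] & 0x1) << 5) | ((p[1] >> 3) & 0x1F);
    h.nuhTemporalIdPlus1 = p[1] & 0x7;
    return h;
}

int main() {
    const uint8_t bytes[2] = { 0x40, 0x09 };  // arbitrary example header bytes
    NalUnitHeader h = parseNalUnitHeader(bytes);
    std::printf("nal_unit_type=%u nuh_layer_id=%u temporal_id=%u\n",
                h.nalUnitType, h.nuhLayerId, h.nuhTemporalIdPlus1 - 1);
    return 0;
}
```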
- FIG. 6 is a diagram illustrating an example of VPS syntax. The numbers on the left are provided for convenience of explanation.
- vps_max_layers_minus1 for specifying the maximum number of layers included in the bitstream is described in the fourth line.
- vps_extension_offset is described in the seventh line.
- the number of layer sets is described as vps_num_layer_sets_minus1.
- layer_id_included_flag for defining a layer set is described.
- Further, information about vps_extension is described.
- In VPS_extension, information regarding whether or not there is a direct dependency relationship between layers is transmitted by direct_dependency_flag.
- FIGS. 7 and 8 are diagrams showing examples of VPS_extension syntax. The numbers on the left are provided for convenience of explanation. In the example of FIGS. 7 and 8, the 23rd to 25th lines describe direct_dependency_flag that is information regarding whether or not there is a direct dependency between the above-described hierarchies.
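- A rough sketch of how a decoder could collect the dependencies expressed by direct_dependency_flag is shown below; readFlag is a hypothetical u(1) reader standing in for the actual parser, and the loop structure assumes one flag per pair (i, j) with j < i, which is one common way such per-pair dependency flags are organized.

```cpp
#include <functional>
#include <vector>

// Collect direct dependencies: for each layer i, direct_dependency_flag[i][j] indicates
// whether lower layer j is a direct reference layer of layer i.
std::vector<std::vector<int>> readDirectDependencies(int maxLayersMinus1,
                                                     const std::function<int()>& readFlag) {
    std::vector<std::vector<int>> directRefLayers(maxLayersMinus1 + 1);
    for (int i = 1; i <= maxLayersMinus1; ++i)
        for (int j = 0; j < i; ++j)
            if (readFlag())                        // direct_dependency_flag[i][j]
                directRefLayers[i].push_back(j);   // layer j can be directly referenced by layer i
    return directRefLayers;
}
```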
- However, in Non-Patent Document 2, up to 63 layers can be set at maximum. That is, it does not support an application including 64 or more layers, such as a super multi-view image.
- Also, in Non-Patent Document 3, the following skip picture is proposed. That is, when scalable coding processing is performed, if a skip picture is specified in the enhancement layer, the upsampled image of the base layer is output as it is, and no decoding processing is performed for that picture.
- Thereby, in the enhancement layer, when the CPU load is high, the amount of computation can be reduced to enable real-time operation, or, when a buffer overflow is likely to occur, the overflow can be avoided by not transmitting information for that picture.
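- The skip-picture behaviour described above can be summarized by the following control-flow sketch (illustrative only, not the normative decoding process); Picture, upsample, and decodeNormally are placeholders supplied by the caller.

```cpp
#include <functional>
#include <vector>

struct Picture { int width = 0, height = 0; std::vector<unsigned char> luma; };

// If the current enhancement-layer picture is signalled as a skip picture, the upsampled
// image of the reference layer is output as it is and no decoding is performed for that
// picture; otherwise the picture is decoded normally, with the upsampled image available
// as one of the reference pictures.
Picture decodeEnhancementPicture(bool isSkipPicture,
                                 const Picture& referenceLayerDecoded,
                                 const std::function<Picture(const Picture&)>& upsample,
                                 const std::function<Picture(const Picture&)>& decodeNormally) {
    Picture upsampledRef = upsample(referenceLayerDecoded);
    if (isSkipPicture)
        return upsampledRef;                 // output the upsampled reference image unchanged
    return decodeNormally(upsampledRef);     // normal decoding using it as a reference picture
}
```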
- FIG. 9 is a block diagram illustrating a main configuration example of a scalable encoding device.
- the scalable encoding device 100 shown in FIG. 9 is an image information processing device that performs scalable encoding of image data, and encodes each layer of image data layered into a base layer and an enhancement layer.
- the parameter used as a standard for this hierarchization is arbitrary.
- the scalable encoding device 100 includes a common information generation unit 101, an encoding control unit 102, a base layer image encoding unit 103, an enhancement layer image encoding unit 104-1, and an enhancement layer image encoding unit 104-2.
- enhancement layer image encoding units 104-1 and 104-2 are collectively referred to as enhancement layer image encoding unit 104 unless it is necessary to distinguish between them.
- In FIG. 9, an example with two enhancement layer image encoding units 104 is shown, but a larger number of units may be provided.
- the common information generation unit 101 acquires information relating to encoding of image data that is stored in, for example, a NAL unit. In addition, the common information generation unit 101 acquires necessary information from the base layer image encoding unit 103, the enhancement layer image encoding unit 104, and the like as necessary. The common information generation unit 101 generates common information that is information regarding all layers based on the information. The common information includes, for example, a video parameter set. The common information generation unit 101 outputs the generated common information to the outside of the scalable encoding device 100, for example, as a NAL unit. Note that the common information generation unit 101 also supplies the generated common information to the encoding control unit 102. Furthermore, the common information generation unit 101 supplies part or all of the generated common information to the base layer image encoding unit 103 and the enhancement layer image encoding unit 104 as necessary.
- The encoding control unit 102 controls the encoding of each layer by controlling the base layer image encoding unit 103 and the enhancement layer image encoding unit 104 based on the common information supplied from the common information generation unit 101.
- the base layer image encoding unit 103 acquires base layer image information (base layer image information).
- the base layer image encoding unit 103 encodes the base layer image information without using information of other layers, and generates and outputs base layer encoded data (base layer encoded data).
- the enhancement layer image encoding unit 104 acquires enhancement layer image information (enhancement layer image), and encodes the enhancement layer image information.
- enhancement layer image information enhancement layer image
- the enhancement layer is divided into a current layer being processed and a reference layer referenced in the current layer.
- The enhancement layer image encoding unit 104 acquires image information of the current layer (current layer image information) and encodes the current layer image information while referring to another layer (the base layer or a preceding enhancement layer) as the reference layer, as necessary.
- At that time, the enhancement layer image encoding unit 104 sets inter-layer information, which is information necessary for performing processing between layers, that is, for using a decoded image of another layer as a reference picture: inter-layer information indicating whether or not a picture is a skip picture, or inter-layer information indicating layer dependency relationships in the case of an image having 64 or more layers.
- the enhancement layer image encoding unit 104 performs motion prediction based on the set inter-layer information, using or prohibiting the skip picture mode, and encodes the inter-layer information. Alternatively, the enhancement layer image encoding unit 104 performs motion prediction based on the set inter-layer information and encodes the inter-layer information.
- At that time, the enhancement layer image encoding unit 104 acquires a decoded image of another enhancement layer (or a decoded image of the base layer), upsamples it, and uses it as one of the reference pictures for motion prediction.
- the enhancement layer image encoding unit 104 generates and outputs enhancement layer encoded data by such encoding.
- FIG. 10 is a block diagram illustrating a main configuration example of the base layer image encoding unit 103 in FIG. 9.
- the base layer image encoding unit 103 includes an A / D conversion unit 111, a screen rearrangement buffer 112, a calculation unit 113, an orthogonal transformation unit 114, a quantization unit 115, a lossless encoding unit 116, The storage buffer 117, the inverse quantization unit 118, and the inverse orthogonal transform unit 119 are included.
- the base layer image encoding unit 103 includes a calculation unit 120, a deblocking filter 121, a frame memory 122, a selection unit 123, an intra prediction unit 124, a motion prediction / compensation unit 125, a predicted image selection unit 126, and a rate control unit. 127. Further, the base layer image encoding unit 103 includes an adaptive offset filter 128 between the deblocking filter 121 and the frame memory 122.
- the A / D conversion unit 111 performs A / D conversion on the input image data (base layer image information), and supplies the converted image data (digital data) to the screen rearrangement buffer 112 for storage.
- The screen rearrangement buffer 112 rearranges the stored frame images from display order into encoding order according to the GOP (Group Of Picture) structure, and supplies the images with the rearranged frame order to the calculation unit 113.
- the screen rearrangement buffer 112 also supplies the image in which the frame order is rearranged to the intra prediction unit 124 and the motion prediction / compensation unit 125.
- the calculation unit 113 subtracts the predicted image supplied from the intra prediction unit 124 or the motion prediction / compensation unit 125 via the predicted image selection unit 126 from the image read from the screen rearrangement buffer 112, and the difference information Is output to the orthogonal transform unit 114. For example, in the case of an image on which intra coding is performed, the calculation unit 113 subtracts the prediction image supplied from the intra prediction unit 124 from the image read from the screen rearrangement buffer 112. For example, in the case of an image on which inter coding is performed, the calculation unit 113 subtracts the prediction image supplied from the motion prediction / compensation unit 125 from the image read from the screen rearrangement buffer 112.
- the orthogonal transform unit 114 performs orthogonal transform such as discrete cosine transform and Karhunen-Loeve transform on the difference information supplied from the computation unit 113.
- the orthogonal transform unit 114 supplies the transform coefficient to the quantization unit 115.
- the quantization unit 115 quantizes the transform coefficient supplied from the orthogonal transform unit 114.
- the quantization unit 115 sets a quantization parameter based on the information regarding the target value of the code amount supplied from the rate control unit 127, and performs the quantization.
- the quantization unit 115 supplies the quantized transform coefficient to the lossless encoding unit 116.
- the lossless encoding unit 116 encodes the transform coefficient quantized by the quantization unit 115 by an arbitrary encoding method. Since the coefficient data is quantized under the control of the rate control unit 127, the code amount becomes the target value set by the rate control unit 127 (or approximates the target value).
- the lossless encoding unit 116 acquires information indicating the mode of intra prediction from the intra prediction unit 124, and acquires information indicating the mode of inter prediction, differential motion vector information, and the like from the motion prediction / compensation unit 125. Furthermore, the lossless encoding unit 116 appropriately generates a base layer NAL unit including a sequence parameter set (SPS), a picture parameter set (PPS), and the like. Although not shown, the lossless encoding unit 116 supplies information necessary for the enhancement layer image encoding unit 104-1 to set inter-layer information to the enhancement layer image encoding unit 104-1.
- SPS sequence parameter set
- PPS picture parameter set
- The lossless encoding unit 116 encodes these various types of information using an arbitrary encoding method, and makes them part of (multiplexes them into) the encoded data (also referred to as an encoded stream).
- the lossless encoding unit 116 supplies the encoded data obtained by encoding to the accumulation buffer 117 for accumulation.
- Examples of the encoding method of the lossless encoding unit 116 include variable length encoding or arithmetic encoding.
- Examples of variable length coding include CAVLC (Context-Adaptive Variable Length Coding) defined in the H.264/AVC scheme.
- Examples of arithmetic coding include CABAC (Context-Adaptive Binary Arithmetic Coding).
- the accumulation buffer 117 temporarily holds the encoded data (base layer encoded data) supplied from the lossless encoding unit 116.
- the accumulation buffer 117 outputs the stored base layer encoded data to, for example, a recording device (recording medium) (not shown) or a transmission path at a later stage at a predetermined timing. That is, the accumulation buffer 117 is also a transmission unit that transmits encoded data.
- the transform coefficient quantized by the quantization unit 115 is also supplied to the inverse quantization unit 118.
- the inverse quantization unit 118 inversely quantizes the quantized transform coefficient by a method corresponding to the quantization by the quantization unit 115.
- the inverse quantization unit 118 supplies the obtained transform coefficient to the inverse orthogonal transform unit 119.
- the inverse orthogonal transform unit 119 performs inverse orthogonal transform on the transform coefficient supplied from the inverse quantization unit 118 by a method corresponding to the orthogonal transform process by the orthogonal transform unit 114.
- the inversely orthogonal transformed output (restored difference information) is supplied to the calculation unit 120.
- The calculation unit 120 adds the predicted image supplied from the intra prediction unit 124 or the motion prediction/compensation unit 125 via the predicted image selection unit 126 to the restored difference information, which is the inverse orthogonal transform result supplied from the inverse orthogonal transform unit 119, and obtains a locally decoded image (decoded image).
- the decoded image is supplied to the deblocking filter 121 or the frame memory 122.
- the deblocking filter 121 removes block distortion of the reconstructed image by performing deblocking filter processing on the reconstructed image supplied from the calculation unit 120.
- the deblocking filter 121 supplies the filtered image to the adaptive offset filter 128.
- The adaptive offset filter 128 performs adaptive offset filter (SAO: Sample adaptive offset) processing, which mainly removes ringing, on the deblocking filter processing result (the reconstructed image from which block distortion has been removed) from the deblocking filter 121.
- SAO Sample adaptive offset
- the adaptive offset filter 128 determines the type of adaptive offset filter processing for each LCU (Largest Coding Unit) which is the maximum coding unit, and obtains an offset used in the adaptive offset filter processing.
- the adaptive offset filter 128 performs the determined type of adaptive offset filter processing on the image after the adaptive deblocking filter processing, using the obtained offset. Then, the adaptive offset filter 128 supplies the image after the adaptive offset filter processing (hereinafter referred to as a decoded image) to the frame memory 122.
- the deblocking filter 121 and the adaptive offset filter 128 can supply information such as filter coefficients used for the filter processing to the lossless encoding unit 116 and encode them as necessary. Further, an adaptive loop filter may be provided after the adaptive offset filter 128.
- the frame memory 122 stores the reconstructed image supplied from the arithmetic unit 120 and the decoded image supplied from the adaptive offset filter 128, respectively.
- the frame memory 122 supplies the stored reconstructed image to the intra prediction unit 124 via the selection unit 123 at a predetermined timing or based on a request from the outside such as the intra prediction unit 124.
- Further, the frame memory 122 supplies the stored decoded image to the motion prediction/compensation unit 125 via the selection unit 123 at a predetermined timing or based on a request from the outside such as the motion prediction/compensation unit 125.
- the frame memory 122 stores the supplied decoded image, and supplies the stored decoded image as a reference image to the selection unit 123 at a predetermined timing. Note that the base layer decoded image of the frame memory 122 is supplied as a reference picture to the enhancement layer image encoding unit 104-1 and the enhancement layer image encoding unit 104-2 as necessary.
- The selection unit 123 selects the supply destination of the reference image supplied from the frame memory 122. For example, in the case of intra prediction, the selection unit 123 supplies the reference image (pixel values in the current picture) supplied from the frame memory 122 to the intra prediction unit 124. For example, in the case of inter prediction, the selection unit 123 supplies the reference image supplied from the frame memory 122 to the motion prediction/compensation unit 125.
- the intra prediction unit 124 performs intra prediction (intra-screen prediction) that generates a predicted image using a pixel value in a current picture that is a reference image supplied from the frame memory 122 via the selection unit 123.
- the intra prediction unit 124 performs this intra prediction in a plurality of intra prediction modes prepared in advance.
- The intra prediction unit 124 generates predicted images in all candidate intra prediction modes, evaluates the cost function value of each predicted image using the input image supplied from the screen rearrangement buffer 112, and selects the optimum mode. When the optimal intra prediction mode is selected, the intra prediction unit 124 supplies the predicted image generated in the optimal mode to the predicted image selection unit 126.
- the intra prediction unit 124 appropriately supplies the intra prediction mode information indicating the adopted intra prediction mode to the lossless encoding unit 116 and causes the encoding to be performed.
- the motion prediction / compensation unit 125 performs motion prediction (inter prediction) using the input image supplied from the screen rearrangement buffer 112 and the reference image supplied from the frame memory 122 via the selection unit 123.
- the motion prediction / compensation unit 125 performs a motion compensation process according to the detected motion vector, and generates a prediction image (inter prediction image information).
- the motion prediction / compensation unit 125 performs such inter prediction in a plurality of inter prediction modes prepared in advance.
- the motion prediction / compensation unit 125 generates a prediction image in all candidate inter prediction modes.
- the motion prediction / compensation unit 125 evaluates the cost function value of each predicted image using the input image supplied from the screen rearrangement buffer 112 and information on the generated differential motion vector, and selects an optimal mode. .
- the motion prediction / compensation unit 125 supplies the predicted image generated in the optimal mode to the predicted image selection unit 126.
- In addition, the motion prediction/compensation unit 125 supplies information indicating the adopted inter prediction mode, information necessary for performing processing in that inter prediction mode when the encoded data is decoded, and the like to the lossless encoding unit 116, and causes them to be encoded.
- the necessary information includes, for example, information on the generated differential motion vector and a flag indicating an index of the predicted motion vector as predicted motion vector information.
- the predicted image selection unit 126 selects a supply source of a predicted image to be supplied to the calculation unit 113 or the calculation unit 120.
- For example, in the case of intra coding, the predicted image selection unit 126 selects the intra prediction unit 124 as the supply source of the predicted image, and supplies the predicted image supplied from the intra prediction unit 124 to the calculation unit 113 and the calculation unit 120. For example, in the case of inter coding, the predicted image selection unit 126 selects the motion prediction/compensation unit 125 as the supply source of the predicted image, and supplies the predicted image supplied from the motion prediction/compensation unit 125 to the calculation unit 113 and the calculation unit 120.
- the rate control unit 127 controls the quantization operation rate of the quantization unit 115 based on the code amount of the encoded data stored in the storage buffer 117 so that no overflow or underflow occurs.
- FIG. 11 is a block diagram illustrating a main configuration example of the enhancement layer image encoding unit 104-2 in FIG.
- the enhancement layer image encoding unit 104-1 has the same configuration as the enhancement layer image encoding unit 104-2 in FIG. 11, and thus description thereof is omitted.
- enhancement layer image encoding section 104-2 has basically the same configuration as base layer image encoding section 103 of FIG.
- However, each unit of the enhancement layer image encoding unit 104-2 performs processing for encoding the current layer image information in the enhancement layer, not the base layer. That is, the A/D conversion unit 111 of the enhancement layer image encoding unit 104-2 performs A/D conversion on the current layer image information, and the accumulation buffer 117 of the enhancement layer image encoding unit 104-2 outputs the current layer encoded data to, for example, a recording device (recording medium) or a transmission path (neither shown) at a later stage.
- When a further enhancement layer image encoding unit 104-3 is provided, the lossless encoding unit 116 supplies information necessary for the enhancement layer image encoding unit 104-3 to set inter-layer information to the enhancement layer image encoding unit 104-3. In this case, the decoded image in the frame memory 122 is supplied to the enhancement layer image encoding unit 104-3 as a reference picture as necessary.
- the enhancement layer image encoding unit 104-2 has a motion prediction / compensation unit 135 instead of the motion prediction / compensation unit 125. Further, unlike the base layer image encoding unit 103, the enhancement layer image encoding unit 104-2 includes an inter-layer information setting unit 140 and an upsampling unit 141.
- the motion prediction / compensation unit 135 performs motion prediction / compensation according to the inter-layer information set by the inter-layer information setting unit 140. That is, the motion prediction / compensation unit 135 performs basically the same processing as the motion prediction / compensation unit 125 except that the inter-layer information set by the inter-layer information setting unit 140 is referred to.
- The inter-layer information setting unit 140 acquires information on the reference layer from the enhancement layer image encoding unit 104-1 (or the base layer image encoding unit 103) and, based on the acquired information on the reference layer, sets inter-layer information, which is information necessary for processing between the reference layer and the current layer.
- the inter-layer information setting unit 140 supplies the set inter-layer information to the motion prediction / compensation unit 135 and the lossless encoding unit 116.
- the lossless encoding unit 116 appropriately generates a VPS or VPS_extension based on the inter-layer information from the inter-layer information setting unit 140.
- the upsampling unit 141 acquires the reference layer decoded image from the enhancement layer image encoding unit 104-1 as a reference picture, and upsamples the acquired reference picture.
- the upsampling unit 141 stores the upsampled reference picture in the frame memory 122.
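- As an illustration of the data flow through the upsampling unit 141, the following is a deliberately simple nearest-neighbour upsampler; the actual resampling filter used for inter-layer prediction is defined by the codec and is not reproduced here.

```cpp
#include <vector>

struct Plane { int width = 0, height = 0; std::vector<unsigned char> pix; };

// Resample a reference-layer plane to the current layer's resolution before it is stored
// in the frame memory as a reference picture. Nearest-neighbour is used purely for brevity.
Plane upsampleNearest(const Plane& src, int dstW, int dstH) {
    Plane dst;
    dst.width = dstW;
    dst.height = dstH;
    dst.pix.resize(static_cast<size_t>(dstW) * dstH);
    for (int y = 0; y < dstH; ++y) {
        int sy = y * src.height / dstH;                       // nearest source row
        for (int x = 0; x < dstW; ++x) {
            int sx = x * src.width / dstW;                    // nearest source column
            dst.pix[static_cast<size_t>(y) * dstW + x] =
                src.pix[static_cast<size_t>(sy) * src.width + sx];
        }
    }
    return dst;
}
```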
- a skip picture as one of inter-layer information according to the present technology will be described.
- In the figure, a square represents a picture, and a cross in the square indicates that the picture is a skip picture.
- When a picture of Layer 2 is designated as a skip picture, the upsampled image of Layer 1 is used as it is for the output of that picture.
- Here, when the Layer 1 picture that is the reference picture of the Layer 2 picture is also a skip picture, the upsampled image of Layer 0, which is the reference layer of Layer 1, is output as the Layer 2 picture.
- In this case, an image obtained by further upsampling the already upsampled image of Layer 0 is output for the skip picture of Layer 2, so that picture has remarkably low resolution compared with the other pictures of Layer 2. That is, in Layer 2, the difference in resolution between pictures may be observed as image quality degradation.
- Therefore, in the present technology, when the picture of the reference layer is a skip picture, the picture of the current layer is prohibited from becoming a skip picture. However, when the current layer (Layer 2) and its reference layer (Layer 1) are in SNR scalability, the above limitation may not be applied. That is, in the case of SNR scalability, the reference source of a skip picture may itself be a skip picture.
- Similarly, even when the current layer (Layer 2) and the reference layer (Layer 1) are in spatial scalability, if the relationship between the reference layer (Layer 1) and the layer that it further refers to (Layer 0) is SNR scalability, the restriction according to the present technology may not be applied.
- Note that this processing can be applied not only to skip pictures but also to skip modes in general, such as skip slices or skip tiles.
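- Putting the above together, the encoder-side restriction can be sketched as a single decision function (names are illustrative and not normative syntax): skip mode is allowed for the current picture unless its reference-layer picture is itself in skip mode, with the SNR-scalability relaxation noted above.

```cpp
enum class ScalabilityType { Spatial, SNR };

// Returns whether the current picture may be encoded (or decoded) in skip mode, given
// whether its reference-layer picture is in skip mode and the scalability relationship
// between the current layer and its reference layer. A similar relaxation could be applied
// when the reference layer itself relates to its own reference layer by SNR scalability.
bool skipModeAllowedForCurrentPicture(bool referencePictureIsSkip,
                                      ScalabilityType currentToReference) {
    if (!referencePictureIsSkip)
        return true;  // no restriction: the reference-layer picture was decoded normally
    // The reference picture is itself a skip picture: allowing skip here would output an
    // image upsampled two or more times, so skip mode is prohibited for the current picture,
    // unless the two layers are in SNR scalability, where no further upsampling occurs.
    return currentToReference == ScalabilityType::SNR;
}
```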
- the inter-layer information setting unit that implements the above-described technology is configured as follows.
- FIG. 15 is a block diagram illustrating a main configuration example of the inter-layer information setting unit 140 of FIG. 11.
- the inter-layer information setting unit 140 includes a reference layer picture type buffer 151 and a skip picture setting unit 152.
- Information regarding whether or not a picture in the reference layer is a skip picture is supplied from the enhancement layer image encoding unit 104-1 to the reference layer picture type buffer 151. That is, the reference layer picture type buffer 151 acquires information regarding whether or not a picture in the reference layer is a skip picture. The information is further supplied to the skip picture setting unit 152.
- When the reference picture is not a skip picture, the skip picture setting unit 152 performs, as inter-layer information, the setting regarding whether or not the picture in the current layer is a skip picture. Then, the skip picture setting unit 152 supplies the set information to the motion prediction/compensation unit 135 and the lossless encoding unit 116.
- On the other hand, when the reference picture is a skip picture, the skip picture setting unit 152 does not perform the setting regarding whether the picture in the current layer is a skip picture as inter-layer information. That is, the picture in the current layer is prohibited from becoming a skip picture.
- the motion prediction / compensation unit 135 performs a motion prediction / compensation process based on the information from the skip picture setting unit 152 regarding whether the picture in the layer is a skip picture.
- the lossless encoding unit 116 encodes information on whether a picture in the layer is a skip picture for transmission to the decoding side as information indicating an inter prediction mode.
- step S101 the encoding control unit 102 of the scalable encoding device 100 sets the first layer as a processing target.
- In step S102, the encoding control unit 102 determines whether or not the current layer to be processed is the base layer. If it is determined that the current layer is the base layer, the process proceeds to step S103; if it is determined that the current layer is an enhancement layer, the process proceeds to step S104.
- step S103 the base layer image encoding unit 103 performs base layer encoding processing.
- When the process of step S103 ends, the process proceeds to step S106.
- step S104 the encoding control unit 102 determines a reference layer corresponding to the current layer (that is, a reference destination).
- the reference layer may be a base layer.
- step S105 the enhancement layer image encoding unit 104-1 or the enhancement layer image encoding unit 104-2 performs a current layer encoding process.
- When the process of step S105 ends, the process proceeds to step S106.
- step S106 the encoding control unit 102 determines whether all layers have been processed. If it is determined that there is an unprocessed layer, the process proceeds to step S107.
- step S107 the encoding control unit 102 sets the next unprocessed layer as a processing target (current layer).
- the process of step S107 ends, the process returns to step S102.
- the processing from step S102 to step S107 is repeatedly executed, and each layer is encoded.
- If it is determined in step S106 that all layers have been processed, the encoding process ends.
- step S121 the A / D conversion unit 111 of the base layer image encoding unit 103 A / D converts the input base layer image information (image data).
- step S122 the screen rearrangement buffer 112 stores the A / D converted base layer image information (digital data), and rearranges the pictures from the display order to the encoding order.
- step S123 the intra prediction unit 124 performs an intra prediction process in the intra prediction mode.
- step S124 the motion prediction / compensation unit 125 performs a motion prediction / compensation process for performing motion prediction or motion compensation in the inter prediction mode.
- In step S125, the predicted image selection unit 126 determines the optimum mode based on the cost function values output from the intra prediction unit 124 and the motion prediction/compensation unit 125. That is, the predicted image selection unit 126 selects either the predicted image generated by the intra prediction unit 124 or the predicted image generated by the motion prediction/compensation unit 125.
- step S126 the calculation unit 113 calculates the difference between the image rearranged by the process of step S122 and the predicted image selected by the process of step S125. The data amount of the difference data is reduced compared to the original image data. Therefore, the data amount can be compressed as compared with the case where the image is encoded as it is.
- step S127 the orthogonal transform unit 114 performs an orthogonal transform process on the difference information generated by the process in step S126.
- In step S128, the quantization unit 115 quantizes the orthogonal transform coefficient obtained by the process of step S127, using the quantization parameter calculated by the rate control unit 127.
- the difference information quantized by the process of step S128 is locally decoded as follows. That is, in step S129, the inverse quantization unit 118 inversely quantizes the quantized coefficient (also referred to as a quantization coefficient) generated by the process in step S128 with characteristics corresponding to the characteristics of the quantization unit 115. . In step S130, the inverse orthogonal transform unit 119 performs inverse orthogonal transform on the orthogonal transform coefficient obtained by the process of step S127. In step S131, the calculation unit 120 adds the predicted image to the locally decoded difference information, and generates a locally decoded image (an image corresponding to the input to the calculation unit 113).
- step S132 the deblocking filter 121 performs deblocking filter processing on the image generated by the processing in step S131. Thereby, block distortion and the like are removed.
- In step S133, the adaptive offset filter 128 performs adaptive offset filter processing, which mainly removes ringing, on the deblocking filter processing result from the deblocking filter 121.
- step S134 the frame memory 122 stores the image from which ringing has been removed by the process of step S133. Note that an image that has not been filtered by the deblocking filter 121 and the adaptive offset filter 128 is also supplied to the frame memory 122 from the computing unit 120 and stored therein. The image stored in the frame memory 122 is used for the processing in step S123 and the processing in step S124, and is also supplied to the enhancement layer image encoding unit 104-1.
- step S135 the lossless encoding unit 116 of the base layer image encoding unit 103 encodes the coefficient quantized by the process of step S128. That is, lossless encoding such as variable length encoding or arithmetic encoding is performed on the data corresponding to the difference image.
- At this time, the lossless encoding unit 116 encodes information regarding the prediction mode of the predicted image selected by the process of step S125, and adds it to the encoded data obtained by encoding the difference image. That is, the lossless encoding unit 116 encodes the optimal intra prediction mode information supplied from the intra prediction unit 124, or the information corresponding to the optimal inter prediction mode supplied from the motion prediction/compensation unit 125, and appends it to the encoded data.
- Further, the lossless encoding unit 116 supplies information necessary for the enhancement layer image encoding unit 104-1 to set inter-layer information (information on whether a picture in the layer is a skip picture, information on dependency relationships between layers, and the like) to the enhancement layer image encoding unit 104-1.
- step S136 the accumulation buffer 117 accumulates the base layer encoded data obtained by the process in step S135.
- the base layer encoded data stored in the storage buffer 117 is appropriately read and transmitted to the decoding side via a transmission path or a recording medium.
- step S137 the rate control unit 127 controls the rate of the quantization operation of the quantization unit 115, based on the code amount (generated code amount) of the encoded data accumulated in the accumulation buffer 117 in step S136, so that overflow or underflow does not occur.
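- As a rough illustration of the rate control described in step S137, the sketch below adjusts a quantization parameter from the fullness of a virtual buffer so that it neither overflows nor underflows. The target bit budget, buffer size, adjustment gain, and QP range are assumptions, not values from the embodiment.

```python
class SimpleRateControl:
    """Toy rate control: nudge QP from the fullness of a virtual buffer."""

    def __init__(self, target_bits_per_picture: float, buffer_size: float):
        self.target = target_bits_per_picture
        self.buffer_size = buffer_size
        self.fullness = buffer_size / 2.0   # start half full
        self.qp = 26                        # assumed initial quantization parameter

    def update(self, generated_bits: float) -> int:
        # Bits produced by the last picture enter the buffer; the per-picture
        # target drains out. Clamp so the model neither overflows nor underflows.
        self.fullness += generated_bits - self.target
        self.fullness = min(max(self.fullness, 0.0), self.buffer_size)
        # Coarser quantization when the buffer runs full, finer when it runs empty.
        deviation = (self.fullness - self.buffer_size / 2.0) / self.buffer_size
        self.qp = int(min(max(self.qp + round(6 * deviation), 0), 51))
        return self.qp

rc = SimpleRateControl(target_bits_per_picture=50_000, buffer_size=400_000)
print(rc.update(generated_bits=250_000))   # picture was too big -> QP rises
```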
- the base layer encoding process is completed, and the process returns to FIG.
- the base layer encoding process is executed in units of pictures, for example. That is, the base layer encoding process is executed for each picture in the current layer. However, each process in the base layer encoding process is performed for each processing unit.
- Steps S151 to S153 of the enhancement layer encoding process and steps S155 to S168 are performed in the same manner as the processes of steps S121 to S137 of the base layer encoding process of FIG. However, each process of the enhancement layer encoding process is performed on the enhancement layer image information by each processing unit of the enhancement layer image encoding unit 104.
- the inter-layer information setting unit 140 of the enhancement layer image encoding unit 104 sets inter-layer information, which is information necessary for processing between the reference layer and the current layer, based on the information about the reference layer. Details of the inter-layer information setting process will be described later with reference to FIG.
- the enhancement layer encoding process is terminated, and the process returns to FIG.
- the enhancement layer encoding process is executed in units of pictures, for example. That is, the enhancement layer encoding process is executed for each picture in the current layer. However, each process in the enhancement layer encoding process is performed for each processing unit.
- Information regarding whether or not a picture in the reference layer is a skip picture is supplied from the enhancement layer image encoding unit 104-1 to the reference layer picture type buffer 151. The information is further supplied to the skip picture setting unit 152.
- step S171 the skip picture setting unit 152 refers to the information from the reference layer picture type buffer 151 and determines whether or not the reference picture is a skip picture. If it is determined in step S171 that the reference picture is a skip picture, step S172 is skipped, the inter-layer information setting process is terminated, and the process returns to FIG.
- If it is determined in step S171 that the reference picture is not a skip picture, the process proceeds to step S172. In step S172, the skip picture setting unit 152 performs setting regarding whether or not the picture of the current layer is a skip picture. Then, the skip picture setting unit 152 supplies the information to the motion prediction / compensation unit 135 and the lossless encoding unit 116. Thereafter, the inter-layer information setting process is terminated, and the process returns to FIG.
- In step S155 of FIG. 18, the motion prediction / compensation unit 135 performs motion prediction / compensation processing based on the information from the skip picture setting unit 152 regarding whether or not the picture of the current layer is a skip picture.
- step S166 of FIG. 18 the lossless encoding unit 116 encodes information regarding whether or not a picture in the layer is a skip picture for transmission to the decoding side as information indicating an inter prediction mode.
- As described above, in the scalable encoding device of the present technology, when the picture of the reference layer is a skip picture, the picture of the current layer is prohibited from becoming a skip picture. Therefore, degradation of the image quality of the current picture to be output can be suppressed.
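- A minimal sketch of the constraint in steps S171 and S172, under assumed data structures: the current-layer picture may be signalled as a skip picture only when the co-located reference-layer picture is not itself a skip picture.

```python
def set_skip_picture_flag(reference_is_skip: bool, want_skip: bool) -> bool:
    """Return the skip-picture flag actually set for the current-layer picture."""
    if reference_is_skip:
        # Step S172 is skipped: the current-layer picture is prohibited
        # from becoming a skip picture.
        return False
    # Step S172: the encoder is free to make this picture a skip picture.
    return want_skip

assert set_skip_picture_flag(reference_is_skip=True, want_skip=True) is False
assert set_skip_picture_flag(reference_is_skip=False, want_skip=True) is True
```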
- FIGS. 20 and 21 are diagrams illustrating examples of VPS_extension syntax according to the present technology. The numbers on the left are provided for convenience of explanation.
- layer_extension_factor is encoded as layer_extension_factor_minus1, a value obtained by subtracting 1 from layer_extension_factor. Further, in the present technology, as illustrated in FIGS. 20 and 21, a layer set is newly defined by VPS_extension for the number of layers expanded by layer_extension_factor. That is, when the value of layer_extension_factor_minus1 is other than 0, information on the layer set is set in VPS_extension.
- scalable coding processing including 64 or more layers can be performed.
- layer_extension_flag may be set, and the syntax element layer_extension_factor_minus1 may be set in VPS_extension only when the value of layer_extension_flag is 1.
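- The sketch below illustrates one possible way of writing this syntax: layer_extension_flag is placed in the VPS, and layer_extension_factor_minus1 is written in VPS_extension only when the flag is 1. Treating layer_extension_factor as the number of 64-layer units, and all bit-level details, are assumptions for illustration; the actual syntax is the one shown in FIGS. 20 and 21.

```python
BASE_MAX_LAYERS = 64   # assumed size of the un-extended layer space

def write_vps_layer_extension(num_layers: int) -> dict:
    vps = {"layer_extension_flag": 1 if num_layers >= BASE_MAX_LAYERS else 0}
    if vps["layer_extension_flag"]:
        # Assumption: each unit of layer_extension_factor adds another
        # 64-layer block; the value is signalled minus 1.
        factor = -(-num_layers // BASE_MAX_LAYERS)   # ceiling division
        vps["vps_extension"] = {"layer_extension_factor_minus1": factor - 1}
    return vps

print(write_vps_layer_extension(48))    # {'layer_extension_flag': 0}
print(write_vps_layer_extension(130))   # layer_extension_factor_minus1 == 2
```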
- the inter-layer information setting unit that implements the above-described technology is configured as follows.
- FIG. 22 is a block diagram illustrating a main configuration example of the inter-layer information setting unit 140 of FIG.
- the inter-layer information setting unit 140 includes a layer dependency buffer 181 and an extended layer setting unit 182.
- Information related to the dependency relationship in the reference layer is supplied from the enhancement layer image encoding unit 104-1 to the layer dependency buffer 181. That is, the layer dependency buffer 181 acquires information regarding the dependency relationship in the reference layer. The information is further supplied to the extended layer setting unit 182.
- the extended layer setting unit 182 performs the setting related to the extended layers based on the method according to the present technology described above with reference to FIGS. That is, when there are 64 or more layers, the extended layer setting unit 182 sets layer_extension_flag to 1 in the VPS and sets information related to the extended layers in VPS_extension. On the other hand, when there are not 64 or more layers, the extended layer setting unit 182 sets layer_extension_flag to 0 in the VPS and does not perform setting in VPS_extension. Then, the extended layer setting unit 182 supplies information on the set extended layers to the motion prediction / compensation unit 135 and the lossless encoding unit 116.
- the motion prediction / compensation unit 135 performs motion prediction / compensation processing based on the information related to the extended layers from the extended layer setting unit 182.
- the lossless encoding unit 116 generates and encodes VPS and VPS_extension to transmit information on the enhancement layer to the decoding side as information indicating an inter prediction mode.
- Information related to the dependency relationship in the reference layer is supplied from the enhancement layer image encoding unit 104-1 to the layer dependency buffer 181.
- the information is further supplied to the extended hierarchy setting unit 182.
- the extended layer setting unit 182 determines whether or not there are 64 or more layers in step S191. If it is determined in step S191 that there are 64 or more layers, the process proceeds to step S192.
- the extended layer setting unit 182 sets layer_extension_flag to 1 in the VPS as shown in FIG. 6 in step S192.
- the extended layer setting unit 182 sets information related to the extended layers in VPS_extension. Then, the extended layer setting unit 182 supplies the information to the motion prediction / compensation unit 135 and the lossless encoding unit 116. Thereafter, the inter-layer information setting process is terminated, and the process returns to FIG.
- On the other hand, if it is determined in step S191 that there are not 64 or more layers, the process proceeds to step S194.
- In step S194, the extended layer setting unit 182 sets layer_extension_flag to 0 in the VPS as shown in FIG. 6. Then, the extended layer setting unit 182 supplies the information to the motion prediction / compensation unit 135 and the lossless encoding unit 116. Thereafter, the inter-layer information setting process is terminated, and the process returns to FIG.
- the motion prediction / compensation unit 135 performs motion prediction / compensation processing based on the information related to the extended layers from the extended layer setting unit 182 in step S155 of FIG.
- the lossless encoding unit 116 encodes the information related to the extended layers from the extended layer setting unit 182 for transmission to the decoding side as information indicating the inter prediction mode.
- In the scalable coding of the present technology, it is possible to define 64 or more layers by setting the VPS and VPS_extension, so scalable coding processing including 64 or more layers can be performed.
- FIG. 24 is a block diagram illustrating a main configuration example of a scalable decoding device corresponding to the scalable encoding device 100 of FIG.
- the scalable decoding device 200 illustrated in FIG. 24 performs scalable decoding on encoded data obtained by scalable encoding of image data by the scalable encoding device 100, for example, by a method corresponding to the encoding method.
- scalable decoding apparatus 200 includes common information acquisition section 201, decoding control section 202, base layer image decoding section 203, enhancement layer image decoding section 204-1 and enhancement layer image decoding section 204-2.
- the enhancement layer image decoding units 204-1 and 204-2 are collectively referred to as an enhancement layer image decoding unit 204 unless it is necessary to distinguish between them.
- an example of two enhancement layer image decoding units 204 is shown.
- the common information acquisition unit 201 acquires common information (for example, a video parameter set (VPS)) transmitted from the encoding side.
- the common information acquisition unit 201 extracts information related to decoding from the acquired common information and supplies it to the decoding control unit 202. Further, the common information acquisition unit 201 appropriately supplies part or all of the common information to the base layer image decoding unit 203 to the enhancement layer image decoding unit 204-2.
- the decoding control unit 202 acquires information related to decoding supplied from the common information acquisition unit 201, and controls the base layer image decoding unit 203 to the enhancement layer image decoding unit 204-2 based on the information, thereby controlling the decoding of each layer.
- the base layer image decoding unit 203 is an image decoding unit corresponding to the base layer image encoding unit 103, and acquires, for example, base layer encoded data obtained by encoding base layer image information by the base layer image encoding unit 103.
- the base layer image decoding unit 203 decodes the base layer encoded data without using the information of other layers, reconstructs the base layer image information, and outputs it.
- the enhancement layer image decoding unit 204 is an image decoding unit corresponding to the enhancement layer image encoding unit 104, and acquires, for example, enhancement layer encoded data obtained by encoding enhancement layer image information by the enhancement layer image encoding unit 104.
- the enhancement layer image decoding unit 204 decodes the enhancement layer encoded data.
- the enhancement layer image decoding unit 204 receives the inter-layer information transmitted from the encoding side and performs a decoding process.
- the inter-layer information is information necessary for performing processing between layers, that is, inter-layer information indicating whether or not the picture is a skip picture, or, when there are 64 or more layers, inter-layer information indicating the layer dependency relationship.
- the enhancement layer image decoding unit 204 performs motion compensation using such received inter-layer information, generates a predicted image, reconstructs enhancement layer image information using the predicted image, and outputs the reconstructed enhancement layer image information.
- When decoding enhancement layer image information, the enhancement layer image decoding unit 204 obtains a decoded image of another enhancement layer (or a base layer decoded image), up-samples it, and uses it as one of the reference pictures for motion prediction.
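- A minimal sketch of the upsampling path just described: the decoded image of the reference layer is enlarged to the enhancement-layer resolution and then treated as one more reference picture for motion prediction. Real inter-layer resampling uses dedicated interpolation filters; the nearest-neighbour enlargement below is only an illustrative stand-in.

```python
import numpy as np

def upsample_reference(ref_picture: np.ndarray, scale: int = 2) -> np.ndarray:
    """Enlarge a reference-layer picture by an integer factor (nearest neighbour)."""
    return np.repeat(np.repeat(ref_picture, scale, axis=0), scale, axis=1)

reference_list = []                                     # enhancement-layer reference pictures (sketch)
base_decoded = np.random.randint(0, 256, (90, 160), dtype=np.uint8)
reference_list.append(upsample_reference(base_decoded, scale=2))
print(reference_list[0].shape)                          # (180, 320)
```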
- FIG. 25 is a block diagram illustrating a main configuration example of the base layer image decoding unit 203 of FIG.
- the base layer image decoding unit 203 includes an accumulation buffer 211, a lossless decoding unit 212, an inverse quantization unit 213, an inverse orthogonal transform unit 214, a calculation unit 215, a deblocking filter 216, a screen rearrangement buffer 217, and a D / A conversion unit 218.
- the base layer image decoding unit 203 includes a frame memory 219, a selection unit 220, an intra prediction unit 221, a motion compensation unit 222, and a selection unit 223.
- the base layer image decoding unit 203 includes an adaptive offset filter 224 between the deblocking filter 216 and the screen rearrangement buffer 217 and the frame memory 219.
- the accumulation buffer 211 also serves as a receiving unit that receives the transmitted base layer encoded data.
- the accumulation buffer 211 receives and accumulates the transmitted base layer encoded data, and supplies the encoded data to the lossless decoding unit 212 at a predetermined timing.
- Information necessary for decoding such as prediction mode information is added to the base layer encoded data.
- the lossless decoding unit 212 decodes the information supplied from the accumulation buffer 211 and encoded by the lossless encoding unit 116 by a method corresponding to the encoding method of the lossless encoding unit 116.
- the lossless decoding unit 212 supplies the quantized coefficient data of the difference image obtained by decoding to the inverse quantization unit 213.
- the lossless decoding unit 212 appropriately extracts and acquires NAL units including a video parameter set (VPS), a sequence parameter set (SPS), a picture parameter set (PPS), and the like included in the base layer encoded data.
- the lossless decoding unit 212 extracts information on the optimal prediction mode from the information, determines based on it whether the intra prediction mode or the inter prediction mode has been selected as the optimal prediction mode, and supplies the information regarding the optimal prediction mode to whichever of the intra prediction unit 221 and the motion compensation unit 222 corresponds to the mode determined to have been selected. That is, for example, when the intra prediction mode is selected as the optimal prediction mode in the base layer image encoding unit 103, the information regarding the optimal prediction mode is supplied to the intra prediction unit 221.
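- A minimal sketch, with hypothetical handler objects, of the dispatch just described: the optimal prediction mode carried in the bitstream decides whether the mode information is handed to the intra prediction unit or to the motion compensation unit.

```python
from dataclasses import dataclass

@dataclass
class PredictionModeInfo:
    is_intra: bool     # True when the optimal mode was an intra prediction mode
    params: dict       # e.g. intra direction, or motion vectors and reference index

def route_prediction_info(info, intra_prediction_unit, motion_compensation_unit):
    """Hand the optimal-mode information to the unit that will actually use it."""
    if info.is_intra:
        intra_prediction_unit.receive(info.params)
    else:
        motion_compensation_unit.receive(info.params)

class _Unit:            # hypothetical stand-in for the prediction units
    def receive(self, params):
        self.params = params

intra, inter = _Unit(), _Unit()
route_prediction_info(PredictionModeInfo(is_intra=False, params={"mv": (3, -1)}), intra, inter)
print(inter.params)     # {'mv': (3, -1)}
```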
- the lossless decoding unit 212 supplies information necessary for the enhancement layer image decoding unit 204-1 to set inter-layer information to the enhancement layer image decoding unit 204-1.
- the lossless decoding unit 212 extracts information necessary for inverse quantization, such as a quantization matrix and a quantization parameter, from a NAL unit or the like, and supplies it to the inverse quantization unit 213.
- the inverse quantization unit 213 inversely quantizes the quantized coefficient data obtained by decoding by the lossless decoding unit 212 using a method corresponding to the quantization method of the quantization unit 115.
- the inverse quantization unit 213 is a processing unit similar to the inverse quantization unit 118. That is, the description of the inverse quantization unit 213 can be applied to the inverse quantization unit 118. However, the data input / output destinations and the like need to be changed appropriately according to the device.
- the inverse quantization unit 213 supplies the obtained coefficient data to the inverse orthogonal transform unit 214.
- the inverse orthogonal transform unit 214 performs inverse orthogonal transform on the coefficient data supplied from the inverse quantization unit 213 using a method corresponding to the orthogonal transform method of the orthogonal transform unit 114.
- the inverse orthogonal transform unit 214 is a processing unit similar to the inverse orthogonal transform unit 119. That is, the description of the inverse orthogonal transform unit 214 can be applied to the inverse orthogonal transform unit 119. However, the data input / output destinations and the like need to be changed appropriately according to the device.
- the inverse orthogonal transform unit 214 obtains decoded residual data corresponding to the residual data before being orthogonally transformed by the orthogonal transform unit 114 by the inverse orthogonal transform process.
- the decoded residual data obtained by the inverse orthogonal transform is supplied to the calculation unit 215.
- a prediction image is supplied to the calculation unit 215 from the intra prediction unit 221 or the motion compensation unit 222 via the selection unit 223.
- the calculating unit 215 adds the decoded residual data and the predicted image, and obtains decoded image data corresponding to the image data before the predicted image is subtracted by the calculating unit 113.
- the arithmetic unit 215 supplies the decoded image data to the deblocking filter 216.
- the deblocking filter 216 removes block distortion of the decoded image by performing a deblocking filter process on the decoded image.
- the deblocking filter 216 supplies the filtered image to the adaptive offset filter 224.
- the adaptive offset filter 224 performs an adaptive offset filter (SAO: Sample adaptive offset) process that mainly removes ringing from the deblocking filter processing result (decoded image from which block distortion has been removed) from the deblocking filter 216. I do.
- the adaptive offset filter 224 receives the type and offset of adaptive offset filter processing for each LCU (Largest Coding Unit), which is the maximum coding unit, from the lossless decoding unit 212 (not shown).
- the adaptive offset filter 224 performs the received type of adaptive offset filter processing on the image after the adaptive deblocking filter processing, using the received offset. Then, the adaptive offset filter 224 supplies the image after the adaptive offset filter processing (hereinafter referred to as a decoded image) to the screen rearrangement buffer 217 and the frame memory 219.
- the decoded image output from the calculation unit 215 can be supplied to the screen rearrangement buffer 217 and the frame memory 219 without passing through the deblocking filter 216 and the adaptive offset filter 224. That is, part or all of the filtering process by the deblocking filter 216 can be omitted.
- An adaptive loop filter may be provided after the adaptive offset filter 224.
- the screen rearrangement buffer 217 rearranges the decoded images. That is, the order of frames rearranged into the encoding order by the screen rearrangement buffer 112 is rearranged back into the original display order.
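- A minimal sketch of the rearrangement just described: decoded pictures arrive in coding order and are released in display order once the next expected display index is available. The display-order index and the buffer interface are assumptions for illustration.

```python
import heapq

class ScreenRearrangementBuffer:
    def __init__(self):
        self._heap = []      # (display_order, picture), kept as a min-heap
        self._next = 0       # next display order to emit

    def push(self, display_order: int, picture) -> list:
        heapq.heappush(self._heap, (display_order, picture))
        out = []
        # Release pictures as long as the next one in display order is present.
        while self._heap and self._heap[0][0] == self._next:
            out.append(heapq.heappop(self._heap)[1])
            self._next += 1
        return out

buf = ScreenRearrangementBuffer()
print(buf.push(0, "I0"))   # ['I0']
print(buf.push(2, "P2"))   # []  - still waiting for picture 1
print(buf.push(1, "B1"))   # ['B1', 'P2']
```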
- the D / A conversion unit 218 performs D / A conversion on the image supplied from the screen rearrangement buffer 217, and outputs and displays the image on a display (not shown).
- the frame memory 219 stores the supplied decoded image, and supplies the stored decoded image as a reference image to the selection unit 220 at a predetermined timing or based on an external request from the intra prediction unit 221, the motion compensation unit 222, or the like. Note that the decoded image in the frame memory 219 is supplied as a reference picture to the enhancement layer image decoding unit 204-1 and the enhancement layer image decoding unit 204-2 as necessary.
- the selection unit 220 selects the supply destination of the reference image supplied from the frame memory 219.
- the selection unit 220 supplies the reference image supplied from the frame memory 219 to the intra prediction unit 221 when decoding an intra-coded image.
- the selection unit 220 supplies the reference image supplied from the frame memory 219 to the motion compensation unit 222 when decoding an inter-encoded image.
- the intra prediction unit 221 is appropriately supplied with information indicating the intra prediction mode obtained by decoding the header information from the lossless decoding unit 212.
- the intra prediction unit 221 performs intra prediction using the reference image acquired from the frame memory 219 in the intra prediction mode used in the intra prediction unit 124, and generates a predicted image.
- the intra prediction unit 221 supplies the generated predicted image to the selection unit 223.
- the motion compensation unit 222 acquires information (optimal prediction mode information, reference image information, etc.) obtained by decoding the header information from the lossless decoding unit 212.
- the motion compensation unit 222 performs motion compensation using the reference image acquired from the frame memory 219 in the inter prediction mode indicated by the optimal prediction mode information acquired from the lossless decoding unit 212, and generates a predicted image.
- the motion compensation unit 222 supplies the generated predicted image to the selection unit 223.
- the selection unit 223 supplies the predicted image from the intra prediction unit 221 or the predicted image from the motion compensation unit 222 to the calculation unit 215.
- the arithmetic unit 215 adds the predicted image generated using the motion vector and the decoded residual data (difference image information) from the inverse orthogonal transform unit 214 to decode the original image.
- FIG. 26 is a block diagram illustrating a main configuration example of the enhancement layer image decoding unit 204-2 in FIG.
- the enhancement layer image decoding unit 204-1 has the same configuration as the enhancement layer image encoding unit 104-2 in FIG.
- the enhancement layer image decoding unit 204-2 has basically the same configuration as the base layer image decoding unit 203 of FIG.
- each unit of the enhancement layer image decoding unit 204-2 performs processing for decoding enhancement layer encoded data, not the base layer. That is, the accumulation buffer 211 of the enhancement layer image decoding unit 204-2 stores the enhancement layer encoded data, and the D / A conversion unit 218 of the enhancement layer image decoding unit 204-2 outputs the enhancement layer image information to, for example, a recording device (recording medium) or a transmission path (not shown) at a subsequent stage.
- In addition, the lossless decoding unit 212 supplies, for example, information necessary for the enhancement layer image decoding unit 204-3 to set inter-layer information to the enhancement layer image decoding unit 204-3. In this case, the decoded image in the frame memory 219 is supplied as a reference picture to the enhancement layer image decoding unit 204-3 as necessary.
- the enhancement layer image decoding unit 204-2 includes a motion compensation unit 232 instead of the motion compensation unit 222. Furthermore, unlike the base layer image decoding unit 203, the enhancement layer image decoding unit 204-2 is provided with an inter-layer information receiving unit 240 and an upsampling unit 241.
- the motion compensation unit 232 performs motion compensation according to the inter-layer information received by the inter-layer information receiving unit 240. That is, the motion compensation unit 232 performs basically the same processing as the motion compensation unit 222 except that the inter-layer information received by the inter-layer information receiving unit 240 is referred to.
- the inter-layer information receiving unit 240 receives the inter-layer information supplied from the lossless decoding unit 212 and supplies the received inter-layer information to the motion compensation unit 232.
- the upsampling unit 241 acquires the reference layer decoded image from the enhancement layer image decoding unit 204-1 as a reference picture, and upsamples the acquired reference picture.
- the upsampling unit 241 stores the upsampled reference picture in the frame memory 219.
- FIG. 27 is a block diagram illustrating a main configuration example of the inter-layer information receiving unit 240 of FIG. 26. The inter-layer information receiving unit 240 of FIG. 27 has a configuration corresponding to the inter-layer information setting unit 140 in FIG.
- the inter-layer information receiving unit 240 includes a reference layer picture type buffer 251 and a skip picture receiving unit 252.
- Information regarding whether or not a picture in the reference layer is a skip picture is supplied from the enhancement layer image decoding unit 204-1 to the reference layer picture type buffer 251.
- the information is further supplied to the skip picture receiving unit 252. Note that, although the reference layer picture type buffer 251 is provided in the example of FIG. 27, on the decoding side, if the information obtained from the bitstream indicates that the picture of the current layer is a skip picture, the picture of the reference layer cannot have been a skip picture on the encoding side. Therefore, the reference layer picture type buffer 251 is not strictly necessary.
- When the picture of the reference layer is not a skip picture, the skip picture receiving unit 252 receives information regarding whether the picture of the current layer is a skip picture from the lossless decoding unit 212 as inter-layer information. Then, the skip picture receiving unit 252 supplies the received information to the motion compensation unit 232.
- When the picture of the reference layer is a skip picture, the skip picture receiving unit 252 does not receive, from the lossless decoding unit 212, information regarding whether the picture of the current layer is a skip picture as inter-layer information. That is, the picture of the current layer is prohibited from becoming a skip picture.
- the motion compensation unit 232 performs motion compensation processing based on the information from the skip picture reception unit 252 regarding whether the picture in the layer is a skip picture.
- step S401 the decoding control unit 202 of the scalable decoding device 200 sets the first layer as a processing target.
- step S402 the decoding control unit 202 determines whether or not the current layer to be processed is a base layer. If it is determined that the current layer is the base layer, the process proceeds to step S403.
- step S403 the base layer image decoding unit 203 performs base layer decoding processing.
- step S403 ends, the process proceeds to step S406.
- If it is determined in step S402 that the current layer is an enhancement layer, the process proceeds to step S404.
- step S404 the decoding control unit 202 determines a reference layer corresponding to the current layer (that is, a reference destination).
- the reference layer may be a base layer.
- step S405 the enhancement layer image decoding unit 204 performs enhancement layer decoding processing.
- step S405 ends, the process proceeds to step S406.
- step S406 the decoding control unit 202 determines whether all layers have been processed. If it is determined that there is an unprocessed layer, the process proceeds to step S407.
- step S407 the decoding control unit 202 sets the next unprocessed layer as a processing target (current layer).
- the process of step S407 ends, the process returns to step S402.
- the processing from step S402 to step S407 is repeatedly executed, and each layer is decoded.
- If it is determined in step S406 that all layers have been processed, the decoding process ends.
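- The layer control loop of steps S401 through S407 can be sketched as follows; the decoder callables and the way a reference layer is chosen are assumptions for illustration.

```python
def decode_all_layers(layers, base_decoder, enh_decoder):
    decoded = {}
    for idx, layer in enumerate(layers):                  # steps S401 and S407
        if layer["is_base"]:                              # step S402
            decoded[idx] = base_decoder(layer["data"])    # step S403
        else:
            ref = layer.get("reference", idx - 1)         # step S404: pick the reference layer
            decoded[idx] = enh_decoder(layer["data"], decoded[ref])   # step S405
    return decoded                                        # step S406: all layers processed

layers = [
    {"is_base": True,  "data": "bl-bitstream"},
    {"is_base": False, "data": "el1-bitstream", "reference": 0},
    {"is_base": False, "data": "el2-bitstream", "reference": 1},
]
out = decode_all_layers(layers,
                        base_decoder=lambda d: f"dec({d})",
                        enh_decoder=lambda d, r: f"dec({d}|ref={r})")
print(out[2])
```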
- step S421 the accumulation buffer 211 of the base layer image decoding unit 203 accumulates the base layer bit stream transmitted from the encoding side.
- step S422 the lossless decoding unit 212 decodes the base layer bitstream (encoded difference image information) supplied from the accumulation buffer 211. That is, the I picture, P picture, and B picture encoded by the lossless encoding unit 116 are decoded. At this time, various information other than the difference image information included in the bit stream such as header information is also decoded.
- the lossless decoding unit 212 supplies information necessary for the enhancement layer image decoding unit 204-1 to set inter-layer information (for example, information on whether a picture in the layer is a skip picture, information on the dependency relationship of the layer, and the like) to the enhancement layer image decoding unit 204-1.
- step S423 the inverse quantization unit 213 inversely quantizes the quantized coefficient obtained by the process in step S422.
- step S424 the inverse orthogonal transform unit 214 performs inverse orthogonal transform on the current block (current TU).
- step S425 the intra prediction unit 221 or the motion compensation unit 222 performs a prediction process to generate a predicted image. That is, the prediction process is performed in the prediction mode that is determined in the lossless decoding unit 212 and applied at the time of encoding. More specifically, for example, when intra prediction is applied at the time of encoding, the intra prediction unit 221 generates a prediction image in the intra prediction mode that is optimized at the time of encoding. For example, when inter prediction is applied at the time of encoding, the motion compensation unit 222 generates a prediction image in an inter prediction mode that is optimized at the time of encoding.
- step S426 the calculation unit 215 adds the predicted image generated in step S425 to the difference image information generated by the inverse orthogonal transform process in step S424. As a result, the original image is decoded.
- step S427 the deblocking filter 216 performs deblocking filter processing on the decoded image obtained in step S426. Thereby, block distortion and the like are removed.
- step S428, the adaptive offset filter 224 performs adaptive offset filter processing that mainly removes ringing on the deblocking filter processing result from the deblocking filter 216.
- step S429 the screen rearrangement buffer 217 rearranges the images from which ringing has been removed in step S428. That is, the order of frames rearranged for encoding by the screen rearrangement buffer 112 is rearranged in the original display order.
- step S430 the D / A conversion unit 218 performs D / A conversion on the image in which the frame order is rearranged in step S429. This image is output to a display (not shown), and the image is displayed.
- step S431 the frame memory 219 stores the image that has been subjected to the adaptive offset filter processing in step S428.
- the image stored in the frame memory 219 is used for the processing in step S425, and is further supplied to the enhancement layer image decoding unit 204-1.
- the base layer decoding process ends, and the process returns to FIG.
- the base layer decoding process is executed in units of pictures, for example. That is, the base layer decoding process is executed for each picture in the current layer. However, each process in the base layer decoding process is performed for each processing unit.
- steps S451 to S454 of the enhancement layer decoding process and steps S456 to S462 are performed in the same manner as the processes of steps S421 to S431 of the base layer decoding process.
- each process of the enhancement layer decoding process is performed on the enhancement layer encoded data by each processing unit of the enhancement layer image decoding unit 204.
- In step S455, the inter-layer information receiving unit 240 of the enhancement layer image decoding unit 204 receives inter-layer information, which is information necessary for processing between the reference layer and the current layer, based on the information about the reference layer. Details of the inter-layer information reception process will be described later with reference to FIG.
- the enhancement layer decoding process ends, and the process returns to FIG.
- the enhancement layer decoding process is executed in units of pictures, for example. That is, the enhancement layer decoding process is executed for each picture in the current layer. However, each process in the enhancement layer decoding process is performed for each processing unit.
- Information regarding whether or not a picture in the reference layer is a skip picture is supplied from the enhancement layer image decoding unit 204-1 to the reference layer picture type buffer 251. The information is further supplied to the skip picture receiving unit 252.
- step S471 the skip picture receiving unit 252 refers to the information from the reference layer picture type buffer 251 and determines whether or not the reference picture is a skip picture. If it is determined in step S471 that the reference picture is a skip picture, step S472 is skipped, the inter-layer information reception process is terminated, and the process returns to FIG.
- If it is determined in step S471 that the reference picture is not a skip picture, the process proceeds to step S472.
- In step S472, the skip picture receiving unit 252 receives, from the lossless decoding unit 212, information regarding whether the picture of the current layer is a skip picture. Then, the skip picture receiving unit 252 supplies the information to the motion compensation unit 232. Thereafter, the inter-layer information reception process is terminated, and the process returns to FIG.
- step S456 of FIG. 30 the motion compensation unit 232 performs motion compensation processing based on the information from the skip picture reception unit 252 regarding whether the picture in the layer is a skip picture.
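- A minimal sketch of the reception rule in steps S471 and S472: the flag indicating whether the current-layer picture is a skip picture is parsed from the bitstream only when the reference-layer picture is not a skip picture; otherwise no flag is present and the current-layer picture cannot be a skip picture. The bit-reader interface is an assumption.

```python
def receive_skip_picture_flag(reference_is_skip: bool, read_flag) -> bool:
    if reference_is_skip:
        # Step S472 is skipped: no flag is transmitted, so the current-layer
        # picture cannot be a skip picture.
        return False
    return bool(read_flag())    # step S472: parse the flag and pass it on

bits = iter([1])
print(receive_skip_picture_flag(reference_is_skip=False, read_flag=lambda: next(bits)))  # True
print(receive_skip_picture_flag(reference_is_skip=True, read_flag=lambda: 1))            # False
```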
- In the scalable decoding device of the present technology, when the picture of the reference layer is a skip picture, the picture of the current layer is prohibited from becoming a skip picture. Therefore, a decrease in the image quality of the current image that is output can be suppressed.
- FIG. 32 is a block diagram illustrating a main configuration example of the inter-layer information receiving unit 240 of FIG. 26. The inter-layer information receiving unit 240 of FIG. 32 has a configuration corresponding to the inter-layer information setting unit 140 of FIG. 22.
- the inter-layer information receiving unit 240 includes a layer dependency buffer 281 and an extended layer receiving unit 282.
- Information related to the dependency relationship in the reference layer is supplied from the enhancement layer image decoding unit 204-1 to the layer dependency relationship buffer 281.
- the information is further supplied to the extended layer receiving unit 282.
- information on the dependency relationship in the reference layer is obtained from the bit stream on the decoding side, and thus the layer dependency buffer 281 is not necessary.
- the extended layer receiving unit 282 receives information on the extended layers from the lossless decoding unit 212 as inter-layer information. First, the extended layer receiving unit 282 receives layer_extension_flag in the VPS from the lossless decoding unit 212.
- When layer_extension_flag is 1, the extended layer receiving unit 282 receives information on the extended layers from the lossless decoding unit 212 in VPS_extension. Then, the extended layer receiving unit 282 supplies the received information regarding the extended layers to the motion compensation unit 232.
- When layer_extension_flag is 0, the extended layer receiving unit 282 does not receive information on the extended layers from the lossless decoding unit 212 in VPS_extension. That is, reception of the information is prohibited.
- the motion compensation unit 232 performs motion compensation processing based on the information related to the extended layers from the extended layer receiving unit 282.
- Information related to the dependency relationship in the reference layer is supplied from the enhancement layer image decoding unit 204-1 to the layer dependency relationship buffer 281. The information is further supplied to the enhancement layer receiving unit 282.
- the extended layer receiving unit 282 receives layer_extension_flag in the VPS from the lossless decoding unit 212 in step S491.
- the extended layer receiving unit 282 determines whether or not layer_extension_flag is 1 in step S492. If it is determined in step S492 that layer_extension_flag is 1, the process proceeds to step S493. In step S493, the extended layer receiving unit 282 receives information on the extended layers from the lossless decoding unit 212 in VPS_extension. Then, the extended layer receiving unit 282 supplies the received information regarding the extended layers to the motion compensation unit 232. Thereafter, the inter-layer information reception process is terminated, and the process returns to FIG.
- On the other hand, if it is determined in step S492 that layer_extension_flag is 0, the process skips step S493. Thereafter, the inter-layer information reception process is terminated, and the process returns to FIG.
- the motion compensation unit 232 performs motion compensation processing based on the information related to the extended layers from the extended layer receiving unit 282 in step S455 of FIG.
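- A minimal sketch of the reception flow in steps S491 through S493: layer_extension_flag is read from the VPS, and only when it is 1 is the extended-layer information read from VPS_extension. The parsed field names mirror the encoder-side sketch above and are assumptions for illustration.

```python
from typing import Optional

def receive_layer_extension(vps: dict) -> Optional[dict]:
    flag = vps.get("layer_extension_flag", 0)            # step S491
    if flag != 1:                                        # step S492
        return None                                      # step S493 is skipped
    ext = vps["vps_extension"]                           # step S493
    return {"layer_extension_factor_minus1": ext["layer_extension_factor_minus1"]}

print(receive_layer_extension({"layer_extension_flag": 0}))
print(receive_layer_extension({"layer_extension_flag": 1,
                               "vps_extension": {"layer_extension_factor_minus1": 2}}))
```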
- In the scalable decoding device of the present technology, it is possible to define 64 or more layers by setting the VPS and VPS_extension, and therefore scalable coding processing including 64 or more layers can be performed.
- processing related to hierarchies can be performed smoothly. That is, it is possible to suppress a decrease in image quality of the current image that is output.
- scalable encoding processing including 64 or more layers can be performed.
- image data is hierarchized into a plurality of layers by scalable coding, but the number of layers is arbitrary. Further, for example, as shown in the example of FIG. 34, some pictures may be hierarchized.
- In the above description, the enhancement layer is processed using the information of the base layer. However, the present technology is not limited to this, and the enhancement layer may be processed using the information of another enhancement layer that has already been processed.
- the layers described above include views in multi-view image encoding / decoding. That is, the present technology can be applied to multi-view image encoding / multi-view image decoding.
- FIG. 35 shows an example of a multi-view image encoding method.
- a multi-viewpoint image includes images of a plurality of viewpoints (views), and an image of a predetermined one viewpoint among the plurality of viewpoints is designated as a base-view image.
- Each viewpoint image other than the base view image is treated as a non-base view image.
- each view image is encoded / decoded.
- the above-described method may be applied to the encoding / decoding of each view. That is, inter-layer (inter-view) information may be set among the plurality of views in such multi-viewpoint encoding / decoding.
- the present technology can be applied to any image encoding device and image decoding device based on a scalable encoding / decoding method.
- the present technology can be applied, for example, to an image encoding device and an image decoding device used when image information compressed by orthogonal transformation such as discrete cosine transformation and motion compensation, as in MPEG, H.26x, and the like, is received via network media such as satellite broadcasting, cable television, the Internet, or mobile phones. The present technology can also be applied to an image encoding device and an image decoding device used when processing is performed on a storage medium such as an optical disk, a magnetic disk, or a flash memory.
- <Third Embodiment> <Computer>
- the series of processes described above can be executed by hardware or can be executed by software.
- a program constituting the software is installed in the computer.
- the computer includes, for example, a computer incorporated in dedicated hardware, and a general-purpose personal computer that can execute various functions by installing various programs.
- FIG. 36 is a block diagram showing an example of the hardware configuration of a computer that executes the above-described series of processing by a program.
- In the computer, a CPU (Central Processing Unit) 801, a ROM (Read Only Memory) 802, and a RAM (Random Access Memory) 803 are connected to each other via a bus 804.
- An input / output interface 810 is also connected to the bus 804.
- An input unit 811, an output unit 812, a storage unit 813, a communication unit 814, and a drive 815 are connected to the input / output interface 810.
- the input unit 811 includes, for example, a keyboard, a mouse, a microphone, a touch panel, an input terminal, and the like.
- the output unit 812 includes, for example, a display, a speaker, an output terminal, and the like.
- the storage unit 813 includes, for example, a hard disk, a RAM disk, a nonvolatile memory, and the like.
- the communication unit 814 includes a network interface, for example.
- the drive 815 drives a removable medium 821 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer configured as described above, the CPU 801 loads, for example, the program stored in the storage unit 813 into the RAM 803 via the input / output interface 810 and the bus 804 and executes it, whereby the above-described series of processes is performed.
- the RAM 803 also appropriately stores data necessary for the CPU 801 to execute various processes.
- the program executed by the computer (CPU 801) can be provided by being recorded on, for example, a removable medium 821 as a package medium or the like.
- the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 813 via the input / output interface 810 by attaching the removable medium 821 to the drive 815.
- the program can be received by the communication unit 814 via a wired or wireless transmission medium and installed in the storage unit 813.
- the program can be installed in the ROM 802 or the storage unit 813 in advance.
- the program executed by the computer may be a program that is processed in time series in the order described in this specification, or may be a program that is processed in parallel or at necessary timing such as when a call is made.
- the steps describing the program recorded on the recording medium include not only processes performed in chronological order according to the described order, but also processes that are executed in parallel or individually, not necessarily in chronological order.
- the system means a set of a plurality of components (devices, modules (parts), etc.), and it does not matter whether all the components are in the same housing. Accordingly, a plurality of devices housed in separate housings and connected via a network, and a single device in which a plurality of modules are housed in one housing, are both systems.
- the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units). Conversely, the configurations described above as a plurality of devices (or processing units) may be combined into a single device (or processing unit). Of course, a configuration other than those described above may be added to the configuration of each device (or each processing unit). Furthermore, as long as the configuration and operation of the entire system are substantially the same, a part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or another processing unit).
- the present technology can take a configuration of cloud computing in which one function is shared by a plurality of devices via a network and is jointly processed.
- each step described in the above flowchart can be executed by one apparatus or can be shared by a plurality of apparatuses.
- the plurality of processes included in the one step can be executed by being shared by a plurality of apparatuses in addition to being executed by one apparatus.
- the image encoding device and the image decoding device according to the above-described embodiments can be applied to various electronic devices such as a transmitter or a receiver in optical broadcasting, satellite broadcasting, cable broadcasting such as cable TV, distribution on the Internet, and distribution to terminals by cellular communication, a recording device that records an image on a medium such as a magnetic disk or a flash memory, or a playback device that reproduces an image from these storage media.
- FIG. 37 shows an example of a schematic configuration of a television apparatus to which the above-described embodiment is applied.
- the television apparatus 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, and a bus 912.
- the tuner 902 extracts a signal of a desired channel from a broadcast signal received via the antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the encoded bit stream obtained by the demodulation to the demultiplexer 903. That is, the tuner 902 has a role as a transmission unit in the television device 900 that receives an encoded stream in which an image is encoded.
- the demultiplexer 903 separates the video stream and audio stream of the viewing target program from the encoded bit stream, and outputs each separated stream to the decoder 904. Further, the demultiplexer 903 extracts auxiliary data such as EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. Note that the demultiplexer 903 may perform descrambling when the encoded bit stream is scrambled.
- the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. In addition, the decoder 904 outputs audio data generated by the decoding process to the audio signal processing unit 907.
- the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display the video.
- the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via a network.
- the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting.
- the video signal processing unit 905 may generate a GUI (Graphical User Interface) image such as a menu, a button, or a cursor, and superimpose the generated image on the output image.
- the display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays a video or an image on the video screen of a display device (for example, a liquid crystal display, a plasma display, or an OELD (Organic ElectroLuminescence Display) (organic EL display)).
- the audio signal processing unit 907 performs reproduction processing such as D / A conversion and amplification on the audio data input from the decoder 904, and outputs audio from the speaker 908.
- the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
- the external interface 909 is an interface for connecting the television device 900 to an external device or a network.
- a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also has a role as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
- the control unit 910 includes a processor such as a CPU, and memories such as a RAM and a ROM.
- the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
- the program stored in the memory is read and executed by the CPU when the television apparatus 900 is activated.
- the CPU executes the program to control the operation of the television device 900 according to an operation signal input from the user interface 911, for example.
- the user interface 911 is connected to the control unit 910.
- the user interface 911 includes, for example, buttons and switches for the user to operate the television device 900, a remote control signal receiving unit, and the like.
- the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
- a bus 912 connects a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, an audio signal processing unit 907, an external interface 909, and a control unit 910 to each other.
- the decoder 904 has the function of the scalable decoding device 200 according to the above-described embodiment. Thereby, when decoding an image on the television device 900, processing related to the hierarchy can be smoothly performed. That is, it is possible to suppress a decrease in image quality of the current image that is output. Alternatively, scalable encoding processing including 64 or more layers can be performed.
- FIG. 38 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied.
- a cellular phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a demultiplexing unit 928, a recording / reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
- the antenna 921 is connected to the communication unit 922.
- the speaker 924 and the microphone 925 are connected to the audio codec 923.
- the operation unit 932 is connected to the control unit 931.
- the bus 933 connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931 to each other.
- the mobile phone 920 has various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode, and performs operations such as transmission and reception of audio signals, transmission and reception of e-mail and image data, image capturing, and data recording in these modes.
- In the voice call mode, the analog audio signal generated by the microphone 925 is supplied to the audio codec 923.
- the audio codec 923 converts the analog audio signal into audio data, A / D converts the audio data, and compresses it. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
- the communication unit 922 encodes and modulates the audio data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. In addition, the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
- the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
- the audio codec 923 decompresses the audio data and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
- In the data communication mode, for example, the control unit 931 generates character data constituting an e-mail in response to an operation by the user via the operation unit 932. In addition, the control unit 931 causes the display unit 930 to display the characters. Further, the control unit 931 generates e-mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated e-mail data to the communication unit 922.
- the communication unit 922 encodes and modulates email data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. In addition, the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
- the communication unit 922 demodulates and decodes the received signal to restore the email data, and outputs the restored email data to the control unit 931.
- the control unit 931 displays the content of the electronic mail on the display unit 930 and stores the electronic mail data in the storage medium of the recording / reproducing unit 929.
- the recording / reproducing unit 929 has an arbitrary readable / writable storage medium.
- the storage medium may be a built-in storage medium such as a RAM or a flash memory, or an externally mounted type such as a hard disk, magnetic disk, magneto-optical disk, optical disk, USB (Universal Serial Bus) memory, or memory card. It may be a storage medium.
- the camera unit 926 images a subject to generate image data, and outputs the generated image data to the image processing unit 927.
- the image processing unit 927 encodes the image data input from the camera unit 926, and stores the encoded stream in the storage medium of the recording / reproducing unit 929.
- the demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922.
- the communication unit 922 encodes and modulates the stream and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
- These transmission signal and reception signal may include an encoded bit stream.
- the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
- the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
- the image processing unit 927 decodes the video stream and generates video data.
- the video data is supplied to the display unit 930, and a series of images is displayed on the display unit 930.
- the audio codec 923 decompresses the audio stream and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
- the image processing unit 927 has the functions of the scalable encoding device 100 and the scalable decoding device 200 according to the above-described embodiment. Thereby, when encoding and decoding an image with the mobile phone 920, processing related to layers can be smoothly performed. That is, it is possible to suppress a decrease in image quality of the current image that is output. Alternatively, scalable encoding processing including 64 or more layers can be performed.
- FIG. 39 shows an example of a schematic configuration of a recording / reproducing apparatus to which the above-described embodiment is applied.
- the recording / reproducing device 940 encodes audio data and video data of a received broadcast program and records the encoded data on a recording medium.
- the recording / reproducing device 940 may encode audio data and video data acquired from another device and record them on a recording medium, for example.
- the recording / reproducing device 940 reproduces data recorded on the recording medium on a monitor and a speaker, for example, in accordance with a user instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
- the recording / reproducing apparatus 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface 950.
- Tuner 941 extracts a signal of a desired channel from a broadcast signal received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs the encoded bit stream obtained by the demodulation to the selector 946. That is, the tuner 941 serves as a transmission unit in the recording / reproducing apparatus 940.
- the external interface 942 is an interface for connecting the recording / reproducing device 940 to an external device or a network.
- the external interface 942 may be, for example, an IEEE1394 interface, a network interface, a USB interface, or a flash memory interface.
- video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 serves as a transmission unit in the recording / reproducing device 940.
- Encoder 943 encodes video data and audio data when video data and audio data input from external interface 942 are not encoded. Then, the encoder 943 outputs the encoded bit stream to the selector 946.
- the HDD 944 records an encoded bit stream in which content data such as video and audio are compressed, various programs, and other data on an internal hard disk. Further, the HDD 944 reads out these data from the hard disk when reproducing video and audio.
- the disk drive 945 records and reads data to and from the mounted recording medium.
- the recording medium mounted on the disk drive 945 may be, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray (registered trademark) disc.
- the selector 946 selects an encoded bit stream input from the tuner 941 or the encoder 943 when recording video and audio, and outputs the selected encoded bit stream to the HDD 944 or the disk drive 945. In addition, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 during video and audio reproduction.
- the decoder 947 decodes the encoded bit stream and generates video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948 and outputs the generated audio data to an external speaker.
- the OSD 948 reproduces the video data input from the decoder 947 and displays the video. Further, the OSD 948 may superimpose a GUI image such as a menu, a button, or a cursor on the video to be displayed.
- the control unit 949 includes a processor such as a CPU and memories such as a RAM and a ROM.
- the memory stores a program executed by the CPU, program data, and the like.
- the program stored in the memory is read and executed by the CPU when the recording / reproducing apparatus 940 is activated, for example.
- the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from the user interface 950, for example, by executing the program.
- the user interface 950 is connected to the control unit 949.
- the user interface 950 includes, for example, buttons and switches for the user to operate the recording / reproducing device 940, a remote control signal receiving unit, and the like.
- the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
- the encoder 943 has the function of the scalable encoding apparatus 100 according to the above-described embodiment.
- the decoder 947 has the function of the scalable decoding device 200 according to the above-described embodiment.
- FIG. 40 illustrates an example of a schematic configuration of an imaging apparatus to which the above-described embodiment is applied.
- the imaging device 960 images a subject to generate an image, encodes the image data, and records it on a recording medium.
- the imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus 972.
- the optical block 961 is connected to the imaging unit 962.
- the imaging unit 962 is connected to the signal processing unit 963.
- the display unit 965 is connected to the image processing unit 964.
- the user interface 971 is connected to the control unit 970.
- the bus 972 connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970 to each other.
- the optical block 961 includes a focus lens and a diaphragm mechanism.
- the optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962.
- the imaging unit 962 includes an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), and converts an optical image formed on the imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
- the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
- the signal processing unit 963 outputs the image data after the camera signal processing to the image processing unit 964.
- the image processing unit 964 encodes the image data input from the signal processing unit 963 to generate encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965. In addition, the image processing unit 964 may display the image by outputting the image data input from the signal processing unit 963 to the display unit 965. Further, the image processing unit 964 may superimpose display data acquired from the OSD 969 on the image output to the display unit 965.
- the OSD 969 generates a GUI image such as a menu, a button, or a cursor, for example, and outputs the generated image to the image processing unit 964.
- the external interface 966 is configured as a USB input / output terminal, for example.
- the external interface 966 connects the imaging device 960 and a printer, for example, when printing an image.
- a drive is connected to the external interface 966 as necessary.
- a removable medium such as a magnetic disk or an optical disk is attached to the drive, and a program read from the removable medium can be installed in the imaging device 960.
- the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
- the recording medium loaded in the media drive 968 may be any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
- a recording medium may instead be fixedly mounted on the media drive 968 to constitute a non-portable storage unit such as an internal hard disk drive or an SSD (Solid State Drive).
- the control unit 970 includes a processor such as a CPU and memories such as a RAM and a ROM.
- the memory stores a program executed by the CPU, program data, and the like.
- the program stored in the memory is read and executed by the CPU when the imaging device 960 is activated, for example.
- the CPU controls the operation of the imaging device 960 according to an operation signal input from the user interface 971 by executing the program.
- the user interface 971 is connected to the control unit 970.
- the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
- the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
- the image processing unit 964 has the functions of the scalable encoding device 100 and the scalable decoding device 200 according to the above-described embodiment. Thereby, when the image is encoded and decoded by the imaging device 960, processing related to the hierarchy can be smoothly performed. That is, it is possible to suppress a decrease in image quality of the current image that is output. Alternatively, scalable encoding processing including 64 or more layers can be performed.
- the distribution server 1002 reads the scalable encoded data stored in the scalable encoded data storage unit 1001 and distributes it via the network 1003 to terminal devices such as the personal computer 1004, the AV device 1005, the tablet device 1006, and the mobile phone 1007.
- the distribution server 1002 selects and transmits encoded data of appropriate quality according to the capability of the terminal device, the communication environment, and the like. Even if the distribution server 1002 transmits unnecessarily high-quality data, a high-quality image is not always obtained at the terminal device, and such data may cause delay or overflow. Moreover, it may unnecessarily occupy the communication band or unnecessarily increase the load on the terminal device. Conversely, if the distribution server 1002 transmits unnecessarily low-quality data, an image of sufficient quality may not be obtained at the terminal device. Therefore, the distribution server 1002 reads the scalable encoded data stored in the scalable encoded data storage unit 1001 and transmits it as encoded data of a quality appropriate to the capability of the terminal device and the communication environment.
- the scalable encoded data storage unit 1001 stores scalable encoded data (BL + EL) 1011 encoded in a scalable manner.
- the scalable encoded data (BL + EL) 1011 is encoded data including both a base layer and an enhancement layer, and is data from which both a base layer image and an enhancement layer image can be obtained by decoding.
- the distribution server 1002 selects an appropriate layer according to the capability of the terminal device to which data is transmitted, the communication environment, and the like, and reads the data of that layer. For example, the distribution server 1002 reads the high-quality scalable encoded data (BL + EL) 1011 from the scalable encoded data storage unit 1001 and transmits it as it is to the personal computer 1004 and the tablet device 1006, which have high processing capability. On the other hand, for the AV device 1005 and the mobile phone 1007, which have low processing capability, the distribution server 1002 extracts the base layer data from the scalable encoded data (BL + EL) 1011 and transmits it as scalable encoded data (BL) 1012, which has the same content as the scalable encoded data (BL + EL) 1011 but lower quality.
- By using scalable encoded data in this way, the amount of data can be easily adjusted, so that the occurrence of delay and overflow can be suppressed, and an unnecessary increase in the load on the terminal device and the communication medium can be suppressed.
- In addition, since the scalable encoded data (BL + EL) 1011 has reduced redundancy between layers, the amount of data can be made smaller than when the encoded data of each layer is handled as individual data. Therefore, the storage area of the scalable encoded data storage unit 1001 can be used more efficiently.
- the hardware performance of the terminal device varies depending on the device.
- the applications executed by terminal devices also vary, and so does the capability of their software.
- the network 1003 serving as the communication medium may be any communication network, wired, wireless, or both, such as the Internet or a LAN (Local Area Network); its data transmission capability varies and may further change due to other communications.
- the distribution server 1002 may communicate with the terminal device that is the data transmission destination before starting data transmission to obtain information on the capability of the terminal device, such as its hardware performance and the performance of the application (software) it executes, and information on the communication environment, such as the available bandwidth of the network 1003. The distribution server 1002 may then select an appropriate layer based on the information obtained here.
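- As a rough illustration of the layer selection just described (an illustrative sketch only; the class names, fields, and thresholds below are assumptions and not part of this disclosure), the following Python code chooses between transmitting the full scalable encoded data (BL+EL) 1011 and base-layer-only data (BL) 1012 based on reported terminal capability and available bandwidth:

```python
from dataclasses import dataclass

@dataclass
class TerminalInfo:
    # Information the distribution server 1002 could obtain from the terminal
    # before transmission starts (hypothetical fields).
    max_decodable_layers: int        # how many layers the terminal can decode
    available_bandwidth_mbps: float  # usable bandwidth of the network 1003

@dataclass
class StoredStream:
    bl_bitrate_mbps: float           # bitrate of the base layer (BL)
    el_bitrate_mbps: float           # additional bitrate of the enhancement layer (EL)

def select_layers(terminal: TerminalInfo, stream: StoredStream) -> str:
    """Decide which portion of the scalable encoded data (BL+EL) 1011 to send."""
    full_rate = stream.bl_bitrate_mbps + stream.el_bitrate_mbps
    if terminal.max_decodable_layers >= 2 and terminal.available_bandwidth_mbps >= full_rate:
        return "BL+EL"               # high-capability terminal: send the data as is
    return "BL"                      # otherwise: extract and send the base layer only

# A high-end personal computer vs. a mobile phone on a slow link.
print(select_layers(TerminalInfo(2, 20.0), StoredStream(4.0, 6.0)))  # -> BL+EL
print(select_layers(TerminalInfo(1, 5.0), StoredStream(4.0, 6.0)))   # -> BL
```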
- the layer extraction may be performed by the terminal device.
- For example, the personal computer 1004 may decode the transmitted scalable encoded data (BL + EL) 1011 and display a base layer image or an enhancement layer image. Further, for example, the personal computer 1004 may extract the base layer scalable encoded data (BL) 1012 from the transmitted scalable encoded data (BL + EL) 1011, store it, transfer it to another device, or decode it and display the base layer image.
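- The extraction performed at the terminal could, for instance, be realized as below. This is a simplified sketch that assumes the scalable encoded data is an HEVC Annex-B byte stream in which all base layer NAL units carry nuh_layer_id equal to 0; a real extractor must handle parameter sets and start-code edge cases more carefully.

```python
def iter_nal_units(stream: bytes):
    """Yield NAL unit payloads (without start codes) from an Annex-B byte stream."""
    i = stream.find(b"\x00\x00\x01")
    while i != -1:
        start = i + 3
        nxt = stream.find(b"\x00\x00\x01", start)
        end = len(stream) if nxt == -1 else nxt
        payload = stream[start:end]
        if payload.endswith(b"\x00"):
            payload = payload[:-1]  # zero byte belonging to a 4-byte start code (heuristic)
        yield payload
        i = nxt

def nuh_layer_id(nal: bytes) -> int:
    """Read nuh_layer_id from the 2-byte HEVC NAL unit header."""
    return ((nal[0] & 0x01) << 5) | (nal[1] >> 3)

def extract_base_layer(bl_el_stream: bytes) -> bytes:
    """Keep only NAL units whose nuh_layer_id is 0, i.e. the base layer."""
    out = bytearray()
    for nal in iter_nal_units(bl_el_stream):
        if len(nal) >= 2 and nuh_layer_id(nal) == 0:
            out += b"\x00\x00\x00\x01" + nal
    return bytes(out)
```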
- Of course, the data transmission system 1000 can be applied to any system as long as it selects an appropriate layer according to the capability of the terminal device, the communication environment, and the like when transmitting scalable encoded data to a terminal device.
- In the data transmission system 1000 as described above, the present technology is applied in the same manner as the application to hierarchical encoding / decoding described above in the first and second embodiments, so that effects similar to those described above in the first and second embodiments can be obtained.
- Also, scalable coding is used, for example, for transmission via a plurality of communication media, as in the example shown in FIG. 42.
- a broadcasting station 1101 transmits base layer scalable encoded data (BL) 1121 by terrestrial broadcasting 1111. Also, the broadcast station 1101 transmits enhancement layer scalable encoded data (EL) 1122 via an arbitrary network 1112 including a wired or wireless communication network or both (for example, packetized transmission).
- The terminal device 1102 has a function of receiving the terrestrial broadcast 1111 broadcast by the broadcast station 1101, and receives the base layer scalable encoded data (BL) 1121 transmitted via the terrestrial broadcast 1111.
- the terminal apparatus 1102 further has a communication function for performing communication via the network 1112, and receives enhancement layer scalable encoded data (EL) 1122 transmitted via the network 1112.
- the terminal device 1102 decodes the base layer scalable encoded data (BL) 1121 acquired via the terrestrial broadcast 1111, for example, in accordance with a user instruction, to obtain a base layer image, stores it, or transmits it to another device.
- Further, the terminal device 1102, for example, in accordance with a user instruction, combines the base layer scalable encoded data (BL) 1121 acquired via the terrestrial broadcast 1111 with the enhancement layer scalable encoded data (EL) 1122 acquired via the network 1112 to obtain scalable encoded data (BL + EL), decodes it to obtain an enhancement layer image, stores it, or transmits it to another device.
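- A minimal sketch of this terminal-side handling is shown below; how the base layer and the enhancement layer are actually recombined into one BL+EL stream depends on the container format, so the concatenation here is only a placeholder for that step, and all names are assumptions.

```python
from typing import Optional

def combine_layers(bl_from_broadcast: bytes, el_from_network: Optional[bytes]) -> bytes:
    """Combine base layer data received via the terrestrial broadcast 1111 with
    enhancement layer data received via the network 1112, when available."""
    if el_from_network is None:
        return bl_from_broadcast          # base-layer-only playback or storage
    # Placeholder: real systems interleave BL and EL access units picture by picture.
    return bl_from_broadcast + el_from_network

def handle_user_instruction(bl: bytes, el: Optional[bytes], want_enhancement: bool) -> bytes:
    """Per user instruction, produce either BL-only data or combined BL+EL data
    to be decoded, stored, or transmitted to another device."""
    return combine_layers(bl, el if want_enhancement else None)
```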
- the scalable encoded data can be transmitted via, for example, different communication media for each layer. Therefore, the load can be distributed, and the occurrence of delay and overflow can be suppressed.
- a communication medium used for transmission may be selected for each layer.
- For example, the base layer scalable encoded data (BL) 1121, which has a relatively large amount of data, may be transmitted via a communication medium with a wide bandwidth, and the enhancement layer scalable encoded data (EL) 1122, which has a relatively small amount of data, may be transmitted via a communication medium with a narrow bandwidth.
- Further, for example, the communication medium for transmitting the enhancement layer scalable encoded data (EL) 1122 may be switched between the network 1112 and the terrestrial broadcast 1111 according to the available bandwidth of the network 1112.
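- The bandwidth-dependent switching mentioned here could look roughly like the following sketch; the hysteresis margin is an assumption added purely to keep the example from oscillating around the threshold and is not part of this disclosure.

```python
def choose_el_medium(current_medium: str,
                     measured_network_mbps: float,
                     el_bitrate_mbps: float,
                     margin: float = 1.2) -> str:
    """Switch the enhancement layer path between the network 1112 and the
    terrestrial broadcast 1111 according to the usable bandwidth of the network 1112."""
    if current_medium == "network_1112" and measured_network_mbps < el_bitrate_mbps:
        return "terrestrial_1111"   # the network can no longer carry the EL
    if current_medium == "terrestrial_1111" and measured_network_mbps >= el_bitrate_mbps * margin:
        return "network_1112"       # switch back once there is some headroom
    return current_medium
```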
- the number of layers is arbitrary, and the number of communication media used for transmission is also arbitrary.
- the number of terminal devices 1102 serving as data distribution destinations is also arbitrary.
- broadcasting from the broadcasting station 1101 has been described as an example, but the usage example is not limited to this.
- the data transmission system 1100 can be applied to any system as long as it is a system that divides scalable encoded data into a plurality of layers and transmits them through a plurality of lines.
- In the data transmission system 1100 as described above, the present technology is applied in the same manner as the application to hierarchical encoding / decoding described above in the first and second embodiments, so that effects similar to those described above in the first and second embodiments can be obtained.
- scalable encoding is used for storing encoded data as in the example shown in FIG. 43, for example.
- the imaging device 1201 performs scalable encoding on image data obtained by imaging the subject 1211, and supplies the result to the scalable encoded data storage device 1202 as scalable encoded data (BL + EL) 1221.
- the scalable encoded data storage device 1202 stores the scalable encoded data (BL + EL) 1221 supplied from the imaging device 1201 at a quality according to the situation. For example, in the normal case, the scalable encoded data storage device 1202 extracts the base layer data from the scalable encoded data (BL + EL) 1221 and stores it as base layer scalable encoded data (BL) 1222 with low quality and a small amount of data. On the other hand, in the attention case, for example, the scalable encoded data storage device 1202 stores the scalable encoded data (BL + EL) 1221 as it is, with high quality and a large amount of data.
- In this way, the scalable encoded data storage device 1202 can store an image with high quality only when necessary, so that an increase in the amount of data can be suppressed while suppressing a reduction in the value of the image due to image quality degradation, and the use efficiency of the storage area can be improved.
- For example, suppose the imaging device 1201 is a surveillance camera. When the monitoring target (for example, an intruder) does not appear in the captured image (the normal case), the content of the captured image is likely to be unimportant, so reduction of the data amount is given priority and the image data (scalable encoded data) is stored with low quality. On the other hand, when the monitoring target appears in the captured image as the subject 1211 (the attention case), the image quality is given priority and the image data (scalable encoded data) is stored with high quality.
- Alternatively, the imaging device 1201 may make the determination and transmit the determination result to the scalable encoded data storage device 1202.
- The criterion for determining whether the situation is the normal case or the attention case is arbitrary, and the image content used as the criterion is arbitrary. Of course, conditions other than the content of the image can also be used as the criterion. For example, the state may be switched according to the volume or waveform of recorded audio, switched at predetermined time intervals, or switched by an external instruction such as a user instruction.
- The number of states is also arbitrary; for example, three or more states such as normal, slight attention, attention, and close attention may be switched. However, the upper limit of the number of states to be switched depends on the number of layers of the scalable encoded data.
- Further, the imaging device 1201 may determine the number of layers used for scalable encoding according to the state. For example, in the normal case, the imaging device 1201 may generate base layer scalable encoded data (BL) 1222 with low quality and a small amount of data and supply it to the scalable encoded data storage device 1202. In the attention case, for example, the imaging device 1201 may generate scalable encoded data (BL + EL) 1221 with high quality and a large amount of data and supply it to the scalable encoded data storage device 1202.
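- The state-dependent behavior described above can be summarized with the sketch below; the state names and the write interface are assumptions used only for illustration.

```python
from enum import Enum

class MonitoringState(Enum):
    NORMAL = 0      # monitoring target not shown: data amount has priority
    ATTENTION = 1   # monitoring target (e.g. an intruder) shown: image quality has priority

def layers_to_keep(state: MonitoringState) -> str:
    """Which layers of the scalable encoded data should be generated or stored."""
    return "BL" if state is MonitoringState.NORMAL else "BL+EL"

def store_captured_data(storage, state: MonitoringState,
                        bl_only: bytes, bl_el: bytes) -> None:
    """Store low-quality base layer data in the normal case and the full
    high-quality BL+EL data in the attention case (storage is any object
    exposing a write(bytes) method)."""
    storage.write(bl_only if layers_to_keep(state) == "BL" else bl_el)
```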
- the surveillance camera has been described as an example.
- the use of the imaging system 1200 is arbitrary and is not limited to the surveillance camera.
- In the imaging system 1200 as described above, the present technology is applied in the same manner as the application to hierarchical encoding / decoding described above in the first and second embodiments, so that effects similar to those described above in the first and second embodiments can be obtained.
- FIG. 44 illustrates an example of a schematic configuration of a video set to which the present technology is applied.
- the video set 1300 shown in FIG. 44 has such a multi-functional configuration; it combines a device having a function related to image encoding and/or decoding with devices having other functions related to that function.
- the video set 1300 includes a module group consisting of a video module 1311, an external memory 1312, a power management module 1313, a front-end module 1314, and the like, and devices having related functions such as a connectivity 1321, a camera 1322, and a sensor 1323.
- A module is a component in which several mutually related functions are combined to provide a coherent set of functions.
- The specific physical configuration is arbitrary. For example, a plurality of processors each having a function, electronic circuit elements such as resistors and capacitors, and other devices may be arranged and integrated on a wiring board or the like. It is also possible to combine a module with another module, a processor, or the like to form a new module.
- the video module 1311 is a combination of configurations having functions related to image processing, and includes an application processor 1331, a video processor 1332, a broadband modem 1333, and an RF module 1334.
- A processor is a configuration in which components having predetermined functions are integrated on a semiconductor chip as an SoC (System On Chip); some are called, for example, system LSIs (Large Scale Integration).
- The configuration having the predetermined function may be a logic circuit (hardware configuration), may be a CPU, a ROM, a RAM, and the like together with a program executed using them (software configuration), or may be a combination of both. For example, a processor may have a logic circuit as well as a CPU, a ROM, a RAM, and the like, with some functions realized by the logic circuit (hardware configuration) and the other functions realized by a program executed on the CPU (software configuration).
- the application processor 1331 in FIG. 44 is a processor that executes an application related to image processing.
- The application executed in the application processor 1331 not only performs arithmetic processing to realize a predetermined function, but can also control configurations inside and outside the video module 1311, such as the video processor 1332, as necessary.
- the video processor 1332 is a processor having a function related to image encoding / decoding (one or both of them).
- the broadband modem 1333 is a processor (or module) that performs processing related to wired or wireless (or both) broadband communication performed via a broadband line such as the Internet or a public telephone line network.
- the broadband modem 1333 digitally modulates data to be transmitted (digital signal) to convert it into an analog signal, or demodulates the received analog signal to convert it into data (digital signal).
- the broadband modem 1333 can digitally modulate and demodulate arbitrary information such as image data processed by the video processor 1332, a stream obtained by encoding the image data, an application program, setting data, and the like.
- the RF module 1334 is a module that performs frequency conversion, modulation / demodulation, amplification, filter processing, and the like on an RF (Radio Frequency) signal transmitted and received via an antenna. For example, the RF module 1334 generates an RF signal by performing frequency conversion and the like on a baseband signal generated by the broadband modem 1333. Further, for example, the RF module 1334 generates a baseband signal by performing frequency conversion and the like on an RF signal received via the front-end module 1314.
- the application processor 1331 and the video processor 1332 may be integrated into a single processor.
- the external memory 1312 is a module having a storage device that is provided outside the video module 1311 and is used by the video module 1311.
- the storage device of the external memory 1312 may be realized by any physical configuration, but is generally used for storing a large amount of data such as image data in units of frames. For example, it is desirable to realize it with a relatively inexpensive and large-capacity semiconductor memory such as DRAM (Dynamic Random Access Memory).
- the power management module 1313 manages and controls power supply to the video module 1311 (each component in the video module 1311).
- the front end module 1314 is a module that provides the RF module 1334 with a front end function (a circuit on a transmitting / receiving end on the antenna side). As illustrated in FIG. 44, the front end module 1314 includes, for example, an antenna unit 1351, a filter 1352, and an amplification unit 1353.
- Antenna unit 1351 has an antenna for transmitting and receiving a radio signal and its peripheral configuration.
- the antenna unit 1351 transmits the signal supplied from the amplification unit 1353 as a radio signal, and supplies the received radio signal to the filter 1352 as an electric signal (RF signal).
- the filter 1352 performs a filtering process on the RF signal received via the antenna unit 1351 and supplies the processed RF signal to the RF module 1334.
- the amplifying unit 1353 amplifies the RF signal supplied from the RF module 1334 and supplies the amplified RF signal to the antenna unit 1351.
- Connectivity 1321 is a module having a function related to connection with the outside.
- the physical configuration of the connectivity 1321 is arbitrary.
- For example, the connectivity 1321 may have a configuration with a communication function compliant with a communication standard other than those supported by the broadband modem 1333, external input / output terminals, and the like.
- For example, the connectivity 1321 may have a module with a communication function compliant with wireless communication standards such as Bluetooth (registered trademark), IEEE 802.11 (for example, Wi-Fi (Wireless Fidelity, registered trademark)), NFC (Near Field Communication), or IrDA (InfraRed Data Association), and an antenna or the like for transmitting and receiving signals compliant with those standards.
- Further, for example, the connectivity 1321 may have a module with a communication function compliant with a wired communication standard such as USB (Universal Serial Bus) or HDMI (registered trademark) (High-Definition Multimedia Interface), and a terminal compliant with that standard.
- the connectivity 1321 may have other data (signal) transmission functions such as analog input / output terminals.
- the connectivity 1321 may include a data (signal) transmission destination device.
- Note that the connectivity 1321 may include a drive that reads and writes data from and to a recording medium such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory (including not only drives for removable media but also hard disks, SSDs (Solid State Drive), NAS (Network Attached Storage), and the like).
- the connectivity 1321 may include an image or audio output device (a monitor, a speaker, or the like).
- the camera 1322 is a module having a function of imaging a subject and obtaining image data of the subject.
- Image data obtained by imaging by the camera 1322 is supplied to, for example, a video processor 1332 and encoded.
- the sensor 1323 includes, for example, a voice sensor, an ultrasonic sensor, an optical sensor, an illuminance sensor, an infrared sensor, an image sensor, a rotation sensor, an angle sensor, an angular velocity sensor, a velocity sensor, an acceleration sensor, an inclination sensor, a magnetic identification sensor, an impact sensor, It is a module having an arbitrary sensor function such as a temperature sensor.
- the data detected by the sensor 1323 is supplied to the application processor 1331 and used by an application or the like.
- the configuration described above as a module may be realized as a processor, or conversely, the configuration described as a processor may be realized as a module.
- the present technology can be applied to the video processor 1332 as described later. Therefore, the video set 1300 can be implemented as a set to which the present technology is applied.
- FIG. 45 illustrates an example of a schematic configuration of a video processor 1332 (FIG. 44) to which the present technology is applied.
- the video processor 1332 has a function of receiving a video signal and an audio signal and encoding them by a predetermined method, and a function of decoding encoded video data and audio data and reproducing and outputting a video signal and an audio signal.
- the video processor 1332 includes a video input processing unit 1401, a first image enlargement / reduction unit 1402, a second image enlargement / reduction unit 1403, a video output processing unit 1404, a frame memory 1405, and a memory control unit 1406.
- the video processor 1332 includes an encoding / decoding engine 1407, video ES (Elementary Stream) buffers 1408A and 1408B, and audio ES buffers 1409A and 1409B.
- the video processor 1332 includes an audio encoder 1410, an audio decoder 1411, a multiplexing unit (MUX (Multiplexer)) 1412, a demultiplexing unit (DMUX (Demultiplexer)) 1413, and a stream buffer 1414.
- the video input processing unit 1401 acquires a video signal input from, for example, the connectivity 1321 (FIG. 44) and converts it into digital image data.
- the first image enlargement / reduction unit 1402 performs format conversion, image enlargement / reduction processing, and the like on the image data.
- the second image enlargement / reduction unit 1403 performs image enlargement / reduction processing on the image data in accordance with the format of the output destination via the video output processing unit 1404, or performs format conversion and image enlargement / reduction processing similar to those of the first image enlargement / reduction unit 1402.
- the video output processing unit 1404 performs format conversion, conversion to an analog signal, and the like on the image data, and outputs the reproduced video signal to, for example, the connectivity 1321 (FIG. 44).
- the frame memory 1405 is a memory for image data shared by the video input processing unit 1401, the first image enlargement / reduction unit 1402, the second image enlargement / reduction unit 1403, the video output processing unit 1404, and the encoding / decoding engine 1407.
- the frame memory 1405 is realized as a semiconductor memory such as a DRAM, for example.
- the memory control unit 1406 receives the synchronization signal from the encoding / decoding engine 1407, and controls the writing / reading access to the frame memory 1405 according to the access schedule to the frame memory 1405 written in the access management table 1406A.
- the access management table 1406A is updated by the memory control unit 1406 in accordance with processing executed by the encoding / decoding engine 1407, the first image enlargement / reduction unit 1402, the second image enlargement / reduction unit 1403, and the like.
- the encoding / decoding engine 1407 performs encoding processing of image data and decoding processing of a video stream that is data obtained by encoding the image data. For example, the encoding / decoding engine 1407 encodes the image data read from the frame memory 1405 and sequentially writes the data as a video stream in the video ES buffer 1408A. Further, for example, the video stream is sequentially read from the video ES buffer 1408B, decoded, and sequentially written in the frame memory 1405 as image data.
- the encoding / decoding engine 1407 uses the frame memory 1405 as a work area in the encoding and decoding. Also, the encoding / decoding engine 1407 outputs a synchronization signal to the memory control unit 1406, for example, at a timing at which processing for each macroblock is started.
- the video ES buffer 1408A buffers the video stream generated by the encoding / decoding engine 1407 and supplies the buffered video stream to the multiplexing unit (MUX) 1412.
- the video ES buffer 1408B buffers the video stream supplied from the demultiplexer (DMUX) 1413 and supplies the buffered video stream to the encoding / decoding engine 1407.
- the audio ES buffer 1409A buffers the audio stream generated by the audio encoder 1410 and supplies the buffered audio stream to the multiplexing unit (MUX) 1412.
- the audio ES buffer 1409B buffers the audio stream supplied from the demultiplexer (DMUX) 1413 and supplies the buffered audio stream to the audio decoder 1411.
- the audio encoder 1410 converts an audio signal input from, for example, the connectivity 1321 (FIG. 44) into a digital signal, and encodes it by a predetermined method such as the MPEG audio method or the AC3 (Audio Code number 3) method.
- the audio encoder 1410 sequentially writes an audio stream, which is data obtained by encoding an audio signal, in the audio ES buffer 1409A.
- the audio decoder 1411 decodes the audio stream supplied from the audio ES buffer 1409B, performs conversion to an analog signal, for example, and supplies the reproduced audio signal to, for example, the connectivity 1321 (FIG. 44).
- Multiplexer (MUX) 1412 multiplexes the video stream and the audio stream.
- the multiplexing method (that is, the format of the bit stream generated by multiplexing) is arbitrary.
- the multiplexing unit (MUX) 1412 can also add predetermined header information or the like to the bit stream. That is, the multiplexing unit (MUX) 1412 can convert the stream format by multiplexing. For example, the multiplexing unit (MUX) 1412 multiplexes the video stream and the audio stream to convert it into a transport stream that is a bit stream in a transfer format. Further, for example, the multiplexing unit (MUX) 1412 multiplexes the video stream and the audio stream, thereby converting the data into file format data (file data) for recording.
- the demultiplexing unit (DMUX) 1413 demultiplexes a bit stream in which a video stream and an audio stream are multiplexed, by a method corresponding to the multiplexing by the multiplexing unit (MUX) 1412. That is, the demultiplexing unit (DMUX) 1413 extracts the video stream and the audio stream from the bit stream read from the stream buffer 1414 (separates the video stream and the audio stream). That is, the demultiplexing unit (DMUX) 1413 can convert the stream format by demultiplexing (the inverse of the conversion by the multiplexing unit (MUX) 1412). For example, the demultiplexing unit (DMUX) 1413 can acquire a transport stream supplied from, for example, the connectivity 1321 or the broadband modem 1333 (both in FIG. 44) via the stream buffer 1414 and demultiplex it, thereby converting it into a video stream and an audio stream.
- the demultiplexer (DMUX) 1413 obtains the file data read from various recording media by the connectivity 1321 (FIG. 44) via the stream buffer 1414 and demultiplexes the file data, for example. It can be converted into a video stream and an audio stream.
- the stream buffer 1414 buffers the bit stream.
- For example, the stream buffer 1414 buffers a transport stream supplied from the multiplexing unit (MUX) 1412 and supplies it to, for example, the connectivity 1321 or the broadband modem 1333 (both in FIG. 44) at a predetermined timing or based on an external request or the like.
- Further, for example, the stream buffer 1414 buffers file data supplied from the multiplexing unit (MUX) 1412 and supplies it to, for example, the connectivity 1321 (FIG. 44) at a predetermined timing or based on an external request or the like, to be recorded on various recording media.
- Further, the stream buffer 1414 buffers a transport stream acquired via, for example, the connectivity 1321 or the broadband modem 1333 (both in FIG. 44) and supplies it to the demultiplexing unit (DMUX) 1413 at a predetermined timing or based on an external request or the like.
- Further, the stream buffer 1414 buffers file data read from various recording media in, for example, the connectivity 1321 (FIG. 44) and supplies it to the demultiplexing unit (DMUX) 1413 at a predetermined timing or based on an external request or the like.
- For example, a video signal input to the video processor 1332 from the connectivity 1321 (FIG. 44) or the like is converted into digital image data of a predetermined format such as the 4:2:2 Y/Cb/Cr format by the video input processing unit 1401 and is sequentially written into the frame memory 1405.
- This digital image data is read by the first image enlargement / reduction unit 1402 or the second image enlargement / reduction unit 1403, subjected to format conversion to a predetermined format such as the 4:2:0 Y/Cb/Cr format and to enlargement / reduction processing, and written into the frame memory 1405 again.
- This image data is encoded by the encoding / decoding engine 1407 and written as a video stream in the video ES buffer 1408A.
- an audio signal input from the connectivity 1321 (FIG. 44) or the like to the video processor 1332 is encoded by the audio encoder 1410 and written as an audio stream in the audio ES buffer 1409A.
- the video stream of the video ES buffer 1408A and the audio stream of the audio ES buffer 1409A are read and multiplexed by the multiplexing unit (MUX) 1412 and converted into a transport stream or file data.
- the transport stream generated by the multiplexing unit (MUX) 1412 is buffered in the stream buffer 1414, and then output to an external network via, for example, the connectivity 1321 or the broadband modem 1333 (both in FIG. 44).
- the file data generated by the multiplexing unit (MUX) 1412 is buffered in the stream buffer 1414, and then output to, for example, the connectivity 1321 (FIG. 44) and recorded on various recording media.
- A transport stream input from an external network to the video processor 1332 via the connectivity 1321 or the broadband modem 1333 (both in FIG. 44) is buffered in the stream buffer 1414 and then demultiplexed by the demultiplexing unit (DMUX) 1413. Also, file data read from various recording media in, for example, the connectivity 1321 (FIG. 44) and input to the video processor 1332 is buffered in the stream buffer 1414 and then demultiplexed by the demultiplexing unit (DMUX) 1413. That is, a transport stream or file data input to the video processor 1332 is separated into a video stream and an audio stream by the demultiplexing unit (DMUX) 1413.
- the audio stream is supplied to the audio decoder 1411 via the audio ES buffer 1409B and decoded to reproduce the audio signal.
- the video stream is written to the video ES buffer 1408B, and then sequentially read and decoded by the encoding / decoding engine 1407, and written to the frame memory 1405.
- the decoded image data is enlarged / reduced by the second image enlargement / reduction unit 1403 and written to the frame memory 1405.
- The decoded image data is read out by the video output processing unit 1404, subjected to format conversion to a predetermined format such as the 4:2:2 Y/Cb/Cr format, further converted into an analog signal, and the video signal is reproduced and output.
- When the present technology is applied to the video processor 1332 configured as described above, the present technology according to each embodiment described above may be applied to the encoding / decoding engine 1407. That is, for example, the encoding / decoding engine 1407 may have the functions of the scalable encoding device 100 (FIG. 9) according to the first embodiment and the scalable decoding device 200 (FIG. 24) according to the second embodiment. In this way, the video processor 1332 can obtain the same effects as those described above with reference to FIGS. 1 to 33.
- In the encoding / decoding engine 1407, the present technology (that is, the functions of the scalable encoding device and the scalable decoding device according to each embodiment described above) may be realized by hardware such as a logic circuit, by software such as an embedded program, or by both of them.
- FIG. 46 illustrates another example of a schematic configuration of the video processor 1332 (FIG. 44) to which the present technology is applied.
- the video processor 1332 has a function of encoding / decoding video data by a predetermined method.
- the video processor 1332 includes a control unit 1511, a display interface 1512, a display engine 1513, an image processing engine 1514, and an internal memory 1515.
- the video processor 1332 includes a codec engine 1516, a memory interface 1517, a multiplexing / demultiplexing unit (MUX DMUX) 1518, a network interface 1519, and a video interface 1520.
- the control unit 1511 controls the operation of each processing unit in the video processor 1332, such as the display interface 1512, the display engine 1513, the image processing engine 1514, and the codec engine 1516.
- the control unit 1511 includes, for example, a main CPU 1531, a sub CPU 1532, and a system controller 1533.
- the main CPU 1531 executes a program and the like for controlling the operation of each processing unit in the video processor 1332.
- the main CPU 1531 generates a control signal according to the program and supplies it to each processing unit (that is, controls the operation of each processing unit).
- the sub CPU 1532 plays an auxiliary role of the main CPU 1531.
- the sub CPU 1532 executes a child process such as a program executed by the main CPU 1531, a subroutine, or the like.
- the system controller 1533 controls operations of the main CPU 1531 and the sub CPU 1532 such as designating a program to be executed by the main CPU 1531 and the sub CPU 1532.
- the display interface 1512 outputs the image data to, for example, the connectivity 1321 (FIG. 44) under the control of the control unit 1511.
- the display interface 1512 converts image data in digital form into an analog signal and outputs it to, for example, the monitor device of the connectivity 1321 (FIG. 44) as a reproduced video signal, or outputs the image data in digital form as it is.
- Under the control of the control unit 1511, the display engine 1513 performs various conversion processes such as format conversion, size conversion, and color gamut conversion on the image data so that it matches the hardware specifications of the monitor device or the like on which the image is to be displayed.
- the image processing engine 1514 performs predetermined image processing, such as filter processing for improving image quality, on the image data under the control of the control unit 1511.
- the internal memory 1515 is a memory provided inside the video processor 1332 that is shared by the display engine 1513, the image processing engine 1514, and the codec engine 1516.
- the internal memory 1515 is used, for example, for data exchange performed between the display engine 1513, the image processing engine 1514, and the codec engine 1516.
- the internal memory 1515 stores data supplied from the display engine 1513, the image processing engine 1514, or the codec engine 1516, and supplies the data to the image processing engine 1514 or the codec engine 1516 as needed (for example, upon request).
- the internal memory 1515 may be realized by any storage device, but since it is generally often used for storing small amounts of data such as image data in units of blocks and parameters, it is desirable to realize it with a semiconductor memory that has a relatively small capacity (compared with, for example, the external memory 1312) but a high response speed, such as an SRAM (Static Random Access Memory).
- the codec engine 1516 performs processing related to encoding and decoding of image data.
- the encoding / decoding scheme supported by the codec engine 1516 is arbitrary, and the number thereof may be one or plural.
- the codec engine 1516 may be provided with codec functions of a plurality of encoding / decoding schemes, and may be configured to perform encoding of image data or decoding of encoded data using one selected from them.
- the codec engine 1516 includes, for example, MPEG-2 Video 1541, AVC/H.264 1542, HEVC/H.265 1543, HEVC/H.265 (Scalable) 1544, HEVC/H.265 (Multi-view) 1545, and MPEG-DASH 1551 as functional blocks for processing related to the codec.
- MPEG-2 Video 1541 is a functional block that encodes and decodes image data in the MPEG-2 format.
- AVC/H.264 1542 is a functional block that encodes and decodes image data in the AVC format.
- HEVC/H.265 1543 is a functional block that encodes and decodes image data in the HEVC format.
- HEVC / H.265 (Scalable) 1544 is a functional block that performs scalable encoding and scalable decoding of image data using the HEVC method.
- HEVC / H.265 (Multi-view) 1545 is a functional block that multi-view encodes or multi-view decodes image data using the HEVC method.
- MPEG-DASH 1551 is a functional block that transmits and receives image data in the MPEG-DASH (MPEG-Dynamic Adaptive Streaming over HTTP) method.
- MPEG-DASH is a technology for streaming video using HTTP (HyperText Transfer Protocol); one of its features is that appropriate encoded data is selected and transmitted in units of segments from among a plurality of pieces of prepared encoded data having, for example, different resolutions.
- MPEG-DASH 1551 generates a stream compliant with the standard, controls transmission of the stream, and the like.
- For the encoding and decoding of image data, the above-described MPEG-2 Video 1541 through HEVC/H.265 (Multi-view) 1545 are used.
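- The segment-by-segment selection that characterizes MPEG-DASH can be illustrated as follows; this is not the MPEG-DASH 1551 functional block itself, and the representation names and bitrates below are made-up values.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Representation:
    name: str
    bitrate_mbps: float   # advertised average bitrate of this prepared encoding

def pick_representation(reps: List[Representation], throughput_mbps: float) -> Representation:
    """For each segment, choose the highest-bitrate representation that fits the
    currently measured throughput, falling back to the lowest-bitrate one."""
    affordable = [r for r in reps if r.bitrate_mbps <= throughput_mbps]
    candidates = affordable or [min(reps, key=lambda r: r.bitrate_mbps)]
    return max(candidates, key=lambda r: r.bitrate_mbps)

reps = [Representation("360p", 1.0), Representation("720p", 3.0), Representation("1080p", 6.0)]
print(pick_representation(reps, 4.2).name)   # -> 720p
print(pick_representation(reps, 0.5).name)   # -> 360p
```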
- the memory interface 1517 is an interface for the external memory 1312. Data supplied from the image processing engine 1514 or the codec engine 1516 is supplied to the external memory 1312 via the memory interface 1517. The data read from the external memory 1312 is supplied to the video processor 1332 (the image processing engine 1514 or the codec engine 1516) via the memory interface 1517.
- a multiplexing / demultiplexing unit (MUX DMUX) 1518 multiplexes and demultiplexes various data related to images such as a bit stream of encoded data, image data, and a video signal.
- This multiplexing / demultiplexing method is arbitrary.
- the multiplexing / demultiplexing unit (MUX DMUX) 1518 can not only combine a plurality of data into one but also add predetermined header information or the like to the data.
- the multiplexing / demultiplexing unit (MUX DMUX) 1518 not only divides one data into a plurality of data but also adds predetermined header information or the like to each divided data. it can.
- the multiplexing / demultiplexing unit (MUX DMUX) 1518 can convert the data format by multiplexing / demultiplexing.
- For example, by multiplexing a bit stream, the multiplexing / demultiplexing unit (MUX DMUX) 1518 can convert it into a transport stream, which is a bit stream in a transfer format, or into data in a file format for recording (file data).
- the network interface 1519 is an interface for a broadband modem 1333, connectivity 1321 (both in FIG. 44), and the like.
- the video interface 1520 is an interface for, for example, the connectivity 1321 and the camera 1322 (both are FIG. 44).
- For example, when a transport stream is received from an external network via, for example, the connectivity 1321 or the broadband modem 1333 (both in FIG. 44), the transport stream is supplied to the multiplexing / demultiplexing unit (MUX DMUX) 1518 via the network interface 1519, demultiplexed, and decoded by the codec engine 1516.
- Image data obtained by the decoding of the codec engine 1516 is subjected to predetermined image processing by the image processing engine 1514, subjected to predetermined conversion by the display engine 1513, supplied to, for example, the connectivity 1321 (FIG. 44) via the display interface 1512, and the image is displayed on a monitor.
- Further, for example, image data obtained by the decoding of the codec engine 1516 is re-encoded by the codec engine 1516, multiplexed by the multiplexing / demultiplexing unit (MUX DMUX) 1518, converted into file data, output to, for example, the connectivity 1321 (FIG. 44) via the video interface 1520, and recorded on various recording media.
- Furthermore, for example, file data of encoded image data read from a recording medium (not shown) by the connectivity 1321 (FIG. 44) is supplied to the multiplexing / demultiplexing unit (MUX DMUX) 1518 via the video interface 1520, demultiplexed, and decoded by the codec engine 1516. Image data obtained by the decoding of the codec engine 1516 is subjected to predetermined image processing by the image processing engine 1514, subjected to predetermined conversion by the display engine 1513, supplied to, for example, the connectivity 1321 (FIG. 44) via the display interface 1512, and the image is displayed on a monitor.
- Further, for example, image data obtained by the decoding of the codec engine 1516 is re-encoded by the codec engine 1516, multiplexed by the multiplexing / demultiplexing unit (MUX DMUX) 1518, converted into a transport stream, supplied to, for example, the connectivity 1321 or the broadband modem 1333 (both in FIG. 44) via the network interface 1519, and transmitted to another device (not shown).
- image data and other data are exchanged between the processing units in the video processor 1332 using, for example, the internal memory 1515 and the external memory 1312.
- the power management module 1313 controls power supply to the control unit 1511, for example.
- When the present technology is applied to the video processor 1332 configured as described above, the present technology according to each of the above-described embodiments may be applied to the codec engine 1516. That is, for example, the codec engine 1516 may have functional blocks that realize the scalable encoding device 100 (FIG. 9) according to the first embodiment and the scalable decoding device 200 (FIG. 24) according to the second embodiment. In this way, the video processor 1332 can obtain the same effects as those described above with reference to FIGS. 1 to 33.
- In the codec engine 1516, the present technology (that is, the functions of the image encoding device and the image decoding device according to each of the above-described embodiments) may be realized by hardware such as a logic circuit, by software such as an embedded program, or by both of them.
- the configuration of the video processor 1332 is arbitrary and may be other than the two examples described above.
- the video processor 1332 may be configured as one semiconductor chip, but may be configured as a plurality of semiconductor chips. For example, a three-dimensional stacked LSI in which a plurality of semiconductors are stacked may be used. Further, it may be realized by a plurality of LSIs.
- the video set 1300 can be incorporated into various devices that process image data.
- the video set 1300 can be incorporated in the television device 900 (FIG. 37), the mobile phone 920 (FIG. 38), the recording / reproducing device 940 (FIG. 39), the imaging device 960 (FIG. 40), or the like.
- the apparatus can thereby obtain the same effects as those described above with reference to FIGS. 1 to 33.
- the video set 1300 can also be incorporated into, for example, terminal devices such as the personal computer 1004, the AV device 1005, the tablet device 1006, and the mobile phone 1007 in the data transmission system 1000 of FIG. 41, the broadcasting station 1101 and the terminal device 1102 in the data transmission system 1100 of FIG. 42, the imaging device 1201 and the scalable encoded data storage device 1202 in the imaging system 1200 of FIG. 43, and the like.
- the apparatus can thereby obtain the same effects as those described above with reference to FIGS. 1 to 33.
- each configuration of the video set 1300 described above can be implemented as a configuration to which the present technology is applied as long as it includes the video processor 1332.
- the video processor 1332 can be implemented as a video processor to which the present technology is applied.
- For example, the processor indicated by the dotted line 1341 or the video module 1311 can be implemented as a processor or a module to which the present technology is applied.
- the video module 1311, the external memory 1312, the power management module 1313, and the front end module 1314 can be combined and implemented as a video unit 1361 to which the present technology is applied. In any case, the same effects as those described above with reference to FIGS. 1 to 33 can be obtained.
- any configuration including the video processor 1332 can be incorporated into various devices that process image data, as in the case of the video set 1300.
- For example, the video processor 1332, the processor indicated by the dotted line 1341, the video module 1311, or the video unit 1361 can be incorporated into the television device 900 (FIG. 37), the mobile phone 920 (FIG. 38), the recording / playback device 940 (FIG. 39), the imaging device 960 (FIG. 40), terminal devices such as the personal computer 1004, the AV device 1005, the tablet device 1006, and the mobile phone 1007 in the data transmission system 1000 of FIG. 41, the broadcasting station 1101 and the terminal device 1102 in the data transmission system 1100 of FIG. 42, the imaging device 1201 and the scalable encoded data storage device 1202 in the imaging system 1200 of FIG. 43, and the like. Then, by incorporating any configuration to which the present technology is applied, the apparatus can obtain the same effects as those described above with reference to FIGS. 1 to 33, as in the case of the video set 1300.
- Note that the present technology can also be applied to, for example, an HTTP streaming content reproduction system such as MPEG-DASH described later, in which an appropriate piece of encoded data is selected and used in units of segments from among a plurality of pieces of prepared encoded data having different resolutions and the like, and to a wireless communication system compliant with the Wi-Fi standard.
- the method for transmitting such information is not limited to such an example.
- these pieces of information may be transmitted or recorded as separate data associated with the encoded bitstream without being multiplexed into the encoded bitstream.
- The term “associate” means that an image (which may be a part of an image, such as a slice or a block) included in the bit stream and information corresponding to that image can be linked to each other at the time of decoding. That is, the information may be transmitted on a transmission path different from that of the image (or bit stream).
- The information may also be recorded on a recording medium different from that of the image (or bit stream) (or in a different recording area of the same recording medium). Furthermore, the information and the image (or bit stream) may be associated with each other in arbitrary units, such as a plurality of frames, one frame, or a part of a frame.
- The present technology can also take the following configurations.
- (1) An image encoding device including: an acquisition unit that, when an image having three or more layers is encoded, acquires inter-layer information indicating whether or not a reference layer image referred to by a target image to be encoded is in a skip mode; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and prohibits the target image from being encoded in the skip mode when the image of the reference layer is in the skip mode.
- the acquisition unit acquires inter-layer information indicating whether or not a reference layer picture referred to by a target picture to be encoded is a skip picture;
- the image encoding device (1), wherein the inter-layer information setting unit prohibits the target picture from being encoded as a skip picture when the picture in the reference layer is a skip picture.
- the acquisition unit acquires inter-layer information indicating whether or not a slice of a reference layer referred to by a target slice to be encoded is a skip slice,
- the inter-layer information setting unit when a slice of the reference layer is a skip slice, prohibits encoding processing of the target slice as a skip slice.
- the image encoding device (1).
- the acquisition unit acquires inter-layer information indicating whether or not a reference layer tile referred to by a target tile to be encoded is a skip tile
- the inter-layer information setting unit when the tile of the reference layer is a skip tile, prohibits encoding processing of the target tile as a skip tile.
- the image encoding device according to (1).
- (5) The image encoding device according to any one of (1) to (4), wherein the inter-layer information setting unit prohibits the target image from being encoded in skip mode when the image of the reference layer is in skip mode only in a case where the relation between the reference layer and the target layer to be encoded is spatial scalability.
- (6) The image encoding device according to any one of (1) to (5), wherein, in a case where the relation between the reference layer and the target layer to be encoded is spatial scalability but the relation between the reference layer and a layer that the reference layer itself refers to is SNR scalability, the inter-layer information setting unit permits the target image to be encoded in skip mode even when the image of the reference layer is in skip mode.
- (7) An image encoding method in which an image encoding device, when encoding an image having three or more layers, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be encoded is in skip mode, and, referring to the acquired inter-layer information, prohibits the target image from being encoded in skip mode when the image of the reference layer is in skip mode.
- (8) An image decoding device including: an acquisition unit that, when decoding a bitstream in which an image having three or more layers is encoded, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be decoded is in skip mode; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and, when the image of the reference layer is in skip mode, prohibits the target image from being decoded in skip mode.
- (9) The image decoding device according to (8), wherein the acquisition unit acquires inter-layer information indicating whether a picture of the reference layer referred to by a target picture to be decoded is a skip picture, and the inter-layer information setting unit prohibits the target picture from being decoded as a skip picture when the picture of the reference layer is a skip picture.
- (10) The image decoding device according to (8), wherein the acquisition unit acquires inter-layer information indicating whether a slice of the reference layer referred to by a target slice to be decoded is a skip slice, and the inter-layer information setting unit prohibits the target slice from being decoded as a skip slice when the slice of the reference layer is a skip slice.
- (11) The image decoding device according to (8), wherein the acquisition unit acquires inter-layer information indicating whether a tile of the reference layer referred to by a target tile to be decoded is a skip tile, and the inter-layer information setting unit prohibits the target tile from being decoded as a skip tile when the tile of the reference layer is a skip tile.
- (12) The image decoding device according to any one of (8) to (11), wherein the inter-layer information setting unit prohibits the target image from being decoded in skip mode when the image of the reference layer is in skip mode only in a case where the relation between the reference layer and the target layer to be decoded is spatial scalability.
- (13) The image decoding device according to any one of (8) to (11), wherein, in a case where the relation between the reference layer and the target layer to be decoded is spatial scalability but the relation between the reference layer and a layer that the reference layer itself refers to is SNR scalability, the inter-layer information setting unit permits the target image to be decoded in skip mode even when the image of the reference layer is in skip mode.
- (14) An image decoding method in which an image decoding device, when decoding a bitstream in which an image having three or more layers is encoded, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be decoded is in skip mode, and, referring to the acquired inter-layer information, prohibits the target image from being decoded in skip mode when the image of the reference layer is in skip mode.
- (15) An image encoding device including: an acquisition unit that, when encoding an image having 64 or more layers, acquires inter-layer information indicating the number of layers of the image; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and sets information on the extended number of layers in the VPS_extension.
- (16) The image encoding device according to (15), wherein the inter-layer information setting unit sets a syntax element layer_extension_factor_minus1 in the VPS_extension, and (vps_max_layers_minus1 + 1) * (layer_extension_factor_minus1 + 1) is the number of layers in the image.
- (20) An image decoding device including: a receiving unit that receives, from a bitstream in which an image having 64 or more layers is encoded, information on the extended number of layers set in the VPS_extension; and a decoding unit that performs decoding processing with reference to the information on the extended number of layers received by the receiving unit.
- (21) The image decoding device according to (20), wherein the receiving unit receives a syntax element layer_extension_factor_minus1 in the VPS_extension, and (vps_max_layers_minus1 + 1) * (layer_extension_factor_minus1 + 1) is the number of layers in the image.
- 100 scalable encoding device, 101 common information generation unit, 102 encoding control unit, 103 base layer image encoding unit, 104, 104-1, 104-2 enhancement layer image encoding unit, 116 lossless encoding unit, 125 motion prediction/compensation unit, 135 motion prediction/compensation unit, 140 inter-layer information setting unit, 151 reference layer picture type buffer, 152 skip picture setting unit, 181 layer dependency buffer, 182 extension layer setting unit, 200 scalable decoding device, 201 common information acquisition unit, 202 decoding control unit, 203 base layer image decoding unit, 204, 204-1, 204-2 enhancement layer image decoding unit, 212 lossless decoding unit, 222 motion compensation unit, 232 motion compensation unit, 240 inter-layer information receiving unit, 251 reference layer picture type buffer, 252 skip picture receiving unit, 281 layer dependency buffer, 282 extension layer receiving unit
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
0. Overview
1. First embodiment (scalable encoding device)
2. Second embodiment (scalable decoding device)
3. Others
4. Third embodiment (computer)
5. Application examples
6. Application examples of scalable coding
7. Fourth embodiment (set / unit / module / processor)
<Coding scheme>
In the following, the present technology will be described using, as an example, the case where it is applied to HEVC (High Efficiency Video Coding) image encoding and decoding.
In the AVC (Advanced Video Coding) scheme, a hierarchical structure of macroblocks and sub-macroblocks is defined. However, a macroblock of 16 × 16 pixels is not optimal for the large picture frames, such as UHD (Ultra High Definition; 4000 × 2000 pixels), that are targets of next-generation coding schemes.
Incidentally, in the AVC and HEVC coding schemes, the selection of an appropriate prediction mode is important for achieving higher coding efficiency.
Conventional image coding schemes such as MPEG-2 and AVC have had a scalability function, as shown in FIGS. 2 to 4. Scalable coding (hierarchical coding) is a scheme in which an image is divided into a plurality of layers (hierarchized) and encoded layer by layer.
The scalable extension of HEVC is specified in Non-Patent Document 2. In Non-Patent Document 1 and Non-Patent Document 2, as shown in FIG. 5, layer_id is specified in the NAL_unit_header and, as shown in FIG. 6, the number of layers is specified in the VPS (Video_Parameter_Set).
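As background for the layer-count extension discussed later, the following minimal sketch (Python; not part of the cited specifications, and the example bytes are merely illustrative) shows how the fixed-length fields of the HEVC NAL unit header are read. Because the layer_id field (nuh_layer_id in the HEVC syntax) occupies only 6 bits, a bitstream can distinguish at most 64 layer ids (0 to 63) without an extension mechanism.

```python
def parse_nal_unit_header(two_bytes: bytes) -> dict:
    """Parse the 2-byte HEVC NAL unit header (minimal sketch, no error handling)."""
    value = int.from_bytes(two_bytes, "big")
    return {
        "forbidden_zero_bit":    (value >> 15) & 0x1,
        "nal_unit_type":         (value >> 9) & 0x3F,   # 6 bits
        "nuh_layer_id":          (value >> 3) & 0x3F,   # 6 bits -> layer ids 0..63 only
        "nuh_temporal_id_plus1": value & 0x7,            # 3 bits
    }

# Example: a header whose layer id is 0 (byte values chosen only for illustration).
print(parse_nal_unit_header(b"\x40\x01"))
```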
Non-Patent Document 3 proposes the following skip picture. That is, when scalable coding is performed and a skip picture is specified in the enhancement layer, the up-sampled image of the base layer is output as-is and no decoding processing is performed for that picture.
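The behaviour described in the preceding paragraph can be summarised by the following minimal sketch (Python; the helper names upsample and decode_enhancement are assumptions for illustration, not names taken from the cited proposal): when a skip picture is signalled in the enhancement layer, the up-sampled base-layer picture is output as-is and no decoding is run for that picture.

```python
def reconstruct_enhancement_picture(is_skip_picture, base_layer_picture,
                                    coded_data, upsample, decode_enhancement):
    """Minimal sketch of the skip-picture behaviour described above."""
    if is_skip_picture:
        # Output the up-sampled base-layer picture as-is; no decoding is performed.
        return upsample(base_layer_picture)
    # Otherwise decode the enhancement-layer picture normally, using the
    # up-sampled base-layer picture as an inter-layer reference.
    return decode_enhancement(coded_data, reference=upsample(base_layer_picture))
```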
<Scalable encoding device>
FIG. 9 is a block diagram showing a main configuration example of a scalable encoding device.
FIG. 10 is a block diagram showing a main configuration example of the base layer image encoding unit 103 in FIG. 9. As shown in FIG. 10, the base layer image encoding unit 103 includes an A/D conversion unit 111, a screen rearrangement buffer 112, an arithmetic unit 113, an orthogonal transform unit 114, a quantization unit 115, a lossless encoding unit 116, an accumulation buffer 117, an inverse quantization unit 118, and an inverse orthogonal transform unit 119. The base layer image encoding unit 103 also includes an arithmetic unit 120, a deblocking filter 121, a frame memory 122, a selection unit 123, an intra prediction unit 124, a motion prediction/compensation unit 125, a predicted image selection unit 126, and a rate control unit 127. Furthermore, the base layer image encoding unit 103 includes an adaptive offset filter 128 between the deblocking filter 121 and the frame memory 122.
FIG. 11 is a block diagram showing a main configuration example of the enhancement layer image encoding unit 104-2 in FIG. 9. Since the enhancement layer image encoding unit 104-1 has the same configuration as the enhancement layer image encoding unit 104-2 in FIG. 11, its description is omitted. As shown in FIG. 11, the enhancement layer image encoding unit 104-2 has basically the same configuration as the base layer image encoding unit 103 in FIG. 10.
Next, a skip picture, which is one piece of inter-layer information according to the present technology, will be described with reference to FIG. 12. In the example of FIG. 12, each square represents a picture, and a cross drawn on a square indicates that the picture is a skip picture.
FIG. 15 is a block diagram showing a main configuration example of the inter-layer information setting unit 140 in FIG. 11.
Next, the flow of each process executed by the scalable encoding device 100 described above will be explained. First, an example of the flow of the encoding process will be described with reference to the flowchart in FIG. 16. The scalable encoding device 100 executes this encoding process for each picture.
Next, an example of the flow of the base layer encoding process executed in step S103 of FIG. 15 will be described with reference to the flowchart in FIG. 17.
Next, an example of the flow of the enhancement layer encoding process executed in step S105 of FIG. 15 will be described with reference to the flowchart in FIG. 18.
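To make the relation between these flowcharts easier to follow, the following minimal sketch (Python; the object and method names are assumptions for illustration and do not appear in the document) outlines the per-picture flow: the base layer is encoded first, and each enhancement layer is then encoded using inter-layer information about the layer it references.

```python
def encode_one_picture(encoder, pictures_per_layer):
    """Sketch of the per-picture encoding flow (names are hypothetical)."""
    streams = [encoder.base_layer.encode(pictures_per_layer[0])]
    for layer_id in range(1, len(pictures_per_layer)):
        # Inter-layer information, e.g. whether the referenced layer's picture
        # is a skip picture, is set before the enhancement layer is encoded.
        info = encoder.set_inter_layer_information(layer_id)
        streams.append(encoder.enhancement_layers[layer_id].encode(
            pictures_per_layer[layer_id], info))
    return streams
```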
Next, an example of the flow of the inter-layer information setting process executed in step S154 of FIG. 18 will be described with reference to the flowchart in FIG. 19.
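In this setting process, the constraint that the present technology places on skip mode can be summarised by the following decision rule (a minimal Python sketch; the string values used for the scalability types are assumptions for illustration). It reflects the rule stated in the configurations listed later in this description: skip mode is prohibited for the target image when the referenced image is itself in skip mode, except for the SNR-scalability case that is explicitly permitted.

```python
def skip_mode_allowed(reference_is_skip, scalability_to_reference,
                      reference_layer_scalability=None):
    """Minimal sketch of the skip-mode constraint described in this document.

    Returns False when the target picture/slice/tile must not be coded in
    skip mode because the referenced lower-layer image is itself in skip mode.
    """
    if not reference_is_skip:
        return True   # nothing to prohibit
    if scalability_to_reference != "spatial":
        return True   # the prohibition applies only to spatial scalability
    if reference_layer_scalability == "snr":
        return True   # exception: the reference layer only refines its own reference by SNR scalability
    return False      # reference image is a skip image -> skip mode prohibited

# Layer 2 references layer 1 (spatial scalability) and layer 1's picture is a
# skip picture -> the layer-2 picture must not be coded as a skip picture.
assert skip_mode_allowed(True, "spatial") is False
# If layer 1 itself only refines layer 0 by SNR scalability, skip mode is allowed.
assert skip_mode_allowed(True, "spatial", "snr") is True
```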
Next, as another application of the inter-layer information according to the present technology, a method of encoding 64 or more layers when performing scalable coding will be described.
FIG. 22 is a block diagram showing another main configuration example of the inter-layer information setting unit 140 in FIG. 11.
Next, an example of the flow of the inter-layer information setting process executed in step S154 of FIG. 18 in this case will be described with reference to the flowchart in FIG. 23.
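A minimal sketch of the signalling side of this method is given below (Python; the dict stands in for an actual bitstream writer, and the way the factor is derived from the layer count is an assumption for illustration — the document only states that (vps_max_layers_minus1 + 1) * (layer_extension_factor_minus1 + 1) gives the number of layers, and that layer_extension_flag gates the extension syntax).

```python
def set_extended_layer_info(num_layers, vps_max_layers_minus1=63):
    """Sketch of VPS / VPS_extension signalling for 64 or more layers."""
    vps = {"vps_max_layers_minus1": vps_max_layers_minus1}
    base = vps_max_layers_minus1 + 1
    if num_layers > base:
        vps["layer_extension_flag"] = 1               # gates the extension syntax
        factor_minus1 = -(-num_layers // base) - 1    # ceil(num_layers / base) - 1 (illustrative choice)
        vps["vps_extension"] = {"layer_extension_factor_minus1": factor_minus1}
    else:
        vps["layer_extension_flag"] = 0
    return vps

print(set_extended_layer_info(128))
# -> layer_extension_factor_minus1 == 1, and 64 * (1 + 1) = 128 layers
```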
<Scalable decoding device>
Next, decoding of encoded data (a bitstream) that has been scalably encoded (hierarchically encoded) as described above will be explained. FIG. 24 is a block diagram showing a main configuration example of a scalable decoding device corresponding to the scalable encoding device 100 in FIG. 9. The scalable decoding device 200 shown in FIG. 24 scalably decodes encoded data obtained by scalable encoding of image data by, for example, the scalable encoding device 100, using a method corresponding to that encoding method.
FIG. 25 is a block diagram showing a main configuration example of the base layer image decoding unit 203 in FIG. 24. As shown in FIG. 25, the base layer image decoding unit 203 includes an accumulation buffer 211, a lossless decoding unit 212, an inverse quantization unit 213, an inverse orthogonal transform unit 214, an arithmetic unit 215, a deblocking filter 216, a screen rearrangement buffer 217, and a D/A conversion unit 218. The base layer image decoding unit 203 also includes a frame memory 219, a selection unit 220, an intra prediction unit 221, a motion compensation unit 222, and a selection unit 223. Furthermore, the base layer image decoding unit 203 includes an adaptive offset filter 224 between the deblocking filter 216 on one side and the screen rearrangement buffer 217 and frame memory 219 on the other.
FIG. 26 is a block diagram showing a main configuration example of the enhancement layer image decoding unit 204-2 in FIG. 24. Since the enhancement layer image decoding unit 204-1 has the same configuration as the enhancement layer image decoding unit 204-2 in FIG. 26, its description is omitted. As shown in FIG. 26, the enhancement layer image decoding unit 204-2 has basically the same configuration as the base layer image decoding unit 203 in FIG. 25.
FIG. 27 is a block diagram showing a main configuration example of the inter-layer information receiving unit 240 in FIG. 26. The inter-layer information receiving unit 240 in FIG. 27 has a configuration corresponding to the inter-layer information setting unit 140 in FIG. 15.
Next, the flow of each process executed by the scalable decoding device 200 described above will be explained. First, an example of the flow of the decoding process will be described with reference to the flowchart in FIG. 28. The scalable decoding device 200 executes this decoding process for each picture.
Next, an example of the flow of the base layer decoding process executed in step S403 of FIG. 28 will be described with reference to the flowchart in FIG. 29.
Next, an example of the flow of the enhancement layer decoding process executed in step S405 of FIG. 28 will be described with reference to the flowchart in FIG. 30.
Next, an example of the flow of the inter-layer information reception process executed in step S455 of FIG. 30 will be described with reference to the flowchart in FIG. 31.
FIG. 32 is a block diagram showing another main configuration example of the inter-layer information receiving unit 240 in FIG. 26. The inter-layer information receiving unit 240 in FIG. 32 has a configuration corresponding to the inter-layer information setting unit 140 in FIG. 22.
Next, an example of the flow of the inter-layer information reception process executed in step S455 of FIG. 30 in this case will be described with reference to the flowchart in FIG. 33.
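On the receiving side, the corresponding layer count can be recovered as in the following minimal sketch (Python; the dict layout mirrors the encoder-side sketch above and is an assumption for illustration, not an actual parser API).

```python
def received_number_of_layers(vps):
    """Sketch of recovering the layer count from the received VPS information."""
    base = vps["vps_max_layers_minus1"] + 1
    if vps.get("layer_extension_flag", 0) == 1:
        # (vps_max_layers_minus1 + 1) * (layer_extension_factor_minus1 + 1)
        return base * (vps["vps_extension"]["layer_extension_factor_minus1"] + 1)
    return base

vps = {"vps_max_layers_minus1": 63, "layer_extension_flag": 1,
       "vps_extension": {"layer_extension_factor_minus1": 1}}
print(received_number_of_layers(vps))  # 128
```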
In the above, image data has been described as being hierarchized into a plurality of layers by scalable coding, but the number of layers is arbitrary. For example, as shown in the example of FIG. 34, only some of the pictures may be hierarchized. Also, while the above description assumed that, in encoding and decoding, an enhancement layer is processed using information of the base layer, the present technology is not limited to this; an enhancement layer may be processed using information of another, already processed enhancement layer.
<Computer>
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
<First application example: television receiver>
FIG. 37 shows an example of a schematic configuration of a television device to which the above-described embodiment is applied. The television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, and a bus 912.
FIG. 38 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied. The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a multiplexing/demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
FIG. 39 shows an example of a schematic configuration of a recording/reproducing device to which the above-described embodiment is applied. The recording/reproducing device 940, for example, encodes audio data and video data of a received broadcast program and records them on a recording medium. The recording/reproducing device 940 may also encode audio data and video data acquired from another device and record them on a recording medium. In addition, the recording/reproducing device 940 reproduces data recorded on the recording medium on a monitor and a speaker in response to a user instruction, for example. At this time, the recording/reproducing device 940 decodes the audio data and the video data.
FIG. 40 shows an example of a schematic configuration of an imaging device to which the above-described embodiment is applied. The imaging device 960 images a subject to generate an image, encodes the image data, and records it on a recording medium.
<First system>
Next, specific usage examples of scalable encoded data produced by scalable coding (hierarchical coding) will be described. Scalable coding is used, for example, for selecting the data to be transmitted, as in the example shown in FIG. 41.
Scalable coding is also used, for example, for transmission via a plurality of communication media, as in the example shown in FIG. 42.
Scalable coding is also used, for example, for storing encoded data, as in the example shown in FIG. 43.
<Other implementation examples>
Examples of devices and systems to which the present technology is applied have been described above, but the present technology is not limited to these. It can also be implemented as any kind of configuration mounted in such a device, or in a device constituting such a system: for example, a processor such as a system LSI (Large Scale Integration), a module using a plurality of processors, a unit using a plurality of modules, or a set in which other functions are further added to a unit (that is, a configuration forming part of a device).
An example in which the present technology is implemented as a set will be described with reference to FIG. 44. FIG. 44 shows an example of a schematic configuration of a video set to which the present technology is applied.
FIG. 45 shows an example of a schematic configuration of the video processor 1332 (FIG. 44) to which the present technology is applied.
FIG. 46 shows another example of a schematic configuration of the video processor 1332 (FIG. 44) to which the present technology is applied. In the case of the example in FIG. 46, the video processor 1332 has a function of encoding and decoding video data in a predetermined scheme.
(1) An image encoding device including: an acquisition unit that, when encoding an image having three or more layers, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be encoded is in skip mode; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and, when the image of the reference layer is in skip mode, prohibits the target image from being encoded in skip mode.
(2) The image encoding device according to (1), wherein the acquisition unit acquires inter-layer information indicating whether a picture of the reference layer referred to by a target picture to be encoded is a skip picture, and the inter-layer information setting unit prohibits the target picture from being encoded as a skip picture when the picture of the reference layer is a skip picture.
(3) The image encoding device according to (1), wherein the acquisition unit acquires inter-layer information indicating whether a slice of the reference layer referred to by a target slice to be encoded is a skip slice, and the inter-layer information setting unit prohibits the target slice from being encoded as a skip slice when the slice of the reference layer is a skip slice.
(4) The image encoding device according to (1), wherein the acquisition unit acquires inter-layer information indicating whether a tile of the reference layer referred to by a target tile to be encoded is a skip tile, and the inter-layer information setting unit prohibits the target tile from being encoded as a skip tile when the tile of the reference layer is a skip tile.
(5) The image encoding device according to any one of (1) to (4), wherein the inter-layer information setting unit prohibits the target image from being encoded in skip mode when the image of the reference layer is in skip mode only in a case where the relation between the reference layer and the target layer to be encoded is spatial scalability.
(6) The image encoding device according to any one of (1) to (5), wherein, in a case where the relation between the reference layer and the target layer to be encoded is spatial scalability but the relation between the reference layer and a layer that the reference layer itself refers to is SNR scalability, the inter-layer information setting unit permits the target image to be encoded in skip mode even when the image of the reference layer is in skip mode.
(7) An image encoding method in which an image encoding device, when encoding an image having three or more layers, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be encoded is in skip mode, and, referring to the acquired inter-layer information, prohibits the target image from being encoded in skip mode when the image of the reference layer is in skip mode.
(8) An image decoding device including: an acquisition unit that, when decoding a bitstream in which an image having three or more layers is encoded, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be decoded is in skip mode; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and, when the image of the reference layer is in skip mode, prohibits the target image from being decoded in skip mode.
(9) The image decoding device according to (8), wherein the acquisition unit acquires inter-layer information indicating whether a picture of the reference layer referred to by a target picture to be decoded is a skip picture, and the inter-layer information setting unit prohibits the target picture from being decoded as a skip picture when the picture of the reference layer is a skip picture.
(10) The image decoding device according to (8), wherein the acquisition unit acquires inter-layer information indicating whether a slice of the reference layer referred to by a target slice to be decoded is a skip slice, and the inter-layer information setting unit prohibits the target slice from being decoded as a skip slice when the slice of the reference layer is a skip slice.
(11) The image decoding device according to (8), wherein the acquisition unit acquires inter-layer information indicating whether a tile of the reference layer referred to by a target tile to be decoded is a skip tile, and the inter-layer information setting unit prohibits the target tile from being decoded as a skip tile when the tile of the reference layer is a skip tile.
(12) The image decoding device according to any one of (8) to (11), wherein the inter-layer information setting unit prohibits the target image from being decoded in skip mode when the image of the reference layer is in skip mode only in a case where the relation between the reference layer and the target layer to be decoded is spatial scalability.
(13) The image decoding device according to any one of (8) to (11), wherein, in a case where the relation between the reference layer and the target layer to be decoded is spatial scalability but the relation between the reference layer and a layer that the reference layer itself refers to is SNR scalability, the inter-layer information setting unit permits the target image to be decoded in skip mode even when the image of the reference layer is in skip mode.
(14) An image decoding method in which an image decoding device, when decoding a bitstream in which an image having three or more layers is encoded, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be decoded is in skip mode, and, referring to the acquired inter-layer information, prohibits the target image from being decoded in skip mode when the image of the reference layer is in skip mode.
(15) An image encoding device including: an acquisition unit that, when encoding an image having 64 or more layers, acquires inter-layer information indicating the number of layers of the image; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and sets information on the extended number of layers in the VPS_extension.
(16) The image encoding device according to (15), wherein the inter-layer information setting unit sets a syntax element layer_extension_factor_minus1 in the VPS_extension, and (vps_max_layers_minus1 + 1) * (layer_extension_factor_minus1 + 1) is the number of layers in the image.
(17) The image encoding device according to (16), wherein the inter-layer information setting unit sets information on layer sets in the VPS_extension when the value of layer_extension_factor_minus1 is other than 0.
(18) The image encoding device according to (16), wherein the inter-layer information setting unit sets layer_extension_flag in the VPS and sets the syntax element layer_extension_factor_minus1 in the VPS_extension only when the value of layer_extension_flag is 1.
(19) An image encoding method in which an image encoding device, when encoding an image having 64 or more layers, acquires inter-layer information indicating the number of layers of the image, and, referring to the acquired inter-layer information, sets information on the extended number of layers in the VPS_extension.
(20) An image decoding device including: a receiving unit that receives, from a bitstream in which an image having 64 or more layers is encoded, information on the extended number of layers set in the VPS_extension; and a decoding unit that performs decoding processing with reference to the information on the extended number of layers received by the receiving unit.
(21) The image decoding device according to (20), wherein the receiving unit receives a syntax element layer_extension_factor_minus1 in the VPS_extension, and (vps_max_layers_minus1 + 1) * (layer_extension_factor_minus1 + 1) is the number of layers in the image.
(22) The image decoding device according to (21), wherein the receiving unit receives information on layer sets in the VPS_extension when the value of layer_extension_factor_minus1 is other than 0.
(23) The image decoding device according to (21), wherein the receiving unit receives layer_extension_flag in the VPS and receives the syntax element layer_extension_factor_minus1 in the VPS_extension only when the value of layer_extension_flag is 1.
(24) An image decoding method in which an image decoding device receives, from a bitstream in which an image having 64 or more layers is encoded, information on the extended number of layers set in the VPS_extension, and performs decoding processing with reference to the received information on the extended number of layers.
Claims (24)
- An image encoding device comprising: an acquisition unit that, when encoding an image having three or more layers, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be encoded is in skip mode; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and, when the image of the reference layer is in skip mode, prohibits the target image from being encoded in skip mode.
- The image encoding device according to claim 1, wherein the acquisition unit acquires inter-layer information indicating whether a picture of the reference layer referred to by a target picture to be encoded is a skip picture, and the inter-layer information setting unit prohibits the target picture from being encoded as a skip picture when the picture of the reference layer is a skip picture.
- The image encoding device according to claim 1, wherein the acquisition unit acquires inter-layer information indicating whether a slice of the reference layer referred to by a target slice to be encoded is a skip slice, and the inter-layer information setting unit prohibits the target slice from being encoded as a skip slice when the slice of the reference layer is a skip slice.
- The image encoding device according to claim 1, wherein the acquisition unit acquires inter-layer information indicating whether a tile of the reference layer referred to by a target tile to be encoded is a skip tile, and the inter-layer information setting unit prohibits the target tile from being encoded as a skip tile when the tile of the reference layer is a skip tile.
- The image encoding device according to claim 1, wherein the inter-layer information setting unit prohibits the target image from being encoded in skip mode when the image of the reference layer is in skip mode only in a case where the relation between the reference layer and the target layer to be encoded is spatial scalability.
- The image encoding device according to claim 1, wherein, in a case where the relation between the reference layer and the target layer to be encoded is spatial scalability but the relation between the reference layer and a layer that the reference layer itself refers to is SNR scalability, the inter-layer information setting unit permits the target image to be encoded in skip mode even when the image of the reference layer is in skip mode.
- An image encoding method in which an image encoding device, when encoding an image having three or more layers, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be encoded is in skip mode, and, referring to the acquired inter-layer information, prohibits the target image from being encoded in skip mode when the image of the reference layer is in skip mode.
- An image decoding device comprising: an acquisition unit that, when decoding a bitstream in which an image having three or more layers is encoded, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be decoded is in skip mode; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and, when the image of the reference layer is in skip mode, prohibits the target image from being decoded in skip mode.
- The image decoding device according to claim 8, wherein the acquisition unit acquires inter-layer information indicating whether a picture of the reference layer referred to by a target picture to be decoded is a skip picture, and the inter-layer information setting unit prohibits the target picture from being decoded as a skip picture when the picture of the reference layer is a skip picture.
- The image decoding device according to claim 8, wherein the acquisition unit acquires inter-layer information indicating whether a slice of the reference layer referred to by a target slice to be decoded is a skip slice, and the inter-layer information setting unit prohibits the target slice from being decoded as a skip slice when the slice of the reference layer is a skip slice.
- The image decoding device according to claim 8, wherein the acquisition unit acquires inter-layer information indicating whether a tile of the reference layer referred to by a target tile to be decoded is a skip tile, and the inter-layer information setting unit prohibits the target tile from being decoded as a skip tile when the tile of the reference layer is a skip tile.
- The image decoding device according to claim 8, wherein the inter-layer information setting unit prohibits the target image from being decoded in skip mode when the image of the reference layer is in skip mode only in a case where the relation between the reference layer and the target layer to be decoded is spatial scalability.
- The image decoding device according to claim 8, wherein, in a case where the relation between the reference layer and the target layer to be decoded is spatial scalability but the relation between the reference layer and a layer that the reference layer itself refers to is SNR scalability, the inter-layer information setting unit permits the target image to be decoded in skip mode even when the image of the reference layer is in skip mode.
- An image decoding method in which an image decoding device, when decoding a bitstream in which an image having three or more layers is encoded, acquires inter-layer information indicating whether an image of a reference layer referred to by a target image to be decoded is in skip mode, and, referring to the acquired inter-layer information, prohibits the target image from being decoded in skip mode when the image of the reference layer is in skip mode.
- An image encoding device comprising: an acquisition unit that, when encoding an image having 64 or more layers, acquires inter-layer information indicating the number of layers of the image; and an inter-layer information setting unit that refers to the inter-layer information acquired by the acquisition unit and sets information on the extended number of layers in the VPS_extension.
- The image encoding device according to claim 15, wherein the inter-layer information setting unit sets a syntax element layer_extension_factor_minus1 in the VPS_extension, and (vps_max_layers_minus1 + 1) * (layer_extension_factor_minus1 + 1) is the number of layers in the image.
- The image encoding device according to claim 16, wherein the inter-layer information setting unit sets information on layer sets in the VPS_extension when the value of layer_extension_factor_minus1 is other than 0.
- The image encoding device according to claim 16, wherein the inter-layer information setting unit sets layer_extension_flag in the VPS and sets the syntax element layer_extension_factor_minus1 in the VPS_extension only when the value of layer_extension_flag is 1.
- An image encoding method in which an image encoding device, when encoding an image having 64 or more layers, acquires inter-layer information indicating the number of layers of the image, and, referring to the acquired inter-layer information, sets information on the extended number of layers in the VPS_extension.
- An image decoding device comprising: a receiving unit that receives, from a bitstream in which an image having 64 or more layers is encoded, information on the extended number of layers set in the VPS_extension; and a decoding unit that performs decoding processing with reference to the information on the extended number of layers received by the receiving unit.
- The image decoding device according to claim 20, wherein the receiving unit receives a syntax element layer_extension_factor_minus1 in the VPS_extension, and (vps_max_layers_minus1 + 1) * (layer_extension_factor_minus1 + 1) is the number of layers in the image.
- The image decoding device according to claim 21, wherein the receiving unit receives information on layer sets in the VPS_extension when the value of layer_extension_factor_minus1 is other than 0.
- The image decoding device according to claim 21, wherein the receiving unit receives layer_extension_flag in the VPS and receives the syntax element layer_extension_factor_minus1 in the VPS_extension only when the value of layer_extension_flag is 1.
- An image decoding method in which an image decoding device receives, from a bitstream in which an image having 64 or more layers is encoded, information on the extended number of layers set in the VPS_extension, and performs decoding processing with reference to the received information on the extended number of layers.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015554740A JPWO2015098563A1 (ja) | 2013-12-27 | 2014-12-12 | 画像符号化装置および方法、並びに画像復号装置および方法 |
US15/034,007 US20160286218A1 (en) | 2013-12-27 | 2014-12-12 | Image encoding device and method, and image decoding device and method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-272942 | 2013-12-27 | ||
JP2013272942 | 2013-12-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015098563A1 true WO2015098563A1 (ja) | 2015-07-02 |
Family
ID=53478427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/082924 WO2015098563A1 (ja) | 2013-12-27 | 2014-12-12 | 画像符号化装置および方法、並びに画像復号装置および方法 |
Country Status (3)
Country | Link |
---|---|
US (1) | US20160286218A1 (ja) |
JP (1) | JPWO2015098563A1 (ja) |
WO (1) | WO2015098563A1 (ja) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107301063B (zh) * | 2017-05-10 | 2020-07-31 | 北京奇艺世纪科技有限公司 | 一种镜像管理方法及装置 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007324923A (ja) * | 2006-05-31 | 2007-12-13 | Sharp Corp | Mpeg画質補正装置、及びmpeg画質補正方法 |
2014
- 2014-12-12 WO PCT/JP2014/082924 patent/WO2015098563A1/ja active Application Filing
- 2014-12-12 JP JP2015554740A patent/JPWO2015098563A1/ja active Pending
- 2014-12-12 US US15/034,007 patent/US20160286218A1/en not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007324923A (ja) * | 2006-05-31 | 2007-12-13 | Sharp Corp | Mpeg画質補正装置、及びmpeg画質補正方法 |
Non-Patent Citations (3)
Title |
---|
JIANLE CHEN ET AL.: "MV-HEVC/SHVC HLS: On signaling of enhancement layer skip picture", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-00265, ITU-T, pages 1 - 5 *
JONATAN SAMUELSSON ET AL.: "SHVC skip pictures", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-00199, ITU-T, pages 1 - 5 *
KARSTEN SUEHRING ET AL.: "3D/MV-HEVC HLS: Study and proposal of methods for extending the supported number of layers", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-00200, ITU-T, pages 1 - 13 *
Also Published As
Publication number | Publication date |
---|---|
US20160286218A1 (en) | 2016-09-29 |
JPWO2015098563A1 (ja) | 2017-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10142634B2 (en) | Image processing apparatus and method | |
JP6358475B2 (ja) | 画像復号装置および方法、並びに、画像符号化装置および方法 | |
JP6287035B2 (ja) | 復号装置および復号方法 | |
JP6607414B2 (ja) | 画像符号化装置および方法 | |
JP6579393B2 (ja) | 画像処理装置および方法 | |
WO2015137145A1 (ja) | 画像符号化装置および方法、並びに画像復号装置および方法 | |
WO2015098561A1 (ja) | 復号装置および復号方法、並びに、符号化装置および符号化方法 | |
WO2015053116A1 (ja) | 復号装置および復号方法、並びに、符号化装置および符号化方法 | |
WO2014050731A1 (ja) | 画像処理装置および方法 | |
JP2015005899A (ja) | 復号装置および復号方法、並びに、符号化装置および符号化方法 | |
US9930353B2 (en) | Image decoding device and method | |
WO2015064402A1 (ja) | 画像処理装置および方法 | |
WO2015064403A1 (ja) | 画像処理装置および方法 | |
WO2014103764A1 (ja) | 画像処理装置および方法 | |
JP6233655B2 (ja) | 画像処理装置および方法 | |
JP6150134B2 (ja) | 画像符号化装置および方法、画像復号装置および方法、プログラム、並びに記録媒体 | |
JP6477930B2 (ja) | 符号化装置および符号化方法 | |
WO2014141899A1 (ja) | 画像処理装置および方法 | |
WO2015098563A1 (ja) | 画像符号化装置および方法、並びに画像復号装置および方法 | |
JPWO2014002900A1 (ja) | 画像処理装置および画像処理方法 | |
WO2014162916A1 (ja) | 画像符号化装置および方法、並びに、画像復号装置および方法 | |
WO2015064401A1 (ja) | 画像処理装置および方法 | |
JP2015050738A (ja) | 復号装置および復号方法、並びに、符号化装置および符号化方法 | |
WO2014156707A1 (ja) | 画像符号化装置および方法、並びに、画像復号装置および方法 | |
WO2014141964A1 (ja) | 画像処理装置および方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14873787 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15034007 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2015554740 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14873787 Country of ref document: EP Kind code of ref document: A1 |