WO2013005659A1 - Image processing apparatus and method - Google Patents
Image processing apparatus and method
- Publication number
- WO2013005659A1 (PCT/JP2012/066647)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- unit
- parameter
- coding
- decoding
- image
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/91—Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
Definitions
- the present disclosure relates to an image processing apparatus and method, and more particularly to an image processing apparatus and method capable of performing image encoding processing and decoding processing at high speed.
- CABAC Context-Adaptive Binary Arithmetic Coding
- CAVLC Context-Adaptive Variable Length Coding
- in CABAC, when parsing slice data, the context values, the encoding processing engine, and the decoding processing engine are initialized at the beginning of the slice data (see, for example, Non-Patent Document 1).
- the Joint Collaboration Team - Video Coding (JCT-VC), a joint standardization body of ITU-T and ISO / IEC, aims to improve coding efficiency beyond the H.264 / AVC standard, and standardization of a coding scheme called HEVC (High Efficiency Video Coding) is in progress. Non-Patent Document 2 has been issued as a draft of this scheme.
- an adaptive loop filter (see, for example, Non-Patent Document 3) and an adaptive offset filter (see, for example, Non-Patent Document 4) have been proposed, and the adaptive offset filter is provided between the deblocking filter and the adaptive loop filter.
- however, Non-Patent Document 2 does not sufficiently consider the timing of the arithmetic coding process or the arithmetic decoding process performed when parsing the header. Therefore, when trying to carry out the method proposed in Non-Patent Document 2, a delay arises in association with the arithmetic coding process or the arithmetic decoding process.
- the present disclosure has been made in view of such a situation, and can perform image encoding processing and decoding processing at high speed.
- an image processing apparatus according to one aspect of the present disclosure includes a receiving unit that receives, from a coded stream in which image data is coded in units having a hierarchical structure, arithmetic coding parameters that have been subjected to arithmetic coding processing and are collectively arranged in the syntax of the coded stream, and a decoding unit that performs arithmetic decoding processing on the received arithmetic coding parameters and decodes the coded stream using the decoded parameters.
- coding parameters subjected to variable length coding processing or fixed length coding processing are also collectively arranged in the syntax of the coded stream; the receiving unit can receive these coding parameters from the coded stream, and the decoding unit can decode the coding parameters received by the receiving unit and decode the coded stream using the decoded coding parameters.
- the arithmetic coding parameter is arranged after the coding parameter in the syntax of the coding stream.
- initialization parameters used when initializing the arithmetic coding process or arithmetic decoding process are collectively arranged in the syntax of the coded stream; the receiving unit can receive the initialization parameters from the coded stream, and a control unit can control the decoding unit to initialize the arithmetic decoding process with reference to the initialization parameters received by the receiving unit.
- the arithmetic coding parameter is a parameter for controlling the coding process or the decoding process at the picture level or slice level.
- the arithmetic coding parameter is a parameter of a filter used when performing encoding processing or decoding processing.
- the parameters of the adaptive loop filter and the parameters of the adaptive offset filter are collectively arranged at the top of the slice data of the coded stream, and the receiving unit can receive the parameters of the adaptive loop filter and the parameters of the adaptive offset filter from the top of the slice data of the coded stream.
- the parameters of the adaptive loop filter and the parameters of the adaptive offset filter are collectively arranged at the end of the slice header of the coded stream, and the receiving unit can receive the parameters of the adaptive loop filter and the parameters of the adaptive offset filter from the end of the slice header of the coded stream.
- the initialization parameter is disposed near the top of the slice header of the coded stream, and the receiving unit can receive the initialization parameter from near the top of the slice header of the coded stream.
- in one aspect of the present disclosure, arithmetic coding parameters subjected to arithmetic coding processing are collectively arranged in the syntax of a coded stream in which image data is coded in units having a hierarchical structure. The arranged arithmetic coding parameters are received from the coded stream, the received arithmetic coding parameters are subjected to arithmetic decoding processing, and the received coded stream is decoded using the decoded arithmetic coding parameters.
- an image processing apparatus according to another aspect of the present disclosure includes an encoding unit that encodes image data in units having a hierarchical structure to generate an encoded stream, an arrangement unit that collectively arranges, in the syntax of the encoded stream generated by the encoding unit, arithmetic coding parameters to be subjected to arithmetic coding processing, and a transmission unit that transmits the encoded stream generated by the encoding unit and the arithmetic coding parameters arranged by the arrangement unit.
- the arrangement unit may collectively arrange coding parameters to be subjected to variable length coding processing or fixed length coding processing, and the transmission unit may transmit the coding parameters arranged by the arrangement unit.
- the placement unit may place the arithmetic coding parameter after the coding parameter.
- the arrangement unit can collectively arrange initialization parameters to be used when initializing the arithmetic coding process or arithmetic decoding process, and the transmission unit can transmit the initialization parameters arranged by the arrangement unit.
- the arithmetic coding parameter is a parameter for controlling the coding process or the decoding process at the picture level or slice level.
- the arithmetic coding parameter is a parameter of a filter used when performing encoding processing or decoding processing.
- the arrangement unit may collectively arrange the parameters of the adaptive loop filter and the parameters of the adaptive offset filter at the top of slice data of the encoded stream, and the transmission unit may transmit the parameters of the adaptive loop filter and the parameters of the adaptive offset filter arranged by the arrangement unit.
- the arrangement unit may collectively arrange the parameters of the adaptive loop filter and the parameters of the adaptive offset filter at the end of a slice header of the encoded stream, and the transmission unit may transmit the parameters of the adaptive loop filter and the parameters of the adaptive offset filter arranged by the arrangement unit.
- the arrangement unit may arrange the initialization parameter near the top of a slice header of the encoded stream.
- the image processing apparatus encodes image data in units having a hierarchical structure to generate an encoded stream, and performs arithmetic on the syntax of the generated encoded stream. Arithmetic coding parameters to be coded are collectively arranged, and the generated coded stream and the arranged arithmetic coding parameters are transmitted.
- in one aspect of the present disclosure, from a coded stream in which image data is coded in units having a hierarchical structure and in whose syntax arithmetically coded arithmetic coding parameters are collectively arranged, the arithmetic coding parameters are received, the received arithmetic coding parameters are arithmetically decoded, and the received coded stream is decoded using the arithmetic coding parameters subjected to the arithmetic decoding processing.
- a coded stream is generated by coding image data in units having a hierarchical structure. Then, in the syntax of the generated coded stream, arithmetic coding parameters to be subjected to the arithmetic coding process are collectively arranged, and the generated coded stream and the arranged arithmetic coding parameters are transmitted.
- the above-described image processing apparatus may be an independent apparatus, or may be an image coding apparatus or an internal block constituting an image decoding apparatus.
- an image can be decoded.
- the decoding process can be performed at high speed.
- an image can be encoded.
- the encoding process can be performed at high speed.
- FIG. 24 is a diagram illustrating an example main configuration of a multi-viewpoint image decoding device to which the present technology is applied. Further figures show an example of a hierarchical image coding system, main configuration examples of a hierarchical image coding apparatus and a hierarchical image decoding apparatus to which the present technology is applied, and block diagrams showing schematic configuration examples of a computer, a television set, a mobile telephone, a recording and reproducing apparatus, and an imaging device.
- FIG. 1 shows a configuration of an embodiment of an image coding apparatus as an image processing apparatus to which the present disclosure is applied.
- the image coding apparatus 100 shown in FIG. 1 codes image data using a prediction process.
- as the coding method, for example, H.264 and MPEG (Moving Picture Experts Group) 4 Part 10 (AVC (Advanced Video Coding)) (hereinafter referred to as H.264 / AVC), HEVC (High Efficiency Video Coding), or the like is used.
- AVC Advanced Video Coding
- HEVC High Efficiency Video Coding
- the image coding apparatus 100 includes an A / D (Analog / Digital) conversion unit 101, a screen rearrangement buffer 102, an operation unit 103, an orthogonal transformation unit 104, a quantization unit 105, a lossless coding unit 106, and an accumulation buffer 107.
- the image coding apparatus 100 further includes an inverse quantization unit 108, an inverse orthogonal transformation unit 109, an operation unit 110, a deblocking filter 111, a frame memory 112, a selection unit 113, an intra prediction unit 114, a motion prediction / compensation unit 115, a prediction image selection unit 116, and a rate control unit 117.
- the image coding apparatus 100 further includes an adaptive offset unit 121 and an adaptive loop filter 122.
- the A / D converter 101 A / D converts the input image data, and outputs the image data to the screen rearrangement buffer 102 for storage.
- the screen rearrangement buffer 102 rearranges the images of frames in the stored display order into the order of frames for encoding in accordance with the GOP (Group of Picture) structure.
- the screen rearrangement buffer 102 supplies the image in which the order of the frames is rearranged to the calculation unit 103.
- the screen rearrangement buffer 102 also supplies the image in which the order of the frames is rearranged to the intra prediction unit 114 and the motion prediction / compensation unit 115.
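The reordering performed by the screen rearrangement buffer 102 can be illustrated with a toy model. The sketch below assumes a simple repeating I B B P pattern in which each anchor (P) frame must be coded before the two B frames that reference it; the function name and the GOP structure are illustrative, not taken from the document:

```python
def display_to_coding_order(frames):
    """Reorder display-order frames for an I B B P pattern: the leading I
    frame stays first, then each anchor (P) frame is moved ahead of the
    two B frames that reference it."""
    order = [frames[0]]
    i = 1
    while i < len(frames):
        group = frames[i:i + 3]
        if len(group) == 3:
            b1, b2, p = group
            order += [p, b1, b2]   # anchor first, then the B frames
        else:
            order += group         # incomplete tail group: keep display order
        i += 3
    return order
```

For example, display order 0..6 becomes [0, 3, 1, 2, 6, 4, 5]: frame 3 (P) is coded before frames 1 and 2 (B) that predict from it.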
- the operation unit 103 subtracts the predicted image supplied from the intra prediction unit 114 or the motion prediction / compensation unit 115 via the predicted image selection unit 116 from the image read from the screen rearrangement buffer 102, and outputs the difference information to the orthogonal transformation unit 104.
- for example, in the case of an image on which intra coding is performed, the operation unit 103 subtracts the predicted image supplied from the intra prediction unit 114 from the image read from the screen rearrangement buffer 102. Also, for example, in the case of an image on which inter coding is performed, the operation unit 103 subtracts the predicted image supplied from the motion prediction / compensation unit 115 from the image read from the screen rearrangement buffer 102.
- the orthogonal transformation unit 104 performs an orthogonal transformation such as a discrete cosine transformation or a Karhunen-Loeve transformation on the difference information supplied from the operation unit 103, and supplies the transform coefficients to the quantization unit 105.
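As a rough illustration of such an orthogonal transformation (here the floating-point DCT-II built from first principles, not any standardized integer transform), the following sketch computes the 2-D transform C * B * C^T and its inverse; for a flat block all of the energy concentrates in the DC coefficient:

```python
import math

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are basis vectors)."""
    return [[(math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n))
             * math.cos(math.pi * (2 * x + 1) * k / (2 * n))
             for x in range(n)] for k in range(n)]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

def dct2d(blk):
    """2-D transform: C * blk * C^T."""
    c = dct_matrix(len(blk))
    return matmul(matmul(c, blk), transpose(c))

def idct2d(coefs):
    """Inverse 2-D transform: C^T * coefs * C (C is orthonormal)."""
    c = dct_matrix(len(coefs))
    return matmul(matmul(transpose(c), coefs), c)

src = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
restored = idct2d(dct2d(src))                    # round trip
flat_coefs = dct2d([[8] * 4 for _ in range(4)])  # flat block: DC only
```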
- the quantization unit 105 quantizes the transform coefficient output from the orthogonal transform unit 104.
- the quantization unit 105 supplies the quantized transform coefficient to the lossless encoding unit 106.
- the lossless coding unit 106 performs lossless coding such as variable length coding and arithmetic coding on the quantized transform coefficients.
- the lossless encoding unit 106 acquires parameters such as information indicating an intra prediction mode from the intra prediction unit 114, and acquires parameters such as information indicating an inter prediction mode and motion vector information from the motion prediction / compensation unit 115.
- the lossless encoding unit 106 acquires the parameters of the adaptive offset filter from the adaptive offset unit 121, and acquires the parameters of the adaptive loop filter from the adaptive loop filter 122.
- the lossless encoding unit 106 encodes the quantized transform coefficient, and encodes each acquired parameter (syntax element) to be part of header information of encoded data (multiplexing).
- the lossless encoding unit 106 supplies the encoded data obtained by the encoding to the accumulation buffer 107 for accumulation.
- in the lossless encoding unit 106, lossless encoding processing such as variable-length coding or arithmetic coding is performed. Examples of the variable-length coding include CAVLC (Context-Adaptive Variable Length Coding), and examples of the arithmetic coding include CABAC (Context-Adaptive Binary Arithmetic Coding).
- CABAC is a binary arithmetic coding scheme that adaptively codes according to the surrounding context (Context).
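The principle of context-adaptive binary arithmetic coding can be sketched as follows. This is a simplified textbook-style integer coder (Witten-Neal-Cleary scheme) with an order-1 context model, not the actual CABAC algorithm, which uses table-driven probability states and bypass modes; all names are illustrative:

```python
HALF, QUARTER, THREEQ, TOP = 0x8000, 0x4000, 0xC000, 0xFFFF

class ArithEncoder:
    """16-bit integer binary arithmetic encoder."""
    def __init__(self):
        self.low, self.high, self.pending, self.bits = 0, TOP, 0, []

    def _emit(self, b):
        self.bits.append(b)
        while self.pending:            # flush bits deferred by underflow handling
            self.bits.append(1 - b)
            self.pending -= 1

    def encode(self, bit, c0, c1):
        span = self.high - self.low + 1
        mid = self.low + span * c0 // (c0 + c1)  # boundary of the 0/1 sub-intervals
        if bit == 0:
            self.high = mid - 1
        else:
            self.low = mid
        while True:                    # renormalize
            if self.high < HALF:
                self._emit(0)
            elif self.low >= HALF:
                self._emit(1)
                self.low -= HALF; self.high -= HALF
            elif self.low >= QUARTER and self.high < THREEQ:
                self.pending += 1      # underflow: defer the output bit
                self.low -= QUARTER; self.high -= QUARTER
            else:
                break
            self.low *= 2
            self.high = self.high * 2 + 1

    def finish(self):
        self.pending += 1
        self._emit(0 if self.low < QUARTER else 1)
        return self.bits

class ArithDecoder:
    """Decoder mirroring ArithEncoder; missing tail bits are read as 0."""
    def __init__(self, bits):
        self.bits, self.pos = bits, 0
        self.low, self.high, self.code = 0, TOP, 0
        for _ in range(16):
            self.code = self.code * 2 + self._next()

    def _next(self):
        b = self.bits[self.pos] if self.pos < len(self.bits) else 0
        self.pos += 1
        return b

    def decode(self, c0, c1):
        span = self.high - self.low + 1
        mid = self.low + span * c0 // (c0 + c1)
        if self.code < mid:
            bit, self.high = 0, mid - 1
        else:
            bit, self.low = 1, mid
        while True:                    # renormalize exactly as the encoder does
            if self.high < HALF:
                pass
            elif self.low >= HALF:
                self.low -= HALF; self.high -= HALF; self.code -= HALF
            elif self.low >= QUARTER and self.high < THREEQ:
                self.low -= QUARTER; self.high -= QUARTER; self.code -= QUARTER
            else:
                break
            self.low *= 2
            self.high = self.high * 2 + 1
            self.code = self.code * 2 + self._next()
        return bit

def cabac_like_roundtrip(bits_in):
    """Code a bit string with an order-1 adaptive context model: the previous
    bit selects which pair of symbol counts is used, and both sides update
    the counts identically, so they stay synchronized."""
    counts = [[1, 1], [1, 1]]          # counts[context] = [zeros seen, ones seen]
    enc, prev = ArithEncoder(), 0
    for b in bits_in:
        enc.encode(b, counts[prev][0], counts[prev][1])
        counts[prev][b] += 1
        prev = b
    stream = enc.finish()
    counts = [[1, 1], [1, 1]]          # decoder restarts from the same state
    dec, prev, out = ArithDecoder(stream), 0, []
    for _ in bits_in:
        b = dec.decode(counts[prev][0], counts[prev][1])
        counts[prev][b] += 1
        prev = b
        out.append(b)
    return stream, out
```

For a strongly biased input the adaptive model quickly skews the interval split, so the output stream is much shorter than the input bit string.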
- some of the parameters to be sent to the decoding side described above are large in data amount if transmitted by variable-length coding or fixed-length coding, and are therefore compressed by arithmetic coding (CABAC) and transmitted.
- on the other hand, parameters referenced by conditional (if) statements in the syntax are desirably variable-length encoded or fixed-length encoded.
- therefore, the lossless encoding unit 106 performs arithmetic encoding on parameters that require arithmetic encoding, and performs variable-length encoding or fixed-length encoding on the other parameters.
- the accumulation buffer 107 temporarily holds the encoded data supplied from the lossless encoding unit 106, and outputs it at a predetermined timing, as an encoded image, to, for example, a recording device or a transmission path (not shown) at a subsequent stage.
- the transform coefficient quantized in the quantization unit 105 is also supplied to the inverse quantization unit 108.
- the inverse quantization unit 108 inversely quantizes the quantized transform coefficient by a method corresponding to the quantization by the quantization unit 105.
- the inverse quantization unit 108 supplies the obtained transform coefficient to the inverse orthogonal transform unit 109.
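The quantization and inverse quantization pair can be sketched as uniform scalar quantization (the actual schemes are QP-driven and more elaborate; the step size and values below are illustrative). The reconstruction error of each coefficient is bounded by half the quantization step:

```python
def quantize(coeff, step):
    """Uniform scalar quantization: transform coefficient -> integer level."""
    return int(round(coeff / step))

def dequantize(level, step):
    """Inverse quantization: integer level -> reconstructed coefficient."""
    return level * step

coeffs = [31.7, -12.2, 4.9, 0.3]
step = 8
levels = [quantize(c, step) for c in coeffs]   # what the encoder transmits
recon = [dequantize(l, step) for l in levels]  # what the decoder reconstructs
```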
- the inverse orthogonal transform unit 109 performs inverse orthogonal transform on the supplied transform coefficient by a method corresponding to orthogonal transform processing by the orthogonal transform unit 104.
- the inverse orthogonal transform output (restored difference information) is supplied to the calculation unit 110.
- the calculation unit 110 adds the predicted image supplied from the intra prediction unit 114 or the motion prediction / compensation unit 115 via the predicted image selection unit 116 to the inverse orthogonal transformation result supplied from the inverse orthogonal transformation unit 109, that is, the restored difference information, to obtain a locally decoded image (decoded image).
- for example, when the difference information corresponds to an image on which intra coding is performed, the calculation unit 110 adds the prediction image supplied from the intra prediction unit 114 to the difference information. Also, for example, when the difference information corresponds to an image on which inter coding is performed, the calculation unit 110 adds the predicted image supplied from the motion prediction / compensation unit 115 to the difference information.
- the addition result is supplied to the deblocking filter 111 and the frame memory 112.
- the deblocking filter 111 removes block distortion of the decoded image by appropriately performing deblocking filter processing.
- the deblocking filter 111 supplies the filter processing result to the adaptive offset unit 121.
- the frame memory 112 outputs the accumulated reference image to the intra prediction unit 114 or the motion prediction / compensation unit 115 via the selection unit 113 at a predetermined timing.
- the frame memory 112 supplies the reference image to the intra prediction unit 114 via the selection unit 113. Also, for example, when inter coding is performed, the frame memory 112 supplies the reference image to the motion prediction / compensation unit 115 via the selection unit 113.
- when the reference image supplied from the frame memory 112 is an image to be subjected to intra coding, the selection unit 113 supplies the reference image to the intra prediction unit 114. Further, when the reference image supplied from the frame memory 112 is an image to be subjected to inter coding, the selection unit 113 supplies the reference image to the motion prediction / compensation unit 115.
- the intra prediction unit 114 performs intra prediction (in-screen prediction) that generates a predicted image using pixel values in the screen.
- the intra prediction unit 114 performs intra prediction in a plurality of modes (intra prediction modes).
- the intra prediction unit 114 generates prediction images in all intra prediction modes, evaluates each prediction image, and selects an optimal mode. When the optimal intra prediction mode is selected, the intra prediction unit 114 supplies the predicted image generated in the optimal mode to the calculation unit 103 and the calculation unit 110 via the predicted image selection unit 116.
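This mode decision can be sketched as follows. The toy model evaluates only three candidate modes (vertical, horizontal, DC) by sum of absolute differences; the real intra prediction unit evaluates all intra prediction modes with a more elaborate cost:

```python
def intra_predict(mode, top, left, n=4):
    """Toy predictions from the neighbouring pixels for three of the modes."""
    if mode == "vertical":                  # copy the row above downwards
        return [list(top) for _ in range(n)]
    if mode == "horizontal":                # copy the left column rightwards
        return [[left[y]] * n for y in range(n)]
    dc = (sum(top) + sum(left)) // (2 * n)  # DC: mean of the neighbours
    return [[dc] * n for _ in range(n)]

def best_intra_mode(blk, top, left):
    """Pick the candidate mode with the smallest sum of absolute differences."""
    n = len(blk)
    def sad(pred):
        return sum(abs(blk[y][x] - pred[y][x]) for y in range(n) for x in range(n))
    return min(("vertical", "horizontal", "dc"),
               key=lambda m: sad(intra_predict(m, top, left, n)))
```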
- the intra prediction unit 114 appropriately supplies the lossless encoding unit 106 with parameters such as intra prediction mode information indicating the adopted intra prediction mode.
- the motion prediction / compensation unit 115 performs motion prediction on an image to be inter coded using the input image supplied from the screen rearrangement buffer 102 and the reference image supplied from the frame memory 112 via the selection unit 113, performs motion compensation processing according to the detected motion vector, and generates a predicted image (inter predicted image information).
- the motion prediction / compensation unit 115 performs inter prediction processing of all the candidate inter prediction modes to generate a prediction image.
- the motion prediction / compensation unit 115 supplies the generated predicted image to the calculation unit 103 or the calculation unit 110 via the predicted image selection unit 116.
- the motion prediction / compensation unit 115 supplies the lossless encoding unit 106 with parameters such as inter prediction mode information indicating the adopted inter prediction mode and motion vector information indicating the calculated motion vector.
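The motion search itself can be sketched as exhaustive block matching (full search) over a small window; real encoders typically use faster search patterns and sub-pixel refinement, so this is only a schematic with illustrative data:

```python
def sad_block(frame, x, y, blk, n):
    """Sum of absolute differences between blk and the n x n patch at (x, y)."""
    return sum(abs(frame[y + j][x + i] - blk[j][i])
               for j in range(n) for i in range(n))

def full_search(ref, blk, cx, cy, rng, n=2):
    """Exhaustive block matching within +/- rng pixels of (cx, cy);
    returns (best SAD, mv_x, mv_y)."""
    best = None
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            x, y = cx + dx, cy + dy
            if 0 <= x <= len(ref[0]) - n and 0 <= y <= len(ref) - n:
                cost = sad_block(ref, x, y, blk, n)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best

ref = [[3, 1, 4, 1, 5, 9],
       [2, 6, 5, 3, 5, 8],
       [9, 7, 9, 3, 2, 3],
       [8, 4, 6, 2, 6, 4],
       [3, 3, 8, 3, 2, 7],
       [9, 5, 0, 2, 8, 8]]
cur = [[ref[2 + j][3 + i] for i in range(2)] for j in range(2)]  # patch at (3, 2)
```

Searching around (2, 2) finds the patch one pixel to the right, i.e. motion vector (1, 0) with SAD 0.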
- the predicted image selection unit 116 supplies the output of the intra prediction unit 114 to the calculation unit 103 and the calculation unit 110 in the case of an image to be subjected to intra coding, and supplies the output of the motion prediction / compensation unit 115 to the calculation unit 103 and the calculation unit 110 in the case of an image to be subjected to inter coding.
- the rate control unit 117 controls the rate of the quantization operation of the quantization unit 105 based on the compressed images accumulated in the accumulation buffer 107 so that overflow or underflow does not occur.
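A minimal sketch of such rate control is a proportional controller that raises the quantization parameter when the buffer runs fuller than a target occupancy (the gain, target, and QP range below are illustrative, not values from the document):

```python
def update_qp(qp, buffer_fullness, target=0.5, gain=8, qp_min=0, qp_max=51):
    """Proportional control: a fuller-than-target buffer raises QP (coarser
    quantization, fewer bits); an emptier one lowers it. QP is clamped to
    the allowed range."""
    qp += int(round(gain * (buffer_fullness - target)))
    return max(qp_min, min(qp_max, qp))
```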
- the adaptive offset unit 121 performs offset filter processing on the decoded image (baseband information after local decoding) from the deblocking filter 111. That is to say, the adaptive offset unit 121 performs quad-tree area division using the decoded image, and determines the quad-tree structure by determining the type of offset for each divided area. There are nine types of offsets: two band offsets, six edge offsets, and no offset. The adaptive offset unit 121 calculates an offset value for each divided area with reference to the quad-tree structure.
- the adaptive offset unit 121 performs offset processing on the decoded image from the deblocking filter 111 using the determined quad-tree structure and the offset value. Then, the adaptive offset unit 121 supplies the image after the offset processing to the adaptive loop filter 122. Further, the adaptive offset unit 121 supplies the determined quad-tree structure and the calculated offset value to the lossless encoding unit 106 as parameters of the adaptive offset filter.
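The band-offset case of this adaptive offset filtering can be sketched as follows: pixels are classified into intensity bands, and one offset per band is derived as the mean original-minus-decoded error (the band count and the integer rounding rule are illustrative):

```python
def band_index(pixel, num_bands=32, max_val=256):
    """Classify a pixel value into one of num_bands equal intensity bands."""
    return pixel * num_bands // max_val

def band_offsets(original, decoded, num_bands=32):
    """One offset per band: the mean (original - decoded) error of the
    decoded pixels falling in that band (integer floor division)."""
    sums, counts = [0] * num_bands, [0] * num_bands
    for o, d in zip(original, decoded):
        b = band_index(d)
        sums[b] += o - d
        counts[b] += 1
    return [s // c if c else 0 for s, c in zip(sums, counts)]

def apply_band_offsets(decoded, offsets):
    return [d + offsets[band_index(d)] for d in decoded]

original = [100, 101, 102, 200, 201]
decoded = [97, 98, 99, 204, 205]   # local decode drifted by a few levels
offsets = band_offsets(original, decoded)
corrected = apply_band_offsets(decoded, offsets)
```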
- the adaptive loop filter 122 calculates an adaptive loop filter coefficient so as to minimize the residual with the original image from the screen rearrangement buffer 102, and uses the adaptive loop filter coefficient to decode from the adaptive offset unit 121. Filter the image. As this filter, for example, a Wiener filter is used. The adaptive loop filter 122 supplies the filtered image to the frame memory 112.
- the adaptive loop filter 122 sends the calculated adaptive loop filter coefficient to the lossless encoding unit 106 as a parameter of the adaptive loop filter.
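The Wiener-filter computation can be sketched in one dimension with a 3-tap filter: the coefficients minimizing the squared residual against the original are obtained from the normal equations R h = p (the tap count, border clamping rule, and example data are illustrative):

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[r][3] / m[r][r] for r in range(3)]

def _tap(signal, i, k):
    """Sample signal[i + k] with the borders clamped (an assumed border rule)."""
    return signal[min(max(i + k, 0), len(signal) - 1)]

def wiener_3tap(original, decoded):
    """Least-squares taps h minimizing sum_i (original[i] - sum_t h[t] *
    decoded[i + t - 1])^2, via the normal equations R h = p."""
    n = len(original)
    r = [[sum(_tap(decoded, i, a - 1) * _tap(decoded, i, b - 1) for i in range(n))
          for b in range(3)] for a in range(3)]
    p = [sum(original[i] * _tap(decoded, i, a - 1) for i in range(n))
         for a in range(3)]
    return solve3(r, p)

def filter_3tap(decoded, h):
    return [sum(h[t] * _tap(decoded, i, t - 1) for t in range(3))
            for i in range(len(decoded))]

def sse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

orig = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0, 5.0, 3.0]
blurred = [(_tap(orig, i, -1) + orig[i] + _tap(orig, i, 1)) / 3
           for i in range(len(orig))]      # stand-in for the local decode
h = wiener_3tap(orig, blurred)             # filter that undoes some of the blur
h_identity = wiener_3tap(orig, orig)       # degenerate case: expect ~[0, 1, 0]
```

Because the identity filter [0, 1, 0] is among the candidates, the least-squares solution can never increase the residual against the original.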
- FIG. 2 is a block diagram showing a configuration example of the lossless encoding unit 106. As shown in FIG.
- the lossless encoding unit 106 is configured to include a variable length coding (VLC) encoding unit 131, an encoding control unit 132, a setting unit 133, and a context-adaptive binary arithmetic coding (CABAC) encoding unit 134.
- VLC variable length coding
- CABAC context-adaptive binary arithmetic coding
- among the units constituting the lossless encoding unit 106, the VLC encoding unit 131, the encoding control unit 132, and the setting unit 133 are realized as firmware 141 executed by a CPU (not shown) or the like.
- on the other hand, the CABAC encoding unit 134, which performs relatively heavy processing, is realized as hardware 142 by logic circuits and the like.
- the transform coefficients from the quantization unit 105 are supplied to the VLC encoding unit 131.
- Parameters for controlling coding processing are supplied to the VLC coding unit 131 from the intra prediction unit 114, the motion prediction / compensation unit 115, the adaptive offset unit 121, the adaptive loop filter 122, and the like.
- the intra prediction unit 114 supplies a parameter related to intra prediction such as information indicating an intra prediction mode to the VLC coding unit 131.
- for example, the motion prediction / compensation unit 115 supplies parameters related to motion prediction, such as information indicating an inter prediction mode, motion vector information, reference frame information, and flag information, to the VLC coding unit 131.
- the adaptive offset unit 121 supplies parameters related to the adaptive offset filter such as a quad-tree structure and an offset value to the VLC encoding unit 131.
- the adaptive loop filter 122 supplies parameters related to the adaptive loop filter, such as adaptive loop filter coefficients, to the VLC encoding unit 131.
- although illustration is omitted, quantization parameters and the like from the quantization unit 105 are also supplied.
- the parameters relating to the adaptive offset filter and the parameters relating to the adaptive loop filter are both filter parameters. Also, these parameters apply to the entire screen, that is, they are parameters that control the encoding process in slice or picture units. The amount of data to be sent to the decoding side is smaller when such parameters are subjected to arithmetic coding.
- among the supplied parameters, the VLC encoding unit 131 performs variable-length or fixed-length encoding processing on the parameters other than those for controlling encoding processing in units of slices or pictures.
- the VLC encoding unit 131 supplies the encoded data to the setting unit 133.
- the VLC encoding unit 131 supplies the quantized transform coefficient from the quantization unit 105 to the CABAC encoding unit 134. Among the supplied parameters, the VLC encoding unit 131 supplies a parameter for controlling encoding processing in units of slices or pictures to the CABAC encoding unit 134, and notifies the encoding control unit 132 to that effect. When the parameters necessary for the initialization of the CABAC encoding unit 134 are ready, the VLC encoding unit 131 supplies those parameters to the CABAC encoding unit 134, and notifies the encoding control unit 132 to that effect.
- in response to a notification from the VLC encoding unit 131, the encoding control unit 132 causes the CABAC encoding unit 134 to start the initialization process and the encoding process. In response to a notification from the CABAC encoding unit 134, the encoding control unit 132 causes the VLC encoding unit 131 to start encoding processing or causes the setting unit 133 to start setting.
- the setting unit 133 arranges the parameters encoded by the VLC encoding unit 131 or the CABAC encoding unit 134 and the data encoded by the CABAC encoding unit 134 in a predetermined order to generate a slice header and slice data. The setting unit 133 supplies the generated slice header and slice data to the accumulation buffer 107.
- under the control of the coding control unit 132, the CABAC encoding unit 134 performs initialization using the parameters necessary for initialization from the VLC encoding unit 131, and performs arithmetic coding on the quantized transform coefficients and each parameter supplied from the VLC encoding unit 131. The CABAC encoding unit 134 supplies the arithmetically encoded data (transform coefficients) and parameters to the setting unit 133.
- FIG. 3 is a diagram showing a configuration example of a slice header and a part of slice data in a conventional stream.
- slice_type indicating the type of slice is disposed near the top of the slice header.
- the slice_qp_delta is disposed slightly below the center of the slice header.
- slice_type and slice_qp_delta are parameters necessary for CABAC initialization (hereinafter also referred to as initialization parameters as appropriate), and are subjected to variable-length or fixed-length encoding or decoding.
- after that, sao_param (), which is a parameter of the adaptive offset filter, is arranged, followed by alf_cu_control_param (), which is a parameter of the adaptive loop filter. Slice data follows the end of the slice header, after alf_cu_control_param ().
- the parameter of the adaptive offset filter, sao_param (), and the parameter of the adaptive loop filter, alf_cu_control_param (), are parameters that control on or off over the entire screen. As described above, since parameters that control encoding processing in units of slices or pictures are large in data amount, such parameters are subjected to arithmetic encoding and arithmetic decoding.
- CABAC initialization must be performed before these parameters in the slice header.
- as shown in FIG. 3, slice_type is placed near the top of the slice header, but slice_qp_delta is placed immediately before sao_param (), which is a parameter of the adaptive offset filter. Therefore, at least on the decoding side, unless slice_qp_delta has been decoded, it is difficult to perform CABAC initialization, and in the conventional technique, CABAC initialization is performed immediately before sao_param ().
- therefore, the setting unit 133 arranges slice_qp_delta, which is necessary for CABAC initialization, after slice_type located near the top of the slice header.
- in addition, the setting unit 133 arranges sao_param (), which is a parameter of the adaptive offset filter, at the end of the slice header, where the parameter alf_cu_control_param () of the adaptive loop filter is arranged.
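The effect of this rearrangement can be modelled with a toy list of header fields (field names taken from the figures; the "..." entries stand for the remaining header fields, and only the relative order matters). In the conventional order of FIG. 3, CABAC initialization cannot start until slice_qp_delta is parsed in the middle of the header, while in the proposed order both initialization parameters are available immediately:

```python
# Conventional slice header order (cf. FIG. 3): slice_qp_delta sits in the
# middle, just before the arithmetic-coded filter parameters.
CONVENTIONAL = ["slice_type", "...", "slice_qp_delta",
                "sao_param", "alf_cu_control_param", "slice_data"]

# Proposed order (cf. FIG. 4): both initialization parameters at the top,
# the arithmetic coding parameters together at the end of the header.
PROPOSED = ["slice_type", "slice_qp_delta", "...",
            "sao_param", "alf_cu_control_param", "slice_data"]

INIT_PARAMS = {"slice_type", "slice_qp_delta"}
ARITH_PARAMS = {"sao_param", "alf_cu_control_param"}

def init_ready_index(order):
    """Index of the last CABAC initialization parameter: CABAC cannot be
    initialized before every one of them has been parsed."""
    return max(order.index(p) for p in INIT_PARAMS)

def first_arith_index(order):
    """Index of the first field that needs the arithmetic coding engine."""
    return min(order.index(p) for p in ARITH_PARAMS)
```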
- FIG. 4 is a diagram showing an example configuration of a slice header and part of slice data in the stream of the present technology.
- in FIG. 4, the hatched portions represent the differences from the conventional technique described above with reference to FIG. 3.
- that is, slice_type, and slice_qp_delta, which was at the lower center of the slice header in the example of FIG. 3, are arranged side by side (collectively) near the top of the slice header.
- as a result, CABAC can be initialized at any point after the top of the slice header, where slice_type and slice_qp_delta, which are CABAC initialization parameters, are arranged together. Consequently, arithmetic coding and decoding of parameters are not delayed by initialization.
- further, alf_cu_control_param (), which is a parameter of the adaptive loop filter, and sao_param (), which is a parameter of the adaptive offset filter and was at the center of the slice header in the example of FIG. 3, that is, the parameters requiring arithmetic coding and decoding, are collectively arranged at the end of the slice header, immediately before the slice data. As a result, the processing of arithmetic coding and arithmetic decoding can be performed collectively and efficiently from the end of the slice header through the slice data.
- a parameter requiring arithmetic coding that is, a coding parameter to be arithmetically coded (decoded) is referred to as an arithmetic coding parameter.
- parameters for variable-length coding (decoding) or fixed-length coding (decoding) are simply referred to as coding parameters.
- in FIG. 4, an example in which the arithmetic coding parameters that require arithmetic coding and arithmetic decoding are arranged at the end of the slice header is shown, but, as shown in FIG. 5, they may be arranged at the top of the slice data.
- FIG. 5 is a diagram showing another configuration example of a slice header and a part of slice data in the stream of the present technology.
- in FIG. 5, the hatched portions represent the differences from the conventional technique described above with reference to FIG. 3.
- slice_type is disposed near the top of the slice header. Further, in the central part of the slice header, cabac_init_idc and slice_qp_delta, which was at the lower center of the slice header in the example of FIG. 3, are arranged side by side.
- cabac_init_idc was used in the H.264/AVC system to set the CABAC table, but at present its use has not been decided in the HEVC system.
- Therefore, slice_qp_delta may be placed immediately after cabac_init_idc, as shown in FIG. 5, so that it is grouped with cabac_init_idc.
- Conversely, cabac_init_idc may be placed immediately after slice_type in FIG. 4, together with slice_qp_delta.
- CABAC can be initialized at any point after slice_type, slice_qp_delta, and cabac_init_idc are aligned. As a result, arithmetic coding and arithmetic decoding of the arithmetic coding parameters are not delayed by initialization.
- alf_cu_control_param () which is a parameter of the adaptive loop filter and sao_param () which is a parameter of the adaptive offset filter are arranged side by side not at the end of the slice header but at the top of the slice data.
- Both alf_cu_control_param () and sao_param () are arithmetic coding parameters.
- Accordingly, the arithmetic coding parameters requiring arithmetic coding and decoding are collectively arranged at the top of the slice data, immediately before the data itself.
- the processing of arithmetic decoding can be performed collectively and efficiently.
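The two arrangements described above can be sketched as follows. This is an illustrative model only, not the actual HEVC syntax tables; the placeholder entries (such as other_vlc_params and coded_cu_data) are assumptions introduced for the example.

```python
# Illustrative model of the parameter orderings of FIG. 4 and FIG. 5.
# Entry names other than those in the text are hypothetical placeholders.

INIT_PARAMS = {"slice_type", "slice_qp_delta", "cabac_init_idc"}

# FIG. 4: initialization parameters grouped at the top of the slice header,
# arithmetic coding parameters grouped at its end.
fig4_header = ["slice_type", "slice_qp_delta", "other_vlc_params",
               "alf_cu_control_param", "sao_param"]

# FIG. 5: cabac_init_idc and slice_qp_delta grouped near slice_type, and the
# arithmetic coding parameters moved to the top of the slice data instead.
fig5_header = ["slice_type", "cabac_init_idc", "slice_qp_delta",
               "other_vlc_params"]
fig5_slice_data = ["alf_cu_control_param", "sao_param", "coded_cu_data"]

def init_ready_index(stream):
    """Index at which all CABAC initialization parameters present in the
    stream have been seen, i.e. where CABAC initialization can start."""
    seen = set()
    needed = INIT_PARAMS & set(stream)
    for i, name in enumerate(stream):
        seen.add(name)
        if needed <= seen:
            return i
    return None

# In both layouts the initialization parameters are aligned near the top,
# so CABAC can be initialized before any arithmetic decoding is required.
print(init_ready_index(fig4_header))  # -> 1
print(init_ready_index(fig5_header))  # -> 2
```

In either layout the initialization point precedes every arithmetic coding parameter, which is the property the text relies on.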
- FIG. 6 is a diagram illustrating an exemplary configuration of part of syntax in a stream of the present technology.
- In the syntax of FIG. 6, the coding parameters to be variable-length or fixed-length coded are collectively arranged. Furthermore, in the syntax, the arithmetic coding parameters, including sao_param(), which is a parameter of the adaptive offset filter, and alf_cu_control_param(), which is a parameter of the adaptive loop filter, are collectively arranged after the coding parameters.
- Arithmetic coding parameters that require arithmetic coding and decoding are not limited to sao_param (), which is a parameter of the adaptive offset filter, and alf_cu_control_param (), which is a parameter of the adaptive loop filter.
- For example, qmatrix_param(), which is a parameter of the quantization matrix, is also an arithmetic coding parameter requiring arithmetic coding and decoding, and it may be arranged together with sao_param(), which is a parameter of the adaptive offset filter, and alf_cu_control_param(), which is a parameter of the adaptive loop filter.
- each arithmetic coding parameter is not limited to the examples of FIGS. 6 and 7.
- Next, the parameters of the adaptive loop filter, the parameters of the adaptive offset filter, and the parameters of the quantization matrix, which are arithmetic coding parameters requiring arithmetic coding and decoding, will be described in detail.
- the adaptive loop filter is an encoding process used in the HEVC method, and is not adopted in the AVC method.
- The parameters of the adaptive loop filter sent to the decoding side are the coefficients of the two-dimensional filter and the On/Off control signal in CU units; they are updated for each picture, and their size is large.
- The purpose of the adaptive loop filter is to transmit a two-dimensional filter such that the coded image comes closest to the original image, and to perform filtering so as to improve the coding efficiency.
- Since the parameter set is very large, the parameters are sent in I pictures, which have a large code amount.
- On the other hand, a gain may not be obtained in proportion to the overhead of transmitting the parameters, in which case the parameters are not sent.
- In this way, the parameters of the adaptive loop filter are switched according to the code amount of each picture, and their values change from picture to picture.
- Since the parameters of the adaptive loop filter themselves depend on the picture content, the same values are rarely used repeatedly.
- the adaptive offset filter is an encoding process used in the HEVC method, and is not adopted in the AVC method.
- The parameters of the adaptive offset filter sent to the decoding side are the offset type and the On/Off control signal for each region obtained by dividing the picture in certain units; they are updated for each picture, and their size is relatively large.
- The adaptive offset filter adds an offset to certain pixel values.
- The parameters of the adaptive offset filter can be transmitted relatively easily on a picture-by-picture basis because their size is smaller than that of the adaptive loop filter.
- The parameters of the adaptive offset filter need to be arithmetically coded. Since the parameters may be similar from picture to picture, it is also possible to reuse the previous parameters.
- The quantization matrix has been adopted since the AVC method, and its size increases as the block size increases.
- The parameters of the quantization matrix sent to the decoding side are quantization matrices in units of pictures; they are updated for each picture, for each picture type, or for each GOP, and their size is large.
- The quantization matrix does not need to be changed if the design does not change. Also, the parameters of the quantization matrix are large in size, but there is no option not to send them.
- For the parameters of the quantization matrix, different values may be used for I pictures, B pictures, and P pictures.
- The parameters of the quantization matrix thus have characteristics different from those of parameters that change for each picture, such as the parameters of the adaptive loop filter and the adaptive offset filter.
- Although the parameters of the adaptive loop filter, the parameters of the adaptive offset filter, and the parameters of the quantization matrix each have their own characteristics, they all need to be arithmetically coded because they are large in size and are sent for each picture or the like.
- FIG. 8 is a timing chart in the case of processing the conventional stream described above with reference to FIG. 3.
- the decoding process is divided into firmware and hardware processes as in the coding side.
- the left represents a variable-length fixed-length decoding process of FW (firmware), and the right represents an arithmetic decoding process of HW (hardware).
- First, the firmware performs variable-length (or fixed-length) decoding of slice_type, which is a coding parameter and an initialization parameter, and thereafter performs variable-length (or fixed-length) decoding of the other coding parameters. Then, when the firmware finally decodes slice_qp_delta as a coding parameter, slice_type and slice_qp_delta, the initialization parameters necessary for CABAC initialization, are at last aligned, and the CABAC hardware is initialized.
- Since sao_param(), which is an arithmetic coding parameter and a parameter of the adaptive offset filter, is disposed in the middle of the slice header, the firmware then causes the hardware to perform arithmetic decoding of sao_param().
- The hardware returns the decoded sao_param() to the firmware when the arithmetic decoding is finished.
- the firmware performs variable-length (or fixed-length) decoding of the other coding parameters until immediately before alf_cu_control_param () at the end of the slice header. Since alf_cu_control_param () which is an arithmetic coding parameter and a parameter of the adaptive loop filter is disposed at the end of the slice header, the firmware causes the hardware to perform arithmetic decoding of alf_cu_control_param (). The hardware returns the decoded alf_cu_control_param () to the firmware when the arithmetic decoding is finished.
- the firmware causes the hardware to perform arithmetic decoding of slice data.
- In this way, in the conventional stream, the exchanges between firmware and hardware are scattered, and in addition, a waiting time due to initialization occurs.
- Each exchange between firmware and hardware requires coordination such as data handover, so the encoding and decoding processes are not efficient.
- FIG. 9 is a diagram showing a timing chart in the case of processing the stream of the present technology described above with reference to FIG.
- an example of decoding processing is shown.
- the decoding process is divided into firmware and hardware processes on the decoding side as well.
- the left represents a variable-length fixed-length decoding process of FW (firmware), and the right represents an arithmetic decoding process of HW (hardware).
- In the case of FIG. 9, the firmware first performs variable-length (or fixed-length) decoding of slice_type, which is an initialization parameter, and then performs variable-length (or fixed-length) decoding of slice_qp_delta, which is also an initialization parameter.
- At this point, slice_type and slice_qp_delta, which are the initialization parameters necessary for CABAC initialization, are aligned, so the CABAC hardware is initialized.
- After the CABAC hardware initialization, the firmware performs variable-length (or fixed-length) decoding of the other coding parameters until just before alf_cu_control_param() at the end of the slice header.
- At the end of the slice header, alf_cu_control_param(), which is an arithmetic coding parameter and a parameter of the adaptive loop filter, and sao_param(), which is an arithmetic coding parameter and a parameter of the adaptive offset filter, are arranged.
- the firmware causes the hardware to perform arithmetic decoding of alf_cu_control_param ().
- the hardware returns the decoded alf_cu_control_param () to the firmware when the arithmetic decoding is finished.
- the firmware causes the hardware to perform arithmetic decoding of sao_param ().
- The hardware returns the decoded sao_param() to the firmware when the arithmetic decoding is finished.
- the firmware causes the hardware to perform arithmetic decoding of slice data.
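The batched firmware/hardware exchange of FIG. 9 can be sketched as a simple dispatch loop. This is a hedged illustration, not the actual apparatus: the "FW" and "HW" tags stand in for variable-length decoding in firmware and arithmetic (CABAC) decoding in hardware, and the stream entries are placeholder names.

```python
# Sketch of the batched firmware/hardware exchange: the firmware handles
# VLC/fixed-length parameters, and a contiguous run of arithmetic coding
# parameters (plus the slice data) is handed to the hardware in one batch.

def decode_slice(stream, arith_params):
    log = []     # records each FW/HW hand-off
    batch = []   # pending contiguous run of arithmetically coded items
    for name in stream:
        if name in arith_params or name == "slice_data":
            batch.append(name)
        else:
            if batch:                        # flush a pending HW batch
                log.append(("HW", tuple(batch)))
                batch = []
            log.append(("FW", name))         # VLC/fixed-length decode
    if batch:
        log.append(("HW", tuple(batch)))
    return log

stream = ["slice_type", "slice_qp_delta", "other_params",
          "alf_cu_control_param", "sao_param", "slice_data"]
log = decode_slice(stream, {"alf_cu_control_param", "sao_param"})
# Because the arithmetic coding parameters are grouped next to the slice
# data, a single hand-off to the hardware suffices.
print(sum(1 for side, _ in log if side == "HW"))  # -> 1
```

With the conventional interleaved ordering of FIG. 3, the same loop would need multiple hardware hand-offs, which is the inefficiency the text describes.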
- FIG. 10 is a diagram showing a timing chart when processing the stream of the present technology described above with reference to FIG.
- the decoding process is divided into firmware and hardware processes on the decoding side as well.
- the left represents a variable-length fixed-length decoding process of FW (firmware), and the right represents an arithmetic decoding process of HW (hardware).
- In the case of FIG. 10, alf_cu_control_param(), which is an arithmetic coding parameter and a parameter of the adaptive loop filter, and sao_param(), which is an arithmetic coding parameter and a parameter of the adaptive offset filter, are arranged at the top of the slice data.
- the firmware causes the hardware to perform arithmetic decoding of alf_cu_control_param ().
- the hardware returns the decoded alf_cu_control_param () to the firmware when the arithmetic decoding is finished.
- the firmware also causes the hardware to perform arithmetic decoding of sao_param ().
- The hardware returns the decoded sao_param() to the firmware when the arithmetic decoding is finished.
- the hardware performs decoding of arithmetic coding parameters collectively.
- FIG. 11 is a diagram showing a timing chart in the case of processing the stream of the present technology described above with reference to FIG. 6. The same applies to the syntax shown in FIG. 7. That is, in the case of the example of FIG. 11, since the coding parameters to be variable-length or fixed-length coded are collectively arranged in the syntax of FIG. 7, the firmware collectively performs variable-length (or fixed-length) decoding of those coding parameters.
- Next, alf_cu_control_param(), which is a parameter of the adaptive loop filter, sao_param(), which is a parameter of the adaptive offset filter, and qmatrix_param(), which is a parameter of the quantization matrix, are arranged collectively as arithmetic coding parameters.
- the firmware causes the hardware to perform arithmetic decoding of alf_cu_control_param ().
- the hardware returns the decoded alf_cu_control_param () to the firmware when the arithmetic decoding is finished.
- the firmware also causes the hardware to perform arithmetic decoding of sao_param ().
- The hardware returns the decoded sao_param() to the firmware when the arithmetic decoding is finished.
- the firmware causes the hardware to perform arithmetic decoding of the quantization matrix parameter qmatrix_param ().
- the hardware returns the decoded qmatrix_param () to the firmware when the arithmetic decoding is finished.
- the hardware performs decoding of arithmetic coding parameters collectively.
- the encoding parameters arranged collectively in the syntax are collectively subjected to variable-length (fixed-length) decoding processing.
- arithmetic coding parameters arranged collectively in syntax are collectively subjected to arithmetic decoding processing. Therefore, since the exchange of firmware and hardware is performed collectively, the efficiency of the decoding process can be improved and the process can be speeded up.
- Arranging the arithmetic coding parameters collectively means placing the arithmetic coding parameters in the syntax so that the processing exchanges between the firmware (FW) processing and the hardware (HW) processing are performed collectively.
- That is, as shown in FIGS. 6 and 7, no coding parameter to be variable-length (or fixed-length) coded or decoded is arranged between arithmetic coding parameters, such as alf_cu_control_param() and sao_param(), which are arithmetically coded and decoded.
- Similarly, arranging the coding parameters to be variable-length (or fixed-length) coded or decoded collectively means placing those coding parameters in the syntax so that the processing exchanges between the firmware (FW) processing and the hardware (HW) processing are combined.
- That is, arranging the coding parameters collectively means that no arithmetic coding parameter to be arithmetically coded or decoded is placed between coding parameters that are variable-length (or fixed-length) coded or decoded.
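The grouping rule above can be checked mechanically: each kind of parameter must form one contiguous run. The following is a small sketch of such a check; the parameter lists are illustrative placeholders.

```python
# Sketch of the grouping rule: no VLC/fixed-length coding parameter may sit
# between two arithmetic coding parameters, and vice versa, i.e. each kind
# of parameter forms a single contiguous run in the syntax.

def is_grouped(stream, arith_params):
    flags = [name in arith_params for name in stream]
    # Count transitions between the two kinds of parameters; a single
    # contiguous run of arithmetic parameters yields at most 2 transitions.
    transitions = sum(1 for a, b in zip(flags, flags[1:]) if a != b)
    return transitions <= 2

arith = {"alf_cu_control_param", "sao_param", "qmatrix_param"}
ok = ["p1", "p2", "alf_cu_control_param", "sao_param", "qmatrix_param"]
bad = ["p1", "alf_cu_control_param", "p2", "sao_param"]
print(is_grouped(ok, arith), is_grouped(bad, arith))  # -> True False
```

The "bad" ordering corresponds to the conventional stream of FIG. 3, where a VLC parameter interrupts the arithmetic coding parameters and forces an extra firmware/hardware exchange.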
- In step S101, the A/D conversion unit 101 A/D converts the input image.
- In step S102, the screen rearrangement buffer 102 stores the A/D converted image and rearranges the pictures from display order into coding order.
- The decoded image to be referred to is read from the frame memory 112 and supplied to the intra prediction unit 114 via the selection unit 113.
- the intra prediction unit 114 performs intra prediction on the pixels of the block to be processed in all candidate intra prediction modes. Note that as the decoded pixels to be referenced, pixels not filtered by the deblocking filter 111, the adaptive offset unit 121, and the adaptive loop filter 122 are used.
- intra prediction is performed in all candidate intra prediction modes, and cost function values are calculated for all candidate intra prediction modes. Then, based on the calculated cost function value, the optimal intra prediction mode is selected, and the predicted image generated by intra prediction in the optimal intra prediction mode and its cost function value are supplied to the predicted image selection unit 116.
- When the image to be processed supplied from the screen rearrangement buffer 102 is an image to be inter processed, the image to be referred to is read from the frame memory 112 and supplied to the motion prediction/compensation unit 115 via the selection unit 113.
- the motion prediction / compensation unit 115 performs motion prediction / compensation processing based on these images.
- By this processing, motion prediction is performed in all the candidate inter prediction modes, cost function values are calculated for all the candidate inter prediction modes, and the optimal inter prediction mode is determined based on the calculated cost function values. Then, the predicted image generated in the optimal inter prediction mode and its cost function value are supplied to the predicted image selection unit 116.
- In step S105, the predicted image selection unit 116 determines the optimal prediction mode from the optimal intra prediction mode and the optimal inter prediction mode, based on the cost function values output from the intra prediction unit 114 and the motion prediction/compensation unit 115. The predicted image selection unit 116 then selects the predicted image of the determined optimal prediction mode and supplies it to the calculation units 103 and 110. This predicted image is used in the calculations of steps S106 and S111 described later.
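The mode decision of step S105 reduces to picking the candidate with the smallest cost function value. The following minimal sketch illustrates the idea; the mode names and cost values are made-up illustrative numbers, not values from the apparatus.

```python
# Minimal sketch of the mode decision: the smallest cost function value
# (e.g. a rate-distortion cost) wins, first within the intra and inter
# candidates separately, then between the two optima.

def best_mode(costs):
    """Return the mode with the smallest cost function value."""
    return min(costs, key=costs.get)

intra_costs = {"intra_dc": 130.0, "intra_planar": 120.0}
inter_costs = {"inter_16x16": 95.5, "inter_8x8": 101.0}

optimal_intra = best_mode(intra_costs)  # reported by the intra prediction unit
optimal_inter = best_mode(inter_costs)  # reported by the motion prediction unit

# Step S105: the predicted image selection unit compares the two optima.
if intra_costs[optimal_intra] <= inter_costs[optimal_inter]:
    optimal_mode = optimal_intra
else:
    optimal_mode = optimal_inter
print(optimal_mode)  # -> inter_16x16
```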
- the selection information of the predicted image is supplied to the intra prediction unit 114 or the motion prediction / compensation unit 115.
- the intra prediction unit 114 supplies the information indicating the optimal intra prediction mode (that is, the parameter related to the intra prediction) to the lossless encoding unit 106.
- The motion prediction/compensation unit 115 outputs information indicating the optimal inter prediction mode and information according to the optimal inter prediction mode (that is, parameters related to motion prediction) to the lossless encoding unit 106.
- Examples of the information according to the optimal inter prediction mode include motion vector information and reference frame information.
- In step S106, the computing unit 103 computes the difference between the image rearranged in step S102 and the predicted image selected in step S105.
- the prediction image is supplied from the motion prediction / compensation unit 115 when performing inter prediction, and from the intra prediction unit 114 when performing intra prediction, to the calculation unit 103 via the prediction image selection unit 116.
- the difference data has a smaller amount of data than the original image data. Therefore, the amount of data can be compressed as compared to the case of encoding the image as it is.
- In step S107, the orthogonal transformation unit 104 orthogonally transforms the difference information supplied from the calculation unit 103. Specifically, orthogonal transformation such as discrete cosine transformation or Karhunen-Loeve transformation is performed, and transform coefficients are output.
- In step S108, the quantization unit 105 quantizes the transform coefficients. During this quantization, the rate is controlled as described in the process of step S118 below.
- In step S109, the inverse quantization unit 108 inversely quantizes the transform coefficients quantized by the quantization unit 105 with a characteristic corresponding to the characteristic of the quantization unit 105.
- In step S110, the inverse orthogonal transform unit 109 performs inverse orthogonal transformation on the transform coefficients inversely quantized by the inverse quantization unit 108 with a characteristic corresponding to the characteristic of the orthogonal transform unit 104.
- In step S111, the operation unit 110 adds the predicted image input via the predicted image selection unit 116 to the locally decoded difference information, and generates a locally decoded image (an image corresponding to the input to the calculation unit 103).
- In step S112, the deblocking filter 111 performs deblocking filter processing on the image output from the computing unit 110, thereby removing block distortion.
- the decoded image from the deblocking filter 111 is output to the adaptive offset unit 121.
- the adaptive offset unit 121 performs adaptive offset processing. That is, the adaptive offset unit 121 determines a quad-tree structure based on the decoded image from the deblocking filter 111, and calculates an offset value of the quad-tree divided region. The adaptive offset unit 121 performs offset filter processing on the quad-tree divided region, and supplies the pixel value after offset to the adaptive loop filter 122.
- the adaptive offset unit 121 supplies the quad-tree structure and the offset value to the lossless encoding unit 106 as parameters of the adaptive offset filter.
- In step S114, the adaptive loop filter 122 performs adaptive loop filter processing on the pixel values after the offset processing, using the adaptive filter coefficients that it has calculated.
- the pixel values after the adaptive filter processing are output to the frame memory 112.
- the adaptive loop filter 122 supplies the calculated adaptive filter coefficient to the lossless encoding unit 106 as a parameter of the adaptive loop filter.
- step S115 the frame memory 112 stores the filtered image.
- the image not filtered by the deblocking filter 111, the adaptive offset unit 121, and the adaptive loop filter 122 is also supplied from the arithmetic unit 110 and stored.
- the transform coefficient quantized in step S108 described above is also supplied to the lossless encoding unit 106.
- The lossless encoding unit 106 encodes the quantized transform coefficients output from the quantization unit 105 and each of the supplied parameters. That is, the difference image is losslessly encoded by variable-length coding, arithmetic coding, or the like, and compressed.
- At this time, the coding parameters other than the arithmetic coding parameters, which control the coding process in units of slices or pictures, are variable-length (or fixed-length) coded and added to the header in a predetermined order.
- On the other hand, the arithmetic coding parameters, which control the coding process in units of slices or pictures, are arithmetically coded and added to the end of the slice header or the top of the slice data.
- In step S117, the accumulation buffer 107 accumulates the encoded difference image (that is, the encoded stream) as a compressed image.
- The compressed image accumulated in the accumulation buffer 107 is read out as appropriate and transmitted to the decoding side via the transmission path.
- In step S118, the rate control unit 117 controls the rate of the quantization operation of the quantization unit 105, based on the compressed image accumulated in the accumulation buffer 107, so that overflow or underflow does not occur.
- When the process of step S118 ends, the encoding process ends.
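The idea behind step S118 can be illustrated with a deliberately simplified feedback rule: coarsen quantization when the accumulation buffer fills (to avoid overflow) and refine it when the buffer drains (to avoid underflow). This is not the actual rate control of the apparatus; the thresholds and step sizes are arbitrary illustrative assumptions, with the QP range 0 to 51 taken from common video coding practice.

```python
# Simplistic illustration of buffer-driven rate control: raise the
# quantization parameter (QP) when the buffer is nearly full and lower it
# when the buffer is nearly empty. All constants are illustrative only.

def adjust_qp(qp, fullness, high=0.8, low=0.2):
    """fullness: accumulation-buffer occupancy in [0.0, 1.0]."""
    if fullness > high:
        return min(51, qp + 1)   # coarser quantization -> fewer bits
    if fullness < low:
        return max(0, qp - 1)    # finer quantization -> more bits
    return qp

print(adjust_qp(30, 0.9))  # -> 31
print(adjust_qp(30, 0.1))  # -> 29
print(adjust_qp(30, 0.5))  # -> 30
```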
- the intra prediction unit 114 supplies a parameter related to intra prediction such as information indicating an intra prediction mode to the VLC encoding unit 131.
- the motion prediction / compensation unit 115 supplies, to the VLC coding unit 131, parameters relating to motion prediction such as information indicating an inter prediction mode, motion vector information, reference frame information, and flag information.
- the adaptive offset unit 121 supplies parameters related to the adaptive offset filter such as a quad-tree structure and an offset value to the VLC encoding unit 131.
- the adaptive loop filter 122 supplies parameters related to the adaptive loop filter, such as adaptive loop filter coefficients, to the VLC encoding unit 131.
- slice_type and slice_qp_delta are also supplied, and quantization parameters and the like from the quantization unit 105 are also supplied.
- the quantization unit 105 also supplies the quantized transform coefficient to the VLC encoding unit 131.
- In step S131, the VLC encoding unit 131 waits until slice_type and slice_qp_delta, which are the initialization parameters necessary for CABAC initialization, are aligned among the supplied parameters. If it is determined in step S131 that slice_type and slice_qp_delta are aligned, the VLC encoding unit 131 supplies slice_type and slice_qp_delta to the CABAC encoding unit 134 and notifies the encoding control unit 132 accordingly.
- the encoding control unit 132 causes the CABAC encoding unit 134 to perform initialization in step S132.
- When the initialization is finished, the CABAC encoding unit 134 notifies the encoding control unit 132. After this, CABAC encoding can be performed at any timing.
- In step S133, the VLC encoding unit 131 acquires a parameter to be encoded from the supplied parameters.
- In step S134, the VLC encoding unit 131 determines whether the acquired parameter is sao_param(), which is a parameter of the adaptive offset filter, or alf_cu_control_param(), which is a parameter of the adaptive loop filter. That is, in step S134, it is determined whether the acquired parameter is an arithmetic coding parameter.
- If it is determined in step S134 that the acquired parameter is neither sao_param(), the parameter of the adaptive offset filter, nor alf_cu_control_param(), the parameter of the adaptive loop filter (that is, not an arithmetic coding parameter), the process proceeds to step S135.
- In step S135, the VLC encoding unit 131 performs VLC encoding (variable-length or fixed-length encoding) on the parameter acquired in step S133.
- The VLC encoding unit 131 supplies the encoded coding parameter to the setting unit 133.
- If it is determined in step S134 that the acquired parameter is sao_param(), the parameter of the adaptive offset filter, or alf_cu_control_param(), the parameter of the adaptive loop filter (that is, an arithmetic coding parameter), the process proceeds to step S136.
- the VLC encoding unit 131 supplies the parameter acquired in step S133 to the CABAC encoding unit 134, and notifies the encoding control unit 132 to that effect.
- In step S136, the CABAC encoding unit 134 performs CABAC encoding (arithmetic encoding) on the parameter acquired in step S133, under the control of the encoding control unit 132.
- the parameter to be arithmetically coded here is an arithmetic coding parameter, which is the parameter sao_param () of the adaptive offset filter or the parameter alf_cu_control_param () of the adaptive loop filter.
- the CABAC encoding unit 134 supplies the encoded arithmetic encoding parameter to the setting unit 133, and notifies the encoding control unit 132 of the end of the encoding.
- the coding control unit 132 notifies the VLC coding unit 131 of the end.
- In step S137, the VLC encoding unit 131 determines whether encoding of all parameters has been completed. If it is determined that the parameter encoding has not yet been completed, the process returns to step S133, and the subsequent processes are repeated.
- If it is determined in step S137 that encoding of all parameters has been completed, the process proceeds to step S138.
- the VLC encoding unit 131 supplies the transform coefficient from the quantization unit 105 to the CABAC encoding unit 134, and notifies the encoding control unit 132 to that effect.
- In step S138, under the control of the encoding control unit 132, the CABAC encoding unit 134 performs CABAC encoding (arithmetic encoding) on the data (transform coefficients) and supplies the encoded data to the setting unit 133.
- the coding control unit 132 is notified of the end of the coding.
- The setting unit 133 is supplied with the coding parameters encoded by the VLC encoding unit 131, the arithmetic coding parameters encoded by the CABAC encoding unit 134, and the data encoded by the CABAC encoding unit 134.
- In step S139, the setting unit 133 arranges the encoded parameters and the encoded data in the order shown in FIG. 4 or FIG. 5, and generates the slice header and the slice data.
- The setting unit 133 supplies the generated slice header and slice data to the accumulation buffer 107. After the process of step S139, the process returns to step S116 of the encoding process described above.
- As described above, the coding process can be speeded up by arranging the arithmetic coding parameters that control the coding process in units of slices or pictures at the end of the slice header or at the top of the slice data.
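The parameter-encoding flow of steps S131 through S139 can be sketched as a dispatch loop. This is a hedged illustration only: the tags "VLC" and "CABAC" stand in for the two entropy coders, no real entropy coding is performed, and the parameter list is a placeholder.

```python
# Sketch of the parameter-encoding flow (steps S131-S139): the VLC side
# handles ordinary coding parameters, the CABAC side handles the arithmetic
# coding parameters and the slice data. The "encoders" here only tag their
# input; real entropy coding is outside the scope of this sketch.

ARITH_PARAMS = {"sao_param", "alf_cu_control_param"}

def encode_parameters(params, data):
    out = []
    # S131/S132: CABAC can be initialized once slice_type and
    # slice_qp_delta are both available.
    assert {"slice_type", "slice_qp_delta"} <= set(params)
    for name in params:                    # S133-S137: per-parameter loop
        if name in ARITH_PARAMS:
            out.append(("CABAC", name))    # S136: arithmetic encoding
        else:
            out.append(("VLC", name))      # S135: VLC encoding
    out.append(("CABAC", data))            # S138: encode the slice data
    return out                             # S139: handed to the setting unit

params = ["slice_type", "slice_qp_delta", "other_param",
          "sao_param", "alf_cu_control_param"]
encoded = encode_parameters(params, "slice_data")
print(encoded)
```

Because the arithmetic coding parameters are supplied last, all "CABAC" items end up contiguous, matching the arrangement the setting unit 133 produces.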
- FIG. 14 illustrates the configuration of an embodiment of an image decoding apparatus as an image processing apparatus to which the present disclosure is applied.
- the image decoding apparatus 200 shown in FIG. 14 is a decoding apparatus corresponding to the image coding apparatus 100 of FIG.
- encoded data encoded by the image encoding device 100 is transmitted to the image decoding device 200 corresponding to the image encoding device 100 via a predetermined transmission path and decoded.
- the image decoding apparatus 200 includes an accumulation buffer 201, a lossless decoding unit 202, an inverse quantization unit 203, an inverse orthogonal transformation unit 204, an operation unit 205, a deblock filter 206, a screen rearrangement buffer 207, And a D / A converter 208.
- the image decoding apparatus 200 further includes a frame memory 209, a selection unit 210, an intra prediction unit 211, a motion prediction / compensation unit 212, and a selection unit 213.
- the image decoding device 200 includes an adaptive offset unit 221 and an adaptive loop filter 222.
- the accumulation buffer 201 accumulates the transmitted encoded data.
- the encoded data is encoded by the image encoding device 100.
- the lossless decoding unit 202 decodes the encoded data read from the accumulation buffer 201 at a predetermined timing in a method corresponding to the encoding method of the lossless encoding unit 106 in FIG. 1.
- the lossless decoding unit 202 decodes in order of the slice header and the slice data.
- Since the arithmetic coding parameters that control the decoding process in units of slices or pictures have been arithmetically coded, the lossless decoding unit 202 performs arithmetic decoding on them.
- Since the other coding parameters have been variable-length or fixed-length coded, the lossless decoding unit 202 performs variable-length or fixed-length decoding on them.
- The lossless decoding unit 202 supplies parameters such as information indicating the decoded intra prediction mode to the intra prediction unit 211, and supplies parameters such as information indicating the inter prediction mode and motion vector information to the motion prediction/compensation unit 212.
- the lossless decoding unit 202 supplies the parameter of the adaptive offset filter to the adaptive offset unit 221, and supplies the parameter of the adaptive loop filter to the adaptive loop filter 222.
- the inverse quantization unit 203 inversely quantizes the coefficient data (quantization coefficient) obtained by being decoded by the lossless decoding unit 202, using a method corresponding to the quantization method of the quantization unit 105 in FIG. That is, the inverse quantization unit 203 performs inverse quantization of the quantization coefficient by the same method as the inverse quantization unit 108 in FIG. 1 using the quantization parameter supplied from the image coding apparatus 100.
- the inverse quantization unit 203 supplies the inversely quantized coefficient data, that is, the orthogonal transformation coefficient to the inverse orthogonal transformation unit 204.
- the inverse orthogonal transformation unit 204 performs inverse orthogonal transformation on the orthogonal transformation coefficient by a method corresponding to the orthogonal transformation method of the orthogonal transformation unit 104 in FIG. 1, and generates residual data before orthogonal transformation in the image coding apparatus 100. Obtain the corresponding decoded residual data.
- the decoded residual data obtained by the inverse orthogonal transform is supplied to the arithmetic unit 205. Further, the prediction image is supplied to the calculation unit 205 from the intra prediction unit 211 or the motion prediction / compensation unit 212 via the selection unit 213.
- Arithmetic unit 205 adds the decoded residual data and the predicted image to obtain decoded image data corresponding to the image data before the predicted image is subtracted by arithmetic unit 103 of image coding apparatus 100.
- the operation unit 205 supplies the decoded image data to the deblocking filter 206.
- the deblocking filter 206 is configured basically in the same manner as the deblocking filter 111 of the image coding device 100.
- the deblocking filter 206 removes block distortion of the decoded image by appropriately performing deblocking filter processing.
- the deblocking filter 206 supplies the filtering result to the adaptive offset unit 221.
- The screen rearrangement buffer 207 rearranges the images. That is, the frames rearranged into coding order by the screen rearrangement buffer 102 in FIG. 1 are rearranged back into the original display order.
- the D / A conversion unit 208 D / A converts the image supplied from the screen rearrangement buffer 207, and outputs the image to a display (not shown) for display.
- the output of the deblocking filter 206 is further supplied to a frame memory 209.
- the frame memory 209, the selection unit 210, the intra prediction unit 211, the motion prediction / compensation unit 212, and the selection unit 213 correspond to the frame memory 112, the selection unit 113, the intra prediction unit 114, the motion prediction / compensation unit 115, and the predicted image selection unit 116 of the image coding apparatus 100, respectively.
- the selection unit 210 reads out the image to be inter-processed and the image to be referred to from the frame memory 209, and supplies the image to the motion prediction / compensation unit 212. In addition, the selection unit 210 reads an image used for intra prediction from the frame memory 209 and supplies the image to the intra prediction unit 211.
- the intra prediction unit 211 generates a prediction image from the reference image acquired from the frame memory 209 based on the intra prediction mode information supplied from the lossless decoding unit 202, and supplies the generated prediction image to the selection unit 213.
- the motion prediction / compensation unit 212 is supplied from the lossless decoding unit 202 with information (prediction mode information, motion vector information, reference frame information, flags, various parameters, and the like) obtained by decoding header information.
- the motion prediction / compensation unit 212 generates a prediction image from the reference image acquired from the frame memory 209 based on the information supplied from the lossless decoding unit 202, and supplies the generated prediction image to the selection unit 213.
- the selection unit 213 selects the prediction image generated by the motion prediction / compensation unit 212 or the intra prediction unit 211, and supplies the prediction image to the calculation unit 205.
- the adaptive offset unit 221 is supplied with a quad-tree structure and an offset value, which are adaptive offset parameters from the lossless decoding unit 202.
- the adaptive offset unit 221 performs offset filtering on the pixel values of the decoded image from the deblocking filter 206 using the information.
- the adaptive offset unit 221 supplies the image after the offset processing to the adaptive loop filter 222.
- the adaptive loop filter 222 performs a filtering process on the decoded image from the adaptive offset unit 221 using the adaptive filter coefficient that is a parameter of the adaptive loop filter supplied from the lossless decoding unit 202.
- as this filter, for example, a Wiener filter is used.
- the adaptive loop filter 222 supplies the image after the filter to the screen rearrangement buffer 207 and the frame memory 209.
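As a rough sketch of the data path just described, the following fragment adds the decoded residual to the prediction and passes the result through the deblocking, adaptive offset, and adaptive loop filter stages. The helper functions passed in are hypothetical stand-ins, not the actual HEVC filters:

```python
def reconstruct(decoded_residual, predicted, deblock, sao, alf):
    # Arithmetic unit 205: add the decoded residual data to the predicted image.
    recon = [r + p for r, p in zip(decoded_residual, predicted)]
    recon = deblock(recon)  # deblocking filter 206: removes block distortion
    recon = sao(recon)      # adaptive offset unit 221: offset filtering
    recon = alf(recon)      # adaptive loop filter 222 (e.g. a Wiener filter)
    # The result goes to the screen rearrangement buffer 207 and frame memory 209.
    return recon
```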
- FIG. 15 is a block diagram illustrating a configuration example of the lossless decoding unit 202.
- the lossless decoding unit 202 is configured to include a variable length coding (VLC) decoding unit 231, a decoding control unit 232, an acquisition unit 233, and a context-adaptive binary arithmetic coding (CABAC) decoding unit 234.
- VLC: variable length coding
- CABAC: context-adaptive binary arithmetic coding
- the VLC decoding unit 231, the decoding control unit 232, and the acquisition unit 233 are implemented as firmware 241 by being executed by a CPU (not shown) or the like.
- the CABAC decoding unit 234 which performs relatively heavy processing is realized as hardware 242 by setting logic and the like.
- the VLC decoding unit 231 is supplied with the encoded data read from the accumulation buffer 201 at a predetermined timing.
- the VLC decoding unit 231 incorporates an acquisition unit 233.
- the acquisition unit 233 acquires the encoded parameter to be decoded from the encoded data.
- the VLC decoding unit 231 performs variable-length decoding or fixed-length decoding processing, corresponding to the coding of the VLC coding unit 131 in FIG. 2, on the coding parameters acquired by the acquisition unit 233 other than the arithmetic coding parameters that control the decoding process in units of slices or pictures.
- the VLC decoding unit 231 supplies, to the CABAC decoding unit 234, the arithmetic coding parameters that control decoding processing in units of slices or pictures among the parameters acquired by the acquisition unit 233, and notifies the decoding control unit 232 to that effect.
- the VLC decoding unit 231 supplies an initialization parameter, which is also a coding parameter, to the CABAC decoding unit 234, and notifies the decoding control unit 232 to that effect.
- the VLC decoding unit 231 receives the parameter arithmetically decoded by the CABAC decoding unit 234.
- the VLC decoding unit 231 supplies the decoded parameter and the parameter arithmetically decoded by the CABAC decoding unit 234 to the corresponding units of the image decoding apparatus 200.
- when the prediction mode information is intra prediction mode information, the prediction mode information subjected to variable length decoding or fixed length decoding is supplied to the intra prediction unit 211.
- when the prediction mode information is inter prediction mode information, the prediction mode information subjected to variable length decoding or fixed length decoding and the corresponding motion vector information are supplied to the motion prediction / compensation unit 212.
- parameters related to the arithmetically decoded adaptive offset filter are supplied to the adaptive offset unit 221.
- Parameters related to the arithmetically decoded adaptive loop filter are supplied to the adaptive loop filter 222.
- in response to the notification from the VLC decoding unit 231, the decoding control unit 232 causes the CABAC decoding unit 234 to start the initialization process and the decoding process. The decoding control unit 232 also causes the VLC decoding unit 231 to start the decoding process in response to the notification from the CABAC decoding unit 234.
- under the control of the decoding control unit 232, the CABAC decoding unit 234 performs initialization using the initialization parameters supplied from the VLC decoding unit 231. Also under the control of the decoding control unit 232, the CABAC decoding unit 234 performs, on the encoded arithmetic coding parameters and data (transformation coefficients) from the VLC decoding unit 231, arithmetic decoding corresponding to the arithmetic coding of the CABAC encoding unit 134 in FIG. 2. The CABAC decoding unit 234 supplies the arithmetically decoded parameters to the VLC decoding unit 231.
- the CABAC decoding unit 234 supplies the arithmetically decoded data (quantized orthogonal transformation coefficients) to the inverse quantization unit 203.
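The division of labor in the lossless decoding unit 202 can be sketched as follows. The routing function and the parameter-name set are hypothetical, assuming the slice header layout of FIG. 4 or 5:

```python
# Parameters that control decoding in units of slices or pictures and are
# arithmetically coded (an assumed set, per the description above).
ARITHMETIC_PARAMS = {"sao_param", "alf_cu_control_param"}

def route_parameter(name, payload, vlc_decode, cabac_decode):
    """Return (decoded_value, which_decoder) for one coding parameter.

    Light parameters are decoded by the VLC decoding unit 231 (firmware 241);
    the relatively heavy arithmetic coding parameters are handed to the
    CABAC decoding unit 234 (hardware 242).
    """
    if name in ARITHMETIC_PARAMS:
        return cabac_decode(payload), "cabac"
    return vlc_decode(payload), "vlc"
```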
- in step S201, the accumulation buffer 201 accumulates the transmitted encoded data.
- in step S202, the lossless decoding unit 202 decodes the encoded data supplied from the accumulation buffer 201. The details of the decoding process will be described later with reference to FIG. 17. Here, the I picture, P picture, and B picture encoded by the lossless encoding unit 106 of FIG. 1 are decoded.
- coding parameters other than arithmetic coding parameters that control coding processing in units of slices or pictures are variable-length (or fixed-length) decoded.
- arithmetic coding parameters that control decoding processing in units of slices or pictures, which are added to the end of the slice header or the top of slice data, are arithmetically decoded.
- Arithmetic coding parameters that control decoding processing in units of slices or pictures are parameters related to the adaptive offset filter, parameters related to the adaptive loop filter, etc., as described above with reference to FIG.
- the prediction mode information is intra prediction mode information
- the prediction mode information is supplied to the intra prediction unit 211.
- the prediction mode information is inter prediction mode information
- motion vector information and the like corresponding to the prediction mode information are supplied to the motion prediction / compensation unit 212.
- Parameters related to the adaptive offset filter are supplied to the adaptive offset unit 221.
- Parameters related to the adaptive loop filter are supplied to the adaptive loop filter 222.
- in step S203, the intra prediction unit 211 or the motion prediction / compensation unit 212 performs predicted image generation processing corresponding to the prediction mode information supplied from the lossless decoding unit 202.
- that is, when intra prediction mode information is supplied from the lossless decoding unit 202, the intra prediction unit 211 generates the Most Probable Mode and generates an intra prediction image in the intra prediction mode by parallel processing.
- the motion prediction / compensation unit 212 performs motion prediction / compensation processing in the inter prediction mode to generate an inter prediction image.
- the prediction image (intra prediction image) generated by the intra prediction unit 211 or the prediction image (inter prediction image) generated by the motion prediction / compensation unit 212 is supplied to the selection unit 213.
- in step S204, the selection unit 213 selects a predicted image. That is, the predicted image generated by the intra prediction unit 211 or the predicted image generated by the motion prediction / compensation unit 212 is supplied to the selection unit 213. The supplied prediction image is therefore selected and supplied to the calculation unit 205, where it is added to the output of the inverse orthogonal transformation unit 204 in step S207 described later.
- the transform coefficient decoded by the lossless decoding unit 202 in step S202 described above is also supplied to the inverse quantization unit 203.
- in step S205, the inverse quantization unit 203 inversely quantizes the transform coefficient decoded by the lossless decoding unit 202 with a characteristic corresponding to the characteristic of the quantization unit 105 in FIG. 1.
- in step S206, the inverse orthogonal transform unit 204 performs inverse orthogonal transform on the transform coefficient inversely quantized by the inverse quantization unit 203 with a characteristic corresponding to the characteristic of the orthogonal transform unit 104 in FIG. 1.
- as a result, the difference information corresponding to the input of the orthogonal transform unit 104 in FIG. 1 (the output of the arithmetic unit 103) is decoded.
- in step S207, the calculation unit 205 adds the prediction image, selected in the process of step S204 described above and input through the selection unit 213, to the difference information.
- the original image is thus decoded.
- in step S208, the deblocking filter 206 performs deblocking filter processing on the image output from the calculation unit 205. This removes block distortion in the entire screen.
- the image after filtering is supplied to the adaptive offset unit 221.
- in step S209, the adaptive offset unit 221 performs adaptive offset processing.
- in this processing, the quad-tree structure and the offset value, which are the parameters of the adaptive offset filter from the lossless decoding unit 202, are used to perform offset filter processing on the image after deblocking.
- the pixel values after offset are supplied to the adaptive loop filter 222.
- in step S210, the adaptive loop filter 222 performs adaptive loop filter processing on the pixel values after the offset processing using the adaptive filter coefficient, which is a parameter of the adaptive loop filter from the lossless decoding unit 202.
- the pixel values after the adaptive filter processing are output to the screen rearrangement buffer 207 and the frame memory 209.
- in step S211, the frame memory 209 stores the adaptively filtered image.
- in step S212, the screen rearrangement buffer 207 rearranges the images after the adaptive loop filter 222. That is, the frames rearranged into encoding order by the screen rearrangement buffer 102 of the image encoding device 100 are rearranged back into the original display order.
- in step S213, the D / A conversion unit 208 D / A converts the image from the screen rearrangement buffer 207. This image is output to a display (not shown) and displayed.
- when step S213 ends, the decoding process ends.
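The order of steps S201 to S213 can be summarized as a simple pipeline. The step names and the `steps` callback table below are hypothetical, used only to make the ordering explicit:

```python
DECODING_ORDER = [
    "accumulate",         # S201: accumulation buffer 201
    "lossless_decode",    # S202: lossless decoding unit 202
    "predict",            # S203: intra prediction 211 / motion prediction 212
    "select_prediction",  # S204: selection unit 213
    "inverse_quantize",   # S205: inverse quantization unit 203
    "inverse_transform",  # S206: inverse orthogonal transform unit 204
    "add_prediction",     # S207: calculation unit 205
    "deblock",            # S208: deblocking filter 206
    "adaptive_offset",    # S209: adaptive offset unit 221
    "adaptive_loop",      # S210: adaptive loop filter 222
    "store_frame",        # S211: frame memory 209
    "reorder",            # S212: screen rearrangement buffer 207
    "da_convert",         # S213: D/A conversion unit 208
]

def decode_picture(steps, data=None):
    # Run each stage in order, feeding each stage's output to the next.
    for name in DECODING_ORDER:
        data = steps[name](data)
    return data
```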
- the encoded data read from the accumulation buffer 201 at a predetermined timing is supplied to the VLC decoding unit 231.
- the slice header and slice data of the encoded data are configured as shown in FIG. 4 or 5 described above.
- in step S231, the acquisition unit 233 of the VLC decoding unit 231 acquires the encoded parameter to be decoded from the slice header.
- in step S232, the VLC decoding unit 231 performs VLC decoding on the coding parameter acquired by the acquisition unit 233.
- in step S233, the VLC decoding unit 231 determines whether slice_type and slice_qp_delta, which are coding parameters and are the initialization parameters necessary for CABAC initialization, are aligned. For example, if the slice header is configured as shown in FIG. 4, slice_type and slice_qp_delta are aligned near the top, so decoding of both is completed early.
- if it is determined in step S233 that slice_type and slice_qp_delta are not yet aligned, the process returns to step S231, and the subsequent processes are repeated. If it is determined in step S233 that slice_type and slice_qp_delta are aligned, the slice_type and slice_qp_delta parameters are supplied to the CABAC decoding unit 234, and the decoding control unit 232 is notified to that effect.
- in response, the decoding control unit 232 causes the CABAC decoding unit 234 to perform initialization in step S234.
- when the initialization ends, the CABAC decoding unit 234 notifies the decoding control unit 232 of the end. After this, CABAC decoding can be performed at any timing.
- in step S235, the acquiring unit 233 further acquires the encoded parameter to be decoded from the slice header.
- in step S236, the VLC decoding unit 231 determines whether the parameter acquired by the acquiring unit 233 is sao_param (), which is a parameter of the adaptive offset filter, or alf_cu_control_param () of the adaptive loop filter (that is, whether it is an arithmetic coding parameter).
- as described above, the adaptive offset filter parameter sao_param () and the adaptive loop filter parameter alf_cu_control_param () are set at the end of the slice header or at the top of the slice data. Therefore, the acquiring unit 233 acquires these parameters from the end of the slice header or the top of the slice data.
- if it is determined in step S236 that the acquired parameter is neither the adaptive offset filter parameter sao_param () nor the adaptive loop filter parameter alf_cu_control_param () (that is, not an arithmetic coding parameter), the process proceeds to step S237.
- in step S237, the VLC decoding unit 231 performs VLC decoding (variable length or fixed length decoding) on the parameter acquired in step S235.
- if it is determined in step S236 that the acquired parameter is sao_param (), which is a parameter of the adaptive offset filter, or alf_cu_control_param () of the adaptive loop filter (that is, an arithmetic coding parameter), the process proceeds to step S238.
- in this case, the VLC decoding unit 231 supplies the parameter acquired in step S235 to the CABAC decoding unit 234, and notifies the decoding control unit 232 to that effect.
- in step S238, under the control of the decoding control unit 232, the CABAC decoding unit 234 performs CABAC decoding (arithmetic decoding) on the parameter acquired in step S235. That is, the parameter arithmetically decoded here is an arithmetic coding parameter, namely the adaptive offset filter parameter sao_param () or the adaptive loop filter parameter alf_cu_control_param ().
- the CABAC decoding unit 234 supplies the decoded parameter to the VLC decoding unit 231 and notifies the decoding control unit 232 of the end of the decoding.
- in response, the decoding control unit 232 notifies the VLC decoding unit 231 of the end.
- in step S239, the VLC decoding unit 231 determines whether decoding of all the parameters is completed. If it is determined that the decoding is not yet completed, the process returns to step S235, and the subsequent processes are repeated.
- if it is determined in step S239 that decoding of all the parameters is completed, the process proceeds to step S240.
- in this case, the VLC decoding unit 231 supplies the transform coefficients, which are the encoded data from the accumulation buffer 201, to the CABAC decoding unit 234, and notifies the decoding control unit 232 to that effect.
- the decoded parameters are accumulated in the VLC decoding unit 231, and are also used for decoding of the transform coefficients.
- these decoded parameters may be supplied to the corresponding units of the image decoding apparatus 200 when decoding of all the parameters is completed, or may be supplied to the corresponding units of the image decoding apparatus 200 each time decoding of a parameter is completed.
- in step S240, the CABAC decoding unit 234 performs CABAC decoding (arithmetic decoding) of the data (transform coefficients) under the control of the decoding control unit 232, supplies the decoded data to the inverse quantization unit 203, and notifies the decoding control unit 232 of the end of the decoding. After the process of step S240, the process returns to step S202 described above.
- note that in step S236, it is determined whether the parameter is sao_param () or alf_cu_control_param (); however, the determination may instead be made simply by the type of coding, that is, whether the parameter is arithmetically encoded or variable-length (fixed-length) encoded.
- decoding processing can be speeded up by arranging arithmetic coding parameters that control coding processing or decoding processing in slice or picture units at the end of the slice header or at the top of slice data.
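The parameter decoding loop of steps S231 to S239 can be sketched as follows, assuming a hypothetical `cabac` object with `init` and `decode` methods. The point is that CABAC initialization can start as soon as slice_type and slice_qp_delta are decoded, overlapping with the remaining VLC decoding:

```python
# Arithmetic coding parameters grouped at the end of the slice header or
# the top of slice data (assumed set, for illustration).
ARITHMETIC = {"sao_param", "alf_cu_control_param"}

def decode_slice_header(params, vlc_decode, cabac):
    """params: ordered (name, payload) pairs acquired by the acquisition unit 233."""
    decoded, init_started = {}, False
    for name, payload in params:
        if name in ARITHMETIC:
            # S236/S238: arithmetic coding parameters -> CABAC decoding unit 234
            decoded[name] = cabac.decode(payload)
        else:
            # S232/S237: other parameters -> VLC (variable/fixed length) decoding
            decoded[name] = vlc_decode(payload)
        # S233/S234: once slice_type and slice_qp_delta are both available,
        # start CABAC initialization in parallel with the remaining decoding.
        if not init_started and {"slice_type", "slice_qp_delta"} <= decoded.keys():
            cabac.init(decoded["slice_type"], decoded["slice_qp_delta"])
            init_started = True
    return decoded
```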
- FIG. 18 is a diagram illustrating an example of syntax of a slice header generated by the image coding device 100.
- the numbers at the left end of each line are line numbers attached for explanation.
- slice_type is set in the fourth line.
- the slice_type indicates which of the I slice, P slice, and B slice this slice is, and as described above, is a parameter necessary for CABAC initialization.
- cabac_init_idc is set.
- this cabac_init_idc was used in the H.264 / AVC system to set the CABAC table, but at present, its use has not been decided in the HEVC system.
- slice_qp_delta is set. This slice_qp_delta is a parameter necessary for CABAC initialization, as described above.
- sao_param () is set as if (sample_adaptive_offset_enabled_flag) sao_param ().
- this sao_param () is a parameter of the adaptive offset filter. That is, in the example of FIG. 18, not all of these parameters but only those parameters that control the encoding process or the decoding process in slice or picture units are rearranged to the top of the slice data. The details will be described later with reference to FIGS. 19 and 20.
- alf_param () is set.
- alf_cu_control_param () is set after this alf_param (), but in the example of FIG. 18, the arrangement is moved to the top of slice data.
- FIG. 19 is a diagram illustrating an example of syntax of sao_param () generated by the image coding device 100.
- the numbers at the left end of each line are line numbers attached for explanation.
- sample_adaptive_offset_flag on the second line is a flag indicating whether or not to perform an adaptive offset filter.
- this flag would normally be followed by the arithmetic coding parameters that control coding or decoding in units of slices or pictures, but in the example of FIG. 19, only sample_adaptive_offset_flag is set as it is.
- FIG. 20 is a diagram illustrating an example of syntax of slice data generated by the image coding device 100.
- the numbers at the left end of each line are line numbers attached for explanation.
- alf_cu_control_param () is set in the fourth and fifth lines as if (adaptive_loop_filter_enabled_flag) alf_cu_control_param ().
- This alf_cu_control_param () is an arithmetic coding parameter that controls coding processing or decoding processing in units of slices or pictures among parameters of the adaptive loop filter, and is set at the top of slice data.
- sao_split_param (0, 0, 0) and sao_offset_param (0, 0, 0) are set as if (sample_adaptive_offset_flag) { sao_split_param (0, 0, 0) sao_offset_param (0, 0, 0) }.
- This parameter is an arithmetic coding parameter that controls coding processing or decoding processing in units of slices or pictures among parameters of the adaptive offset filter, and is set at the top of slice data.
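The rearrangement of FIGS. 18 to 20 can be illustrated as follows. The list contents are a simplified, hypothetical subset of the actual syntax elements:

```python
# Conventional layout: arithmetic coding parameters sit in the middle or at
# the end of the slice header.
conventional_header = ["slice_type", "cabac_init_idc", "slice_qp_delta",
                       "sao_param", "alf_param", "alf_cu_control_param"]

# Reordered layout (FIGS. 18-20): the arithmetic coding parameters that
# control slice/picture-level decoding are moved to the top of slice data,
# so the header itself needs only VLC decoding.
moved = {"sao_param", "alf_cu_control_param"}
reordered_header = [p for p in conventional_header if p not in moved]
reordered_slice_data_top = ["alf_cu_control_param",
                            "sao_split_param", "sao_offset_param"]
```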
- FIG. 21 is a diagram illustrating another example of the syntax of the slice header generated by the image coding device 100.
- the numbers at the left end of each line are line numbers attached for explanation.
- slice_type is set in the fourth line.
- the slice_type indicates which of the I slice, P slice, and B slice this slice is, and as described above, is a parameter necessary for CABAC initialization.
- cabac_init_idc is set.
- this cabac_init_idc was used in the H.264 / AVC system to set the CABAC table, but at present, its use has not been decided in the HEVC system.
- slice_qp_delta is set. This slice_qp_delta is a parameter necessary for CABAC initialization, as described above.
- alf_cu_control_param is set.
- This alf_cu_control_param () is an arithmetic coding parameter that controls coding processing or decoding processing in units of slices or pictures among parameters of the adaptive loop filter, and is set at the end of the slice header.
- sao_param () is set as if (sample_adaptive_offset_enabled_flag) sao_param ().
- This sao_param () is a parameter of the adaptive offset filter, is an arithmetic coding parameter that controls coding processing or decoding processing in slice or picture units, and is set at the end of the slice header.
- sao_param () has been rearranged to the end of the slice header. slice_qp_delta is present on the 30th line, but it is intentionally not moved, because no parameter that controls the encoding process or the decoding process in slice or picture units follows immediately after it.
- FIG. 22 is a diagram illustrating still another example of the syntax of the slice header generated by the image coding device 100.
- the numbers at the left end of each line are line numbers attached for explanation.
- slice_type is set in the fourth line.
- the slice_type indicates which of the I slice, P slice, and B slice this slice is, and as described above, is a parameter necessary for CABAC initialization.
- slice_qp_delta is set. This slice_qp_delta is a parameter necessary for CABAC initialization, similar to slice_type.
- cabac_init_idc is set.
- this cabac_init_idc was used in the H.264 / AVC system to set the CABAC table, but at present, its use has not been decided in the HEVC system.
- slice_qp_delta may be placed immediately after cabac_init_idc, as shown in FIG. 5 described above, to combine with cabac_init_idc.
- cabac_init_idc may be placed immediately after slice_type together with slice_qp_delta.
- alf_cu_control_param is set.
- This alf_cu_control_param () is an arithmetic coding parameter that controls coding processing or decoding processing in units of slices or pictures among parameters of the adaptive loop filter, and is set at the end of the slice header.
- sao_param () is set as if (sample_adaptive_offset_enabled_flag) sao_param ().
- This sao_param () is a parameter of the adaptive offset filter, is an arithmetic coding parameter that controls coding processing or decoding processing in slice or picture units, and is set at the end of the slice header.
- FIG. 23 is a diagram illustrating an example of the syntax of alf_cu_control_param (). The numbers at the left end of each line are line numbers attached for explanation.
- in alf_cu_control_param (), only alf_cu_flag [i] is an arithmetic coding parameter that controls coding processing or decoding processing in slice or picture units.
- although alf_cu_control_param () is placed at the top of the slice data in the example of FIG. 18 described above, only alf_cu_flag [i] of alf_cu_control_param (), rather than the entire alf_cu_control_param (), may be placed at the top of the slice data. In that case, the other parameters in alf_cu_control_param (), such as those inside if statements, may be VLC encoded as described above. Of course, when the entire alf_cu_control_param () is placed at the top of the slice data as in the example of FIG. 18, VLC and CABAC may be mixed, or all of the parameters may be placed at the top of the slice data and all CABAC encoded.
- in the above description, the parameter sao_param () of the adaptive offset filter is arranged after the parameter alf_cu_control_param () of the adaptive loop filter at the end of the slice header or the top of the slice data.
- however, as long as they are placed at the end of the slice header or the top of the slice data, the parameters of the adaptive offset filter may be arranged before the parameters of the adaptive loop filter.
- the parameter alf_cu_control_param () of the adaptive loop filter and the parameter sao_param () of the adaptive offset filter are described above as the arithmetic coding parameters that control coding processing in units of slices or pictures, but the present technology is not limited to these. That is, the present technology can also be applied to any other parameter that controls coding processing or decoding processing in slice or picture units, as long as it is an arithmetic coding parameter to be arithmetically coded.
- by grouping such arithmetic coding parameters at the end of the slice header or the top of the slice data, the processing can be speeded up.
- arithmetic coding parameters that control coding processing or decoding processing in slice or picture units include parameters such as a quantization matrix.
- in the above, the H.264 / AVC system and the HEVC system are used as a base for the encoding method.
- the present disclosure is not limited to this, and other coding schemes / decoding schemes may be applied that include at least one of an adaptive offset filter and an adaptive loop filter in a motion prediction / compensation loop.
- for example, the present disclosure can be applied to an image encoding device and an image decoding device used when receiving image information (a bit stream) compressed by orthogonal transformation such as discrete cosine transformation and motion compensation, as in MPEG, H.26x, etc., via network media such as satellite broadcasting, cable television, the Internet, or a cellular phone.
- the present disclosure can also be applied to an image encoding device and an image decoding device used when processing such image information on storage media such as an optical disk, a magnetic disk, and a flash memory.
- FIG. 24 shows an example of a multi-viewpoint image coding method.
- the multi-viewpoint image includes images of a plurality of viewpoints, and an image of a predetermined one of the plurality of viewpoints is designated as the image of the base view.
- An image of each viewpoint other than the base view image is treated as a non-base view image.
- arithmetic coding parameters can be collectively arranged in each view (the same view). Also, in each view (different views), it is possible to share arithmetic coding parameters arranged together in other views.
- the arithmetic coding parameters arranged together in the base view are used in at least one non-base view.
- variable-length (or fixed-length) coding parameters can be arranged together. Also, in each view (different views), variable-length (or fixed-length) coding parameters arranged together in other views can be shared.
- variable length (or fixed length) coding parameters placed together in the base view are used in at least one non-base view.
- variable-length (fixed-length) encoding and variable-length (fixed-length) decoding can be performed efficiently together.
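The parameter sharing between views described above can be sketched with a hypothetical helper (not part of the described apparatus): a non-base view either carries its own collectively arranged parameters or reuses the ones grouped in the base view.

```python
def params_for_view(base_params, own_params=None):
    """Return the collectively arranged coding parameters a view should use.

    A non-base view may reuse the parameters grouped in the base view
    (own_params is None) instead of carrying its own copy; the same idea
    applies to both arithmetic and variable/fixed-length coding parameters.
    """
    return own_params if own_params is not None else base_params
```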
- FIG. 25 is a diagram showing a multi-viewpoint image coding apparatus which performs the above-described multi-viewpoint image coding.
- the multi-viewpoint image coding device 600 includes a coding unit 601, a coding unit 602, and a multiplexing unit 603.
- the encoding unit 601 encodes a base view image to generate a base view image coded stream.
- the encoding unit 602 encodes the non-base view image to generate a non-base view image coded stream.
- the multiplexing unit 603 multiplexes the base view image coded stream generated by the coding unit 601 and the non-base view image coded stream generated by the coding unit 602 to generate a multi-view image coded stream.
- the image coding apparatus 100 (FIG. 1) can be applied to the coding unit 601 and the coding unit 602 of the multi-viewpoint image coding apparatus 600.
- the multi-viewpoint image coding apparatus 600 sets and transmits the arithmetic coding parameters arranged collectively by the coding unit 601 and the arithmetic coding parameters arranged collectively by the coding unit 602.
- the arithmetic coding parameters that the coding unit 601 collectively arranges as described above may be set so as to be shared and used by the coding unit 601 and the coding unit 602 and transmitted.
- conversely, the arithmetic coding parameters that are collectively arranged by the coding unit 602 may be set so as to be shared and used by the coding unit 601 and the coding unit 602, and transmitted.
- variable-length (or fixed-length) coding parameters may be set so as to be shared and used by the coding unit 601 and the coding unit 602 and transmitted.
- FIG. 26 is a diagram showing a multi-viewpoint image decoding apparatus that performs the above-described multi-viewpoint image decoding.
- the multi-viewpoint image decoding device 610 includes a demultiplexing unit 611, a decoding unit 612, and a decoding unit 613.
- the demultiplexing unit 611 demultiplexes a multi-view image coded stream in which the base view image coded stream and the non-base view image coded stream are multiplexed, and extracts the base view image coded stream and the non-base view image coded stream.
- the decoding unit 612 decodes the base view image coded stream extracted by the demultiplexing unit 611 to obtain a base view image.
- the decoding unit 613 decodes the non-base view image coded stream extracted by the demultiplexing unit 611 to obtain a non-base view image.
- the image decoding apparatus 200 (FIG. 14) can be applied to the decoding unit 612 and the decoding unit 613 of the multi-viewpoint image decoding apparatus 610.
- processing is performed using the arithmetic coding parameters collectively arranged by the encoding unit 601 and decoded by the decoding unit 612, and the arithmetic coding parameters collectively arranged by the encoding unit 602 and decoded by the decoding unit 613.
- alternatively, the arithmetic coding parameters collectively arranged by the encoding unit 601 (or the encoding unit 602) may be shared and used by the encoding unit 601 and the encoding unit 602, and transmitted. In this case, in the multi-viewpoint image decoding apparatus 610, processing is performed using the arithmetic coding parameters collectively arranged by the encoding unit 601 (or the encoding unit 602) and decoded by the decoding unit 612 (or the decoding unit 613). The above is also true for variable-length (or fixed-length) coding parameters.
- FIG. 27 shows an example of a hierarchical image coding method.
- the hierarchical image includes images of a plurality of layers (resolutions), and an image of a predetermined layer of the plurality of resolutions is designated as the image of the base layer. Images of each layer other than the image of the base layer are treated as images of the non-base layer.
- arithmetic coding parameters can be collectively arranged in each layer (the same layer).
- also, in each layer (different layers), it is possible to share arithmetic coding parameters arranged collectively in other layers.
- the arithmetic coding parameters arranged together in the base layer are used in at least one non-base layer.
- in each layer (the same layer), variable-length (or fixed-length) coding parameters can be arranged collectively. Also, in each layer (different layers), variable-length (or fixed-length) coding parameters arranged collectively in other layers can be shared.
- variable length (or fixed length) parameters arranged together in the base layer are used in at least one non-base layer.
- variable-length (fixed-length) encoding and variable-length (fixed-length) decoding can be performed efficiently together.
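The grouping above can be sketched as follows (hypothetical element names; a simplified model, not the actual syntax). Parameters tagged with the same coding method are regrouped so that each method's parameters are contiguous and can be processed in a single pass, with the arithmetic coding parameters placed after the variable-length coding parameters:

```python
# Hypothetical sketch: regroup syntax elements so same-method parameters are
# adjacent, with arithmetic coding parameters after the VLC parameters.
def arrange_collectively(syntax_elements):
    """Group (name, method) pairs so same-method parameters are adjacent."""
    arithmetic = [e for e in syntax_elements if e[1] == "arithmetic"]
    fixed_or_vlc = [e for e in syntax_elements if e[1] != "arithmetic"]
    return fixed_or_vlc + arithmetic  # VLC block first, arithmetic block after

elements = [("alf_param", "arithmetic"), ("slice_qp", "vlc"),
            ("sao_param", "arithmetic"), ("slice_type", "vlc")]
arranged = arrange_collectively(elements)
```

With this ordering, a decoder can finish all variable-length (or fixed-length) decoding before starting arithmetic decoding, instead of switching between the two methods element by element.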
- FIG. 28 is a diagram showing a hierarchical image coding apparatus which performs the above-mentioned hierarchical image coding.
- hierarchical image coding apparatus 620 has coding section 621, coding section 622, and multiplexing section 623.
- the encoding unit 621 encodes a base layer image to generate a base layer image coded stream.
- the encoding unit 622 encodes the non-base layer image to generate a non-base layer image coded stream.
- the multiplexing unit 623 multiplexes the base layer image coded stream generated by the coding unit 621 and the non-base layer image coded stream generated by the coding unit 622 to generate a hierarchical image coded stream.
- the image coding apparatus 100 (FIG. 1) can be applied to the coding unit 621 and the coding unit 622 of the hierarchical image coding apparatus 620.
- the hierarchical image coding device 620 sets and transmits the arithmetic coding parameters arranged collectively by the coding unit 621 and the arithmetic coding parameters arranged collectively by the coding unit 622.
- the arithmetic coding parameters that the coding unit 621 collectively arranges as described above may be set to be shared and used by the coding unit 621 and the coding unit 622, and may be transmitted. Conversely, the coding unit 621 and the coding unit 622 may be set so as to share and use the arithmetic coding parameters arranged collectively by the coding unit 622, which are then transmitted. The above is also true for variable-length (or fixed-length) coding parameters.
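The layer-by-layer encoding and multiplexing described above can be sketched as follows (hypothetical names; a simplified model of the hierarchical image coding device, with the actual encoder stubbed out):

```python
# Hypothetical sketch: encode the base layer and each non-base layer, then
# multiplex the per-layer coded streams into one hierarchical coded stream.
def hierarchical_encode(base_image, non_base_images, encode):
    """Return the multiplexed hierarchical image coded stream as (layer, stream) pairs."""
    streams = [("base", encode(base_image))]
    streams += [("non_base_%d" % i, encode(img))
                for i, img in enumerate(non_base_images)]
    return streams

# Stub encoder standing in for the image coding apparatus 100.
stream = hierarchical_encode("L0", ["L1", "L2"], lambda img: "enc(" + img + ")")
```

Each layer keeps its own coded stream inside the multiplex, which is what allows the demultiplexer on the decoding side to extract the base layer and non-base layers separately.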
- FIG. 29 is a diagram showing a hierarchical image decoding device that performs the hierarchical image decoding described above.
- the hierarchical image decoding device 630 includes a demultiplexing unit 631, a decoding unit 632, and a decoding unit 633.
- a demultiplexing unit 631 demultiplexes the hierarchical image coded stream in which the base layer image coded stream and the non-base layer image coded stream are multiplexed, and extracts the base layer image coded stream and the non-base layer image coded stream.
- the decoding unit 632 decodes the base layer image coded stream extracted by the demultiplexing unit 631 to obtain a base layer image.
- the decoding unit 633 decodes the non-base layer image coded stream extracted by the demultiplexing unit 631 to obtain a non-base layer image.
- the image decoding apparatus 200 (FIG. 14) can be applied to the decoding unit 632 and the decoding unit 633 of the hierarchical image decoding device 630.
- in the hierarchical image decoding device 630, processing is performed using the arithmetic coding parameters collectively arranged by the encoding unit 621 and decoded by the decoding unit 632, and the arithmetic coding parameters collectively arranged by the encoding unit 622 and decoded by the decoding unit 633.
- the arithmetic coding parameters collectively arranged by the encoding unit 621 (or the encoding unit 622) may be shared and used by the encoding unit 621 and the encoding unit 622, and transmitted. In this case, in the hierarchical image decoding device 630, processing is performed using the arithmetic coding parameters collectively arranged by the encoding unit 621 (or the encoding unit 622) and decoded by the decoding unit 632 (or the decoding unit 633). The above is also true for variable-length (or fixed-length) coding parameters.
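The decoding side is the mirror image of the encoding sketch (hypothetical names; the per-layer decoder is stubbed out): the demultiplexer extracts the base layer stream and the non-base layer streams, and each is decoded separately:

```python
# Hypothetical sketch: demultiplex the hierarchical coded stream and decode
# the base layer and each non-base layer with a stubbed per-layer decoder.
def hierarchical_decode(multiplexed, decode):
    """Return (base_layer_image, [non_base_layer_images])."""
    base = next(s for name, s in multiplexed if name == "base")
    non_base = [s for name, s in multiplexed if name != "base"]
    return decode(base), [decode(s) for s in non_base]

muxed = [("base", "enc(L0)"), ("non_base_0", "enc(L1)")]
images = hierarchical_decode(muxed, lambda s: s.replace("enc(", "").rstrip(")"))
```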
- the above-described series of processes may be performed by hardware or software.
- a program that configures the software is installed on a computer.
- the computer includes a computer incorporated in dedicated hardware, a general-purpose personal computer capable of executing various functions by installing various programs, and the like.
- a central processing unit (CPU) 801 of a computer 800 executes various processes according to a program stored in a read only memory (ROM) 802 or a program loaded from a storage unit 813 into a random access memory (RAM) 803.
- the RAM 803 also stores data necessary for the CPU 801 to execute various processes.
- the CPU 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804.
- An input / output interface 810 is also connected to the bus 804.
- connected to the input / output interface 810 are an input unit 811 including a keyboard and a mouse, an output unit 812 including a display such as a CRT (Cathode Ray Tube) or an LCD (Liquid Crystal Display) and a speaker, a storage unit 813 including a hard disk, and a communication unit 814 including a modem. The communication unit 814 performs communication processing via a network including the Internet.
- a drive 815 is also connected to the input / output interface 810 as necessary, removable media 821 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is attached as appropriate, and a computer program read therefrom is installed in the storage unit 813 as necessary.
- a program that configures the software is installed from a network or a recording medium.
- this recording medium is configured not only by the removable media 821 distributed separately from the apparatus main body in order to deliver the program to the user, such as a magnetic disk (including a flexible disk), an optical disk (a CD-ROM (Compact Disc-Read Only Memory) or a DVD (Digital Versatile Disc)), a magneto-optical disk (including an MD (Mini Disc)), or a semiconductor memory on which the program is recorded, but also by the ROM 802 or the hard disk included in the storage unit 813 on which the program is recorded, distributed to the user in a state of being incorporated in the apparatus main body in advance.
- the program executed by the computer may be a program in which processing is performed in chronological order according to the order described in this specification, or may be a program in which processing is performed in parallel or at a necessary timing, such as when a call is made.
- the steps describing the program recorded on the recording medium include not only processing performed in chronological order according to the described order but also processing executed in parallel or individually, not necessarily chronologically.
- a system represents an entire apparatus configured by a plurality of devices (apparatuses).
- the configuration described above as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
- the configuration described as a plurality of devices (or processing units) in the above may be collectively configured as one device (or processing unit).
- configurations other than those described above may be added to the configuration of each device (or each processing unit).
- part of the configuration of one device (or processing unit) may be included in the configuration of another device (or another processing unit) as long as the configuration and operation of the entire system are substantially the same. That is, the present technology is not limited to the above-described embodiment, and various modifications can be made without departing from the scope of the present technology.
- the image encoding device and the image decoding device can be applied to a transmitter or a receiver used in satellite broadcasting, cable broadcasting such as cable TV, distribution on the Internet, distribution to terminals by cellular communication, and the like.
- the present invention can also be applied to various electronic devices such as a recording apparatus that records an image on a medium such as a magnetic disk or a flash memory, and a reproduction apparatus that reproduces an image from such a storage medium.
- FIG. 25 shows an example of a schematic configuration of a television set to which the embodiment described above is applied.
- the television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, and a bus 912.
- the tuner 902 extracts a signal of a desired channel from a broadcast signal received via the antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the coded bit stream obtained by demodulation to the demultiplexer 903. That is, the tuner 902 has a role as a transmission means in the television apparatus 900 for receiving a coded stream in which an image is coded.
- the demultiplexer 903 separates the video stream and audio stream of the program to be viewed from the coded bit stream, and outputs the separated streams to the decoder 904. Also, the demultiplexer 903 extracts auxiliary data such as an EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. When the coded bit stream is scrambled, the demultiplexer 903 may perform descrambling.
- the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. Further, the decoder 904 outputs the audio data generated by the decoding process to the audio signal processing unit 907.
- the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display a video. Also, the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via the network. Further, the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting. Furthermore, the video signal processing unit 905 may generate an image of a graphical user interface (GUI) such as a menu, a button, or a cursor, for example, and may superimpose the generated image on the output image.
- the display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays a video or an image on the video screen of a display device (for example, a liquid crystal display, a plasma display, or an OELD (Organic ElectroLuminescence Display) (organic EL display)).
- the audio signal processing unit 907 performs reproduction processing such as D / A conversion and amplification on audio data input from the decoder 904, and causes the speaker 908 to output audio. Further, the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
- the external interface 909 is an interface for connecting the television device 900 to an external device or a network.
- a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also serves as a transmission means in the television apparatus 900 for receiving the coded stream in which the image is coded.
- the control unit 910 includes a processor such as a CPU, and memories such as a RAM and a ROM.
- the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the television device 900 is started.
- the CPU controls the operation of the television apparatus 900 according to an operation signal input from, for example, the user interface 911 by executing a program.
- the user interface 911 is connected to the control unit 910.
- the user interface 911 has, for example, buttons and switches for the user to operate the television device 900, a receiver of remote control signals, and the like.
- the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
- the bus 912 mutually connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing unit 905, the audio signal processing unit 907, the external interface 909, and the control unit 910.
- the decoder 904 has the function of the image decoding apparatus according to the above-described embodiment. Thus, processing can be performed at high speed when decoding an image in the television device 900.
- FIG. 26 shows an example of a schematic configuration of a mobile phone to which the embodiment described above is applied.
- the mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a multiplexing and separating unit 928, a recording and reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
- the antenna 921 is connected to the communication unit 922.
- the speaker 924 and the microphone 925 are connected to the audio codec 923.
- the operation unit 932 is connected to the control unit 931.
- the bus 933 mutually connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931.
- the cellular phone 920 performs operations such as transmitting and receiving audio signals, transmitting and receiving electronic mail or image data, capturing images, and recording data in various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode.
- the analog voice signal generated by the microphone 925 is supplied to the voice codec 923.
- the audio codec 923 converts an analog audio signal into audio data, and A / D converts and compresses the converted audio data. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
- the communication unit 922 encodes and modulates audio data to generate a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
- the audio codec 923 decompresses and D / A converts audio data to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
- the control unit 931 generates character data constituting an electronic mail in accordance with an operation by the user via the operation unit 932. Further, the control unit 931 causes the display unit 930 to display characters. Further, the control unit 931 generates electronic mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated electronic mail data to the communication unit 922.
- a communication unit 922 encodes and modulates electronic mail data to generate a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. The communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the communication unit 922 demodulates and decodes the received signal to restore the e-mail data, and outputs the restored e-mail data to the control unit 931.
- the control unit 931 causes the display unit 930 to display the content of the e-mail, and stores the e-mail data in the storage medium of the recording and reproduction unit 929.
- the recording and reproducing unit 929 includes an arbitrary readable and writable storage medium.
- the storage medium may be a built-in storage medium such as a RAM or a flash memory, or may be an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disk, a USB (Universal Serial Bus) memory, or a memory card.
- the camera unit 926 captures an image of a subject to generate image data, and outputs the generated image data to the image processing unit 927.
- the image processing unit 927 encodes the image data input from the camera unit 926, and stores the encoded stream in the storage medium of the recording and reproducing unit 929.
- the demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922. The communication unit 922 encodes and modulates the stream to generate a transmission signal.
- the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the transmission signal and the reception signal may include a coded bit stream.
- the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
- the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
- the image processing unit 927 decodes the video stream to generate video data.
- the video data is supplied to the display unit 930, and the display unit 930 displays a series of images.
- the audio codec 923 decompresses and D / A converts the audio stream to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
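The stream separation performed by the demultiplexing unit 928 on the receive path can be sketched as follows (hypothetical packet representation; a simplified model in which the video stream goes to the image processing unit and the audio stream to the audio codec):

```python
# Hypothetical sketch: separate a multiplexed stream into its video and audio
# elementary streams, as the demultiplexing unit 928 does on reception.
def demultiplex(stream):
    """Return (video_packets, audio_packets) from (kind, packet) pairs."""
    video = [pkt for kind, pkt in stream if kind == "video"]
    audio = [pkt for kind, pkt in stream if kind == "audio"]
    return video, audio

video, audio = demultiplex([("video", "v0"), ("audio", "a0"), ("video", "v1")])
```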
- the image processing unit 927 has functions of the image encoding device and the image decoding device according to the above-described embodiment. Thus, processing can be performed at high speed when encoding and decoding an image in the mobile phone 920.
- FIG. 27 shows an example of a schematic configuration of a recording and reproducing apparatus to which the embodiment described above is applied.
- the recording / reproducing device 940 encodes, for example, audio data and video data of the received broadcast program, and records the encoded data on a recording medium.
- the recording and reproduction device 940 may encode, for example, audio data and video data acquired from another device and record the encoded data on a recording medium.
- the recording / reproducing device 940 reproduces the data recorded on the recording medium on the monitor and the speaker, for example, in accordance with the user's instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
- the recording / reproducing apparatus 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface 950.
- the tuner 941 extracts a signal of a desired channel from a broadcast signal received via an antenna (not shown) and demodulates the extracted signal. Then, the tuner 941 outputs the coded bit stream obtained by demodulation to the selector 946. That is, the tuner 941 has a role as a transmission means in the recording / reproducing device 940.
- the external interface 942 is an interface for connecting the recording and reproducing device 940 to an external device or a network.
- the external interface 942 may be, for example, an IEEE 1394 interface, a network interface, a USB interface, or a flash memory interface.
- video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 has a role as a transmission unit in the recording / reproducing device 940.
- the encoder 943 encodes video data and audio data when the video data and audio data input from the external interface 942 are not encoded. Then, the encoder 943 outputs the coded bit stream to the selector 946.
- the HDD 944 records an encoded bit stream obtained by compressing content data such as video and audio, various programs, and other data in an internal hard disk. Also, the HDD 944 reads these data from the hard disk when reproducing video and audio.
- the disk drive 945 records and reads data on the attached recording medium.
- the recording medium mounted on the disk drive 945 may be, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD + R, DVD + RW, etc.) or a Blu-ray (registered trademark) disc.
- the selector 946 selects the coded bit stream input from the tuner 941 or the encoder 943 at the time of recording video and audio, and outputs the selected coded bit stream to the HDD 944 or the disk drive 945. Also, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 at the time of reproduction of video and audio.
- the decoder 947 decodes the coded bit stream to generate video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948. Also, the decoder 947 outputs the generated audio data to an external speaker.
- the OSD 948 reproduces the video data input from the decoder 947 and displays the video.
- the OSD 948 may superimpose an image of a GUI such as a menu, a button, or a cursor on the video to be displayed.
- the control unit 949 includes a processor such as a CPU, and memories such as a RAM and a ROM.
- the memory stores programs executed by the CPU, program data, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the recording and reproducing device 940 is started.
- the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from, for example, the user interface 950 by executing a program.
- the user interface 950 is connected to the control unit 949.
- the user interface 950 includes, for example, buttons and switches for the user to operate the recording and reproducing device 940, a receiver of a remote control signal, and the like.
- the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
- the encoder 943 has the function of the image coding apparatus according to the embodiment described above.
- the decoder 947 has the function of the image decoding apparatus according to the above-described embodiment. Thus, processing can be performed at high speed when encoding and decoding an image in the recording and reproducing device 940.
- FIG. 28 shows an example of a schematic configuration of an imaging device to which the embodiment described above is applied.
- the imaging device 960 captures an object to generate an image, encodes image data, and records the image data in a recording medium.
- the imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus 972.
- the optical block 961 is connected to the imaging unit 962.
- the imaging unit 962 is connected to the signal processing unit 963.
- the display unit 965 is connected to the image processing unit 964.
- the user interface 971 is connected to the control unit 970.
- the bus 972 mutually connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970.
- the optical block 961 has a focus lens, an aperture mechanism, and the like.
- the optical block 961 forms an optical image of a subject on the imaging surface of the imaging unit 962.
- the imaging unit 962 includes an image sensor such as a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS), and converts an optical image formed on an imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
- the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
- the signal processing unit 963 outputs the image data after camera signal processing to the image processing unit 964.
- the image processing unit 964 encodes the image data input from the signal processing unit 963 to generate encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965.
- the image processing unit 964 may output the image data input from the signal processing unit 963 to the display unit 965 to display an image. The image processing unit 964 may superimpose the display data acquired from the OSD 969 on the image to be output to the display unit 965.
- the OSD 969 generates an image of a GUI such as a menu, a button, or a cursor, for example, and outputs the generated image to the image processing unit 964.
- the external interface 966 is configured as, for example, a USB input / output terminal.
- the external interface 966 connects the imaging device 960 and the printer, for example, when printing an image.
- a drive is connected to the external interface 966 as necessary.
- removable media such as a magnetic disk or an optical disk may be attached to the drive, and a program read from the removable media may be installed in the imaging device 960.
- the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
- the recording medium mounted in the media drive 968 may be, for example, any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
- the recording medium may be fixedly attached to the media drive 968 to configure a non-portable storage unit such as, for example, a built-in hard disk drive or an SSD (Solid State Drive).
- the control unit 970 includes a processor such as a CPU, and memories such as a RAM and a ROM.
- the memory stores programs executed by the CPU, program data, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the imaging device 960 starts up.
- the CPU controls the operation of the imaging device 960 according to an operation signal input from, for example, the user interface 971 by executing a program.
- the user interface 971 is connected to the control unit 970.
- the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
- the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
- the image processing unit 964 has functions of the image coding device and the image decoding device according to the above-described embodiment. Thereby, processing can be performed at high speed when encoding and decoding an image in the imaging device 960.
- each parameter is, for example, an arithmetic coding parameter of an adaptive loop filter, an adaptive offset filter, or a quantization matrix, a coding parameter subjected to variable-length (or fixed-length) coding (decoding), or an initialization parameter necessary for initialization of CABAC.
- the method of transmitting such information is not limited to such an example.
- the information may be transmitted or recorded as separate data associated with the coded bit stream without being multiplexed into the coded bit stream.
- the term “associate” means that an image (which may be a part of an image, such as a slice or a block) included in a bitstream can be linked with information corresponding to the image at the time of decoding.
- the information may be transmitted on a different transmission path from the image (or bit stream).
- the information may be recorded on a recording medium (or another recording area of the same recording medium) different from the image (or bit stream).
- the information and the image (or bit stream) may be associated with each other in any unit such as, for example, a plurality of frames, one frame, or a part in a frame.
- An image processing apparatus comprising: a receiving unit that receives a coded stream in which image data is coded in units having a hierarchical structure and in whose syntax arithmetic coding parameters subjected to arithmetic coding processing are collectively arranged, together with the arithmetic coding parameters; and a decoding unit that performs arithmetic decoding processing on the arithmetic coding parameters received by the receiving unit, and performs decoding processing on the coded stream received by the receiving unit using the arithmetically decoded arithmetic coding parameters.
- Coding parameters subjected to variable-length coding processing or fixed-length coding processing are collectively arranged in the syntax of the coded stream,
- the receiving unit receives the coding parameter from the coding stream,
- the image processing apparatus according to (1) wherein the decoding unit decodes the encoding parameter received by the receiving unit, and decodes the encoded stream using the decoded encoding parameter.
- Initialization parameters used when initializing arithmetic coding processing or arithmetic decoding processing are collectively arranged in the syntax of the coded stream,
- the receiving unit receives the initialization parameter from the encoded stream,
- the image processing apparatus according to any one of (1) to (3), further comprising: a control unit configured to perform control to initialize the arithmetic decoding process with reference to the initialization parameter received by the receiving unit.
- the arithmetic coding parameter is a parameter that controls coding processing or decoding processing at the picture level or slice level.
- the image processing apparatus according to any one of (1) to (5), wherein the arithmetic coding parameter is a parameter of a filter used when performing encoding processing or decoding processing.
- the parameter of the adaptive loop filter and the parameter of the adaptive offset filter are collectively arranged at the top of slice data of the encoded stream,
- the receiving unit receives the parameter of the adaptive loop filter and the parameter of the adaptive offset filter from the top of slice data of the encoded stream.
- the parameter of the adaptive loop filter and the parameter of the adaptive offset filter are collectively arranged at the end of the slice header of the encoded stream, The image processing apparatus according to (6), wherein the receiving unit receives the parameter of the adaptive loop filter and the parameter of the adaptive offset filter from the end of the slice header of the encoded stream.
- the initialization parameter is arranged near the top of the slice header of the encoded stream,
- the image processing apparatus according to (4), wherein the receiving unit receives the initialization parameter from near the top of the slice header of the encoded stream.
- an image processing method in which an image processing apparatus receives an encoded stream, in which arithmetic coding parameters subjected to arithmetic coding processing are collectively arranged in the syntax of an encoded stream obtained by encoding image data in units having a hierarchical structure, together with the arithmetic coding parameters; arithmetically decodes the received arithmetic coding parameters; and decodes the received encoded stream using the arithmetically decoded arithmetic coding parameters.
- an encoding unit for encoding image data in units having a hierarchical structure to generate an encoded stream;
- An arrangement unit configured to collectively arrange arithmetic coding parameters to be subjected to arithmetic coding processing in a syntax of a coded stream generated by the coding unit;
- An image processing apparatus comprising: a transmission unit that transmits the coded stream generated by the coding unit and the arithmetic coding parameter arranged by the arrangement unit.
- the arrangement unit collectively arranges the coding parameters to be subjected to variable-length coding processing or fixed-length coding processing,
- the image processing apparatus according to (11), wherein the transmission unit transmits the coding parameters arranged by the arrangement unit.
- the image processing device wherein the arrangement unit arranges the arithmetic coding parameter after the coding parameter.
- the arrangement unit arranges initialization parameters used at the time of initialization of arithmetic coding process or arithmetic decoding process collectively.
- the image processing apparatus according to (13), wherein the transmission unit transmits the initialization parameter arranged by the arrangement unit.
- the arithmetic coding parameter is a parameter that controls coding processing or decoding processing at the picture level or slice level.
- the image processing apparatus according to any one of (11) to (15), wherein the arithmetic coding parameter is a parameter of a filter used when performing encoding processing or decoding processing.
- the arrangement unit collectively arranges the parameter of the adaptive loop filter and the parameter of the adaptive offset filter on top of slice data of the coded stream,
- the image processing apparatus according to (16), wherein the transmission unit transmits the parameter of the adaptive loop filter arranged by the arrangement unit and the parameter of the adaptive offset filter.
- the arrangement unit collectively arranges the parameter of the adaptive loop filter and the parameter of the adaptive offset filter at an end of a slice header of the encoded stream,
- an image processing method in which an image processing apparatus encodes image data in units having a hierarchical structure to generate an encoded stream; collectively arranges, in the syntax of the generated encoded stream, arithmetic coding parameters to be subjected to arithmetic coding processing; and transmits the generated encoded stream and the arranged arithmetic coding parameters.
- Reference Signs List: 100 image coding device, 105 quantization unit, 106 lossless coding unit, 114 intra prediction unit, 115 motion prediction/compensation unit, 121 adaptive offset unit, 122 adaptive loop filter, 131 VLC coding unit, 132 coding control unit, 133 setting unit, 134 CABAC encoding unit, 141 firmware, 142 hardware, 200 image decoding device, 202 lossless decoding unit, 203 inverse quantization unit, 211 intra prediction unit, 212 motion prediction/compensation unit, 221 adaptive offset unit, 222 adaptive loop filter, 231 VLC decoding unit, 232 decoding control unit, 233 acquisition unit, 234 CABAC decoding unit, 241 firmware, 242 hardware
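As a rough illustration of the initialization parameters discussed in items (4), (9), and (19) above: because they sit near the top of the slice header, the arithmetic (CABAC-style) decoder can be set up before the rest of the header has been parsed. The sketch below is hypothetical, with names and the state-derivation formula invented for illustration (loosely modeled on H.264/AVC-style context initialization); nothing here is defined by the patent itself.

```python
# Hypothetical sketch: initialize arithmetic-decoder context states from
# initialization parameters placed near the top of the slice header.
# The formula and the 6-bit state clipping are illustrative stand-ins
# for a real CABAC context-initialization procedure.

def init_context(init_param: int, slice_qp: int) -> int:
    """Derive one context state from an initialization parameter and the
    slice QP, clipped to a 6-bit state."""
    state = (init_param * slice_qp) >> 4
    return max(0, min(63, state))

def init_arithmetic_decoder(init_params, slice_qp):
    # "Control unit" role: initialize every context with reference to the
    # received initialization parameters before arithmetic decoding starts.
    return [init_context(p, slice_qp) for p in init_params]

contexts = init_arithmetic_decoder([20, 35, 50], slice_qp=26)
print(contexts)  # [32, 56, 63] — each state clipped into 0..63
```

Since these parameters are at the top of the header, this initialization can run while the remaining header fields are still being parsed.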
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
1. First Embodiment (Image Encoding Device)
2. Second Embodiment (Image Decoding Device)
3. Third Embodiment (Multi-View Image Encoding / Multi-View Image Decoding Device)
4. Fourth Embodiment (Hierarchical Image Encoding / Hierarchical Image Decoding Device)
5. Fifth Embodiment (Computer)
6. Sixth Embodiment (Application Examples)
[Configuration Example of Image Encoding Device]
FIG. 1 shows the configuration of an embodiment of an image encoding device as an image processing apparatus to which the present disclosure is applied.
Next, each unit of the image encoding device 100 will be described. FIG. 2 is a block diagram showing a configuration example of the lossless encoding unit 106.
FIG. 3 is a diagram showing a configuration example of part of the slice header and slice data of a conventional stream.
FIG. 4 is a diagram showing a configuration example of part of the slice header and slice data of a stream according to the present technology. In the example of FIG. 4, the hatched portions indicate the differences from the conventional stream described above with reference to FIG. 3.
FIG. 5 is a diagram showing another configuration example of part of the slice header and slice data of a stream according to the present technology. In the example of FIG. 5, the hatched portions indicate the differences from the conventional stream described above with reference to FIG. 3.
FIG. 6 is a diagram showing a configuration example of part of the syntax of a stream according to the present technology.
FIG. 8 is a timing chart for the case of processing the conventional stream described above with reference to FIG. 3. The example of FIG. 8 shows, for example, decoding processing. On the decoding side, as on the encoding side, the decoding processing is divided between firmware and hardware. The left side represents the variable-length and fixed-length decoding processing by the FW (firmware), and the right side represents the arithmetic decoding processing by the HW (hardware).
FIG. 9 is a timing chart for the case of processing the stream of the present technology described above with reference to FIG. 4. The example of FIG. 9 shows, for example, decoding processing. As will be described later with reference to FIG. 15, on the decoding side as well, the decoding processing is divided between firmware and hardware. The left side represents the variable-length and fixed-length decoding processing by the FW (firmware), and the right side represents the arithmetic decoding processing by the HW (hardware).
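The benefit suggested by the timing charts of FIG. 8 and FIG. 9 can be sketched in code. The following is a minimal, hypothetical simulation (not from the patent) of why collecting the CABAC-coded syntax elements after the VLC/fixed-length elements helps: the firmware can finish its parsing and hand the arithmetic-coded portion to the hardware in a single hand-off, instead of alternating between the two engines. All names and the layouts are illustrative.

```python
# Hypothetical illustration: count the number of FW<->HW hand-offs needed
# to parse a slice header, for an interleaved layout (as in FIG. 8) versus
# a layout in which CABAC-coded parameters are grouped together (FIG. 9).

def count_handoffs(layout):
    """Each element is 'VLC' (parsed by firmware) or 'CABAC' (decoded by
    hardware); every switch between the two engines costs one hand-off."""
    handoffs = 0
    for prev, cur in zip(layout, layout[1:]):
        if prev != cur:
            handoffs += 1
    return handoffs

# Conventional stream: CABAC-coded parameters interleaved in the header.
interleaved = ["VLC", "CABAC", "VLC", "CABAC", "VLC", "CABAC"]

# Present technique: VLC parameters first, CABAC parameters grouped after.
grouped = ["VLC", "VLC", "VLC", "CABAC", "CABAC", "CABAC"]

print(count_handoffs(interleaved))  # 5 switches between FW and HW
print(count_handoffs(grouped))      # 1 switch: FW finishes, then HW runs
```

Fewer hand-offs means the firmware and hardware stages can run as a simple two-stage pipeline, which is the speed-up the timing charts depict.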
Next, the flow of each process executed by the image encoding device 100 described above will be explained. First, an example of the overall flow of the encoding process performed by the image encoding device 100 will be described with reference to the flowchart of FIG. 12.
Next, an example of the flow of the encoding process performed by the lossless encoding unit 106 in step S116 of FIG. 12 will be described with reference to the flowchart of FIG. 13.
[Image Decoding Device]
FIG. 14 shows the configuration of an embodiment of an image decoding device as an image processing apparatus to which the present disclosure is applied. The image decoding device 200 shown in FIG. 14 is a decoding device corresponding to the image encoding device 100 of FIG. 1.
Next, each unit of the image decoding device 200 will be described. FIG. 15 is a block diagram showing a configuration example of the lossless decoding unit 202.
Next, the flow of each process executed by the image decoding device 200 described above will be explained. First, an example of the overall flow of the decoding process of the image decoding device 200 will be described with reference to the flowchart of FIG. 16.
Next, an example of the flow of the decoding process performed by the lossless decoding unit 202 in step S202 of FIG. 16 will be described with reference to the flowchart of FIG. 17.
FIG. 18 is a diagram showing an example of the syntax of the slice header generated by the image encoding device 100. The number at the left end of each line is a line number added for explanation.
FIG. 21 is a diagram showing another example of the syntax of the slice header generated by the image encoding device 100. The number at the left end of each line is a line number added for explanation.
FIG. 22 is a diagram showing still another example of the syntax of the slice header generated by the image encoding device 100. The number at the left end of each line is a line number added for explanation.
FIG. 23 is a diagram showing an example of the syntax of alf_cu_control_param(). The number at the left end of each line is a line number added for explanation.
[Application to Multi-View Image Encoding / Multi-View Image Decoding]
The series of processes described above can be applied to multi-view image encoding and multi-view image decoding. FIG. 24 shows an example of a multi-view image encoding scheme.
FIG. 25 is a diagram showing a multi-view image encoding device that performs the multi-view image encoding described above. As shown in FIG. 25, the multi-view image encoding device 600 includes an encoding unit 601, an encoding unit 602, and a multiplexing unit 603.
FIG. 26 is a diagram showing a multi-view image decoding device that performs the multi-view image decoding described above. As shown in FIG. 26, the multi-view image decoding device 610 includes a demultiplexing unit 611, a decoding unit 612, and a decoding unit 613.
[Application to Hierarchical Image Encoding / Hierarchical Image Decoding]
The series of processes described above can be applied to hierarchical image encoding and hierarchical image decoding. FIG. 27 shows an example of a hierarchical image encoding scheme.
FIG. 28 is a diagram showing a hierarchical image encoding device that performs the hierarchical image encoding described above. As shown in FIG. 28, the hierarchical image encoding device 620 includes an encoding unit 621, an encoding unit 622, and a multiplexing unit 623.
FIG. 29 is a diagram showing a hierarchical image decoding device that performs the hierarchical image decoding described above. As shown in FIG. 29, the hierarchical image decoding device 630 includes a demultiplexing unit 631, a decoding unit 632, and a decoding unit 633.
[Computer]
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed on a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
[First Application Example: Television Receiver]
FIG. 25 shows an example of a schematic configuration of a television apparatus to which the above-described embodiment is applied. The television apparatus 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, and a bus 912.
FIG. 26 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied. The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a multiplexing/demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
FIG. 27 shows an example of a schematic configuration of a recording/reproducing device to which the above-described embodiment is applied. The recording/reproducing device 940, for example, encodes audio data and video data of a received broadcast program and records them on a recording medium. The recording/reproducing device 940 may also, for example, encode audio data and video data acquired from another device and record them on a recording medium. The recording/reproducing device 940 also reproduces data recorded on a recording medium on a monitor and speakers, for example, in accordance with a user instruction. At that time, the recording/reproducing device 940 decodes the audio data and the video data.
FIG. 28 shows an example of a schematic configuration of an imaging device to which the above-described embodiment is applied. The imaging device 960 captures an image of a subject to generate an image, encodes the image data, and records it on a recording medium.
(1) An image processing apparatus including:
a receiving unit configured to receive an encoded stream in which arithmetic coding parameters subjected to arithmetic coding processing are collectively arranged in the syntax of an encoded stream obtained by encoding image data in units having a hierarchical structure, and the arithmetic coding parameters; and
a decoding unit configured to arithmetically decode the arithmetic coding parameters received by the receiving unit, and to decode the encoded stream received by the receiving unit using the arithmetically decoded arithmetic coding parameters.
(2) The image processing apparatus according to (1) above, wherein
coding parameters subjected to variable-length coding processing or fixed-length coding processing are collectively arranged in the syntax of the encoded stream,
the receiving unit receives the coding parameters from the encoded stream, and
the decoding unit decodes the coding parameters received by the receiving unit and decodes the encoded stream using the decoded coding parameters.
(3) The image processing apparatus according to (2) above, wherein the arithmetic coding parameters are arranged after the coding parameters in the syntax of the encoded stream.
(4) The image processing apparatus according to any one of (1) to (3) above, wherein
initialization parameters used when initializing arithmetic coding processing or arithmetic decoding processing are collectively arranged in the syntax of the encoded stream,
the receiving unit receives the initialization parameters from the encoded stream, and
the apparatus further includes a control unit configured to perform control so as to initialize the arithmetic decoding processing with reference to the initialization parameters received by the receiving unit.
(5) The image processing apparatus according to any one of (1) to (4) above, wherein the arithmetic coding parameters are parameters that control encoding processing or decoding processing at the picture level or the slice level.
(6) The image processing apparatus according to any one of (1) to (5) above, wherein the arithmetic coding parameters are parameters of filters used when performing encoding processing or decoding processing.
(7) The image processing apparatus according to (6) above, wherein
the parameters of the adaptive loop filter and the parameters of the adaptive offset filter are collectively arranged at the top of slice data of the encoded stream, and
the receiving unit receives the parameters of the adaptive loop filter and the parameters of the adaptive offset filter from the top of the slice data of the encoded stream.
(8) The image processing apparatus according to (6) above, wherein
the parameters of the adaptive loop filter and the parameters of the adaptive offset filter are collectively arranged at the end of the slice header of the encoded stream, and
the receiving unit receives the parameters of the adaptive loop filter and the parameters of the adaptive offset filter from the end of the slice data of the encoded stream.
(9) The image processing apparatus according to (4) above, wherein
the initialization parameters are arranged near the top of the slice header of the encoded stream, and
the receiving unit receives the initialization parameters from near the top of the slice header of the encoded stream.
(10) An image processing method in which an image processing apparatus:
receives an encoded stream in which arithmetic coding parameters subjected to arithmetic coding processing are collectively arranged in the syntax of an encoded stream obtained by encoding image data in units having a hierarchical structure, and the arithmetic coding parameters;
arithmetically decodes the received arithmetic coding parameters; and
decodes the received encoded stream using the arithmetically decoded arithmetic coding parameters.
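The decoding-side method of (10) can be sketched as follows. The container layout here is invented purely for illustration (a length field, a collectively arranged arithmetic-coded parameter block, then the payload); the patent does not define this byte layout, and `arithmetic_decode` is a placeholder for a real CABAC decoder.

```python
# Toy sketch of the decoding method of (10). The stream format is invented
# for illustration: [4-byte length][arithmetic-coded parameter block][payload].
import struct

def receive_and_decode(stream: bytes):
    # "Receiving unit": split the collectively arranged parameter block
    # from the rest of the encoded stream in a single read.
    (param_len,) = struct.unpack_from(">I", stream, 0)
    param_block = stream[4:4 + param_len]
    payload = stream[4 + param_len:]

    # "Decoding unit", step 1: arithmetically decode the parameters.
    params = arithmetic_decode(param_block)

    # Step 2: decode the payload using the decoded parameters.
    return decode_payload(payload, params)

def arithmetic_decode(block: bytes) -> dict:
    # Placeholder: pretend the block holds one byte per filter flag.
    return {"alf_flag": block[0], "sao_flag": block[1]}

def decode_payload(payload: bytes, params: dict) -> bytes:
    # Placeholder for the actual slice decoding using the parameters.
    return payload

stream = struct.pack(">I", 2) + bytes([1, 0]) + b"slice-data"
print(receive_and_decode(stream))  # b'slice-data'
```

Because the parameter block is contiguous, it can be handed to an arithmetic-decoding engine in one piece before payload decoding begins.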
(11) An image processing apparatus including:
an encoding unit configured to encode image data in units having a hierarchical structure to generate an encoded stream;
an arrangement unit configured to collectively arrange, in the syntax of the encoded stream generated by the encoding unit, arithmetic coding parameters to be subjected to arithmetic coding processing; and
a transmission unit configured to transmit the encoded stream generated by the encoding unit and the arithmetic coding parameters arranged by the arrangement unit.
(12) The image processing apparatus according to (11) above, wherein
the arrangement unit collectively arranges coding parameters to be subjected to variable-length coding processing or fixed-length coding processing, and
the transmission unit transmits the coding parameters arranged by the arrangement unit.
(13) The image processing apparatus according to (12) above, wherein the arrangement unit arranges the arithmetic coding parameters after the coding parameters.
(14) The image processing apparatus according to (13) above, wherein
the arrangement unit collectively arranges initialization parameters used when initializing arithmetic coding processing or arithmetic decoding processing, and
the transmission unit transmits the initialization parameters arranged by the arrangement unit.
(15) The image processing apparatus according to any one of (11) to (14) above, wherein the arithmetic coding parameters are parameters that control encoding processing or decoding processing at the picture level or the slice level.
(16) The image processing apparatus according to any one of (11) to (15) above, wherein the arithmetic coding parameters are parameters of filters used when performing encoding processing or decoding processing.
(17) The image processing apparatus according to (16) above, wherein
the arrangement unit collectively arranges the parameters of the adaptive loop filter and the parameters of the adaptive offset filter at the top of slice data of the encoded stream, and
the transmission unit transmits the parameters of the adaptive loop filter and the parameters of the adaptive offset filter arranged by the arrangement unit.
(18) The image processing apparatus according to (16) above, wherein
the arrangement unit collectively arranges the parameters of the adaptive loop filter and the parameters of the adaptive offset filter at the end of the slice header of the encoded stream, and
the transmission unit transmits the parameters of the adaptive loop filter and the parameters of the adaptive offset filter arranged by the arrangement unit.
(19) The image processing apparatus according to (14) above, wherein the arrangement unit arranges the initialization parameters near the top of the slice header of the encoded stream.
(20) An image processing method in which an image processing apparatus:
encodes image data in units having a hierarchical structure to generate an encoded stream;
collectively arranges, in the syntax of the generated encoded stream, arithmetic coding parameters to be subjected to arithmetic coding processing; and
transmits the generated encoded stream and the arranged arithmetic coding parameters.
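The encoder-side arrangement of (11) to (20) can be sketched with a minimal, hypothetical "arrangement unit". The field names and the flat-list representation are invented for illustration (the patent defines the syntax ordering, not a concrete data structure): initialization parameters go near the top of the slice header, VLC/fixed-length-coded parameters next, and the parameters to be arithmetic-coded (ALF/SAO filter parameters) are grouped after them.

```python
# Hypothetical "arrangement unit": build a slice header in which the
# arithmetic-coded parameters come after all VLC/fixed-length parameters,
# as in items (13), (18), and (19). Field names are invented.

def arrange_slice_header(init_params, vlc_params, alf_params, sao_params):
    header = []
    header.extend(init_params)   # near the top: initialization parameters (19)
    header.extend(vlc_params)    # VLC / fixed-length coded parameters (12)
    # grouped at the end: parameters to be arithmetic-coded (13), (17), (18)
    header.extend(alf_params)
    header.extend(sao_params)
    return header

header = arrange_slice_header(
    init_params=["cabac_init_idc"],
    vlc_params=["slice_type", "pic_parameter_set_id"],
    alf_params=["alf_flag", "alf_coeff"],
    sao_params=["sao_flag", "sao_offset"],
)
print(header)
```

With this ordering, everything up to the last VLC parameter can be produced or consumed by firmware, and the arithmetic-coded tail handled by hardware, matching the grouped layout of FIG. 4.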
Claims (20)
- An image processing apparatus comprising:
a receiving unit configured to receive an encoded stream in which arithmetic coding parameters subjected to arithmetic coding processing are collectively arranged in the syntax of an encoded stream obtained by encoding image data in units having a hierarchical structure, and the arithmetic coding parameters; and
a decoding unit configured to arithmetically decode the arithmetic coding parameters received by the receiving unit, and to decode the encoded stream received by the receiving unit using the arithmetically decoded arithmetic coding parameters.
- The image processing apparatus according to claim 1, wherein
coding parameters subjected to variable-length coding processing or fixed-length coding processing are collectively arranged in the syntax of the encoded stream,
the receiving unit receives the coding parameters from the encoded stream, and
the decoding unit decodes the coding parameters received by the receiving unit and decodes the encoded stream using the decoded coding parameters.
- The image processing apparatus according to claim 2, wherein the arithmetic coding parameters are arranged after the coding parameters in the syntax of the encoded stream.
- The image processing apparatus according to claim 3, wherein
initialization parameters used when initializing arithmetic coding processing or arithmetic decoding processing are collectively arranged in the syntax of the encoded stream,
the receiving unit receives the initialization parameters from the encoded stream, and
the apparatus further comprises a control unit configured to perform control so as to initialize the arithmetic decoding processing with reference to the initialization parameters received by the receiving unit.
- The image processing apparatus according to claim 4, wherein the arithmetic coding parameters are parameters that control encoding processing or decoding processing at the picture level or the slice level.
- The image processing apparatus according to claim 5, wherein the arithmetic coding parameters are parameters of filters used when performing encoding processing or decoding processing.
- The image processing apparatus according to claim 6, wherein
the parameters of the adaptive loop filter and the parameters of the adaptive offset filter are collectively arranged at the top of slice data of the encoded stream, and
the receiving unit receives the parameters of the adaptive loop filter and the parameters of the adaptive offset filter from the top of the slice data of the encoded stream.
- The image processing apparatus according to claim 6, wherein
the parameters of the adaptive loop filter and the parameters of the adaptive offset filter are collectively arranged at the end of the slice header of the encoded stream, and
the receiving unit receives the parameters of the adaptive loop filter and the parameters of the adaptive offset filter from the end of the slice data of the encoded stream.
- The image processing apparatus according to claim 4, wherein
the initialization parameters are arranged near the top of the slice header of the encoded stream, and
the receiving unit receives the initialization parameters from near the top of the slice header of the encoded stream.
- An image processing method in which an image processing apparatus:
receives an encoded stream in which arithmetic coding parameters subjected to arithmetic coding processing are collectively arranged in the syntax of an encoded stream obtained by encoding image data in units having a hierarchical structure, and the arithmetic coding parameters;
arithmetically decodes the received arithmetic coding parameters; and
decodes the received encoded stream using the arithmetically decoded arithmetic coding parameters.
- An image processing apparatus comprising:
an encoding unit configured to encode image data in units having a hierarchical structure to generate an encoded stream;
an arrangement unit configured to collectively arrange, in the syntax of the encoded stream generated by the encoding unit, arithmetic coding parameters to be subjected to arithmetic coding processing; and
a transmission unit configured to transmit the encoded stream generated by the encoding unit and the arithmetic coding parameters arranged by the arrangement unit.
- The image processing apparatus according to claim 11, wherein
the arrangement unit collectively arranges coding parameters to be subjected to variable-length coding processing or fixed-length coding processing, and
the transmission unit transmits the coding parameters arranged by the arrangement unit.
- The image processing apparatus according to claim 12, wherein the arrangement unit arranges the arithmetic coding parameters after the coding parameters.
- The image processing apparatus according to claim 13, wherein
the arrangement unit collectively arranges initialization parameters used when initializing arithmetic coding processing or arithmetic decoding processing, and
the transmission unit transmits the initialization parameters arranged by the arrangement unit.
- The image processing apparatus according to claim 14, wherein the arithmetic coding parameters are parameters that control encoding processing or decoding processing at the picture level or the slice level.
- The image processing apparatus according to claim 15, wherein the arithmetic coding parameters are parameters of filters used when performing encoding processing or decoding processing.
- The image processing apparatus according to claim 16, wherein
the arrangement unit collectively arranges the parameters of the adaptive loop filter and the parameters of the adaptive offset filter at the top of slice data of the encoded stream, and
the transmission unit transmits the parameters of the adaptive loop filter and the parameters of the adaptive offset filter arranged by the arrangement unit.
- The image processing apparatus according to claim 16, wherein
the arrangement unit collectively arranges the parameters of the adaptive loop filter and the parameters of the adaptive offset filter at the end of the slice header of the encoded stream, and
the transmission unit transmits the parameters of the adaptive loop filter and the parameters of the adaptive offset filter arranged by the arrangement unit.
- The image processing apparatus according to claim 14, wherein the arrangement unit arranges the initialization parameters near the top of the slice header of the encoded stream.
- An image processing method in which an image processing apparatus:
encodes image data in units having a hierarchical structure to generate an encoded stream;
collectively arranges, in the syntax of the generated encoded stream, arithmetic coding parameters to be subjected to arithmetic coding processing; and
transmits the generated encoded stream and the arranged arithmetic coding parameters.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013522968A JP5999449B2 (ja) | 2011-07-07 | 2012-06-29 | 画像処理装置および方法、プログラム、並びに記録媒体 |
US14/126,560 US10412417B2 (en) | 2011-07-07 | 2012-06-29 | Image processing device and method capable of performing an encoding process or a decoding process on an image at high speed |
CN201280032052.6A CN103650513B (zh) | 2011-07-07 | 2012-06-29 | 图像处理装置和方法 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-151186 | 2011-07-07 | ||
JP2011151186 | 2011-07-07 | ||
JP2011-180415 | 2011-08-22 | ||
JP2011180415 | 2011-08-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013005659A1 true WO2013005659A1 (ja) | 2013-01-10 |
Family
ID=47437012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/066647 WO2013005659A1 (ja) | 2011-07-07 | 2012-06-29 | 画像処理装置および方法 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10412417B2 (ja) |
JP (1) | JP5999449B2 (ja) |
CN (1) | CN103650513B (ja) |
WO (1) | WO2013005659A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022201808A1 (ja) * | 2021-03-23 | 2022-09-29 | 日本電気株式会社 | 映像符号化装置、映像符号化方法、映像符号化プログラム、及び非一時的記録媒体 |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5914962B2 (ja) * | 2010-04-09 | 2016-05-11 | ソニー株式会社 | 画像処理装置および方法、プログラム、並びに、記録媒体 |
CN109309846A (zh) * | 2017-07-26 | 2019-02-05 | 深圳市中兴微电子技术有限公司 | 一种基于可信任环境的视频安全播放系统及方法 |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3219403B2 (ja) * | 1989-05-10 | 2001-10-15 | キヤノン株式会社 | 画像記憶装置 |
US7068825B2 (en) * | 1999-03-08 | 2006-06-27 | Orametrix, Inc. | Scanning system and calibration method for capturing precise three-dimensional information of objects |
JP4241417B2 (ja) | 2004-02-04 | 2009-03-18 | 日本ビクター株式会社 | 算術復号化装置、および算術復号化プログラム |
KR20070006445A (ko) * | 2005-07-08 | 2007-01-11 | 삼성전자주식회사 | 하이브리드 엔트로피 부호화, 복호화 방법 및 장치 |
US7720096B2 (en) * | 2005-10-13 | 2010-05-18 | Microsoft Corporation | RTP payload format for VC-1 |
US8300265B2 (en) * | 2006-01-26 | 2012-10-30 | Brother Kogyo Kabushiki Kaisha | Data processing apparatus capable of calibrating print data to reduce ink consumption |
TWI322551B (en) * | 2006-07-28 | 2010-03-21 | Delta Electronics Inc | Fan for vehicle and its used motor |
US7443318B2 (en) * | 2007-03-30 | 2008-10-28 | Hong Kong Applied Science And Technology Research Institute Co. Ltd. | High speed context memory implementation for H.264 |
US8265144B2 (en) * | 2007-06-30 | 2012-09-11 | Microsoft Corporation | Innovations in video decoder implementations |
US8254455B2 (en) * | 2007-06-30 | 2012-08-28 | Microsoft Corporation | Computing collocated macroblock information for direct mode macroblocks |
US9648325B2 (en) * | 2007-06-30 | 2017-05-09 | Microsoft Technology Licensing, Llc | Video decoding implementations for a graphics processing unit |
JP4850806B2 (ja) * | 2007-10-01 | 2012-01-11 | キヤノン株式会社 | エントロピー符号化装置、エントロピー符号化方法およびコンピュータプログラム |
JP2009152990A (ja) * | 2007-12-21 | 2009-07-09 | Panasonic Corp | 画像符号化装置及び画像復号化装置 |
TW200943270A (en) * | 2008-04-03 | 2009-10-16 | Faraday Tech Corp | Method and related circuit for color depth enhancement of displays |
ITVI20100175A1 (it) * | 2010-06-21 | 2011-12-22 | St Microelectronics Pvt Ltd | Sistema per la codifica entropica di video h.264 per applicazioni hdtv in tempo reale |
EP2533537A1 (en) * | 2011-06-10 | 2012-12-12 | Panasonic Corporation | Transmission of picture size for image or video coding |
US9866859B2 (en) * | 2011-06-14 | 2018-01-09 | Texas Instruments Incorporated | Inter-prediction candidate index coding independent of inter-prediction candidate list construction in video coding |
US10230989B2 (en) * | 2011-06-21 | 2019-03-12 | Texas Instruments Incorporated | Method and apparatus for video encoding and/or decoding to prevent start code confusion |
US20130003823A1 (en) * | 2011-07-01 | 2013-01-03 | Kiran Misra | System for initializing an arithmetic coder |
US9398307B2 (en) * | 2011-07-11 | 2016-07-19 | Sharp Kabushiki Kaisha | Video decoder for tiles |
2012
- 2012-06-29 JP JP2013522968A patent/JP5999449B2/ja not_active Expired - Fee Related
- 2012-06-29 WO PCT/JP2012/066647 patent/WO2013005659A1/ja active Application Filing
- 2012-06-29 US US14/126,560 patent/US10412417B2/en not_active Expired - Fee Related
- 2012-06-29 CN CN201280032052.6A patent/CN103650513B/zh not_active Expired - Fee Related
Non-Patent Citations (2)
Title |
---|
NARROSCHKE, M. ET AL.: "Restart of CABAC after coding of ALF and SAO slice header data", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, 6TH MEETING, JCTVC-F399, 1 July 2011 (2011-07-01) * |
SUZUKI, T.: "Arithmetic coding in high level syntax", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, JCTVC-F377, 7 July 2011 (2011-07-07) * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022201808A1 (ja) * | 2021-03-23 | 2022-09-29 | 日本電気株式会社 | 映像符号化装置、映像符号化方法、映像符号化プログラム、及び非一時的記録媒体 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2013005659A1 (ja) | 2015-02-23 |
JP5999449B2 (ja) | 2016-09-28 |
CN103650513B (zh) | 2017-10-13 |
US20140226712A1 (en) | 2014-08-14 |
CN103650513A (zh) | 2014-03-19 |
US10412417B2 (en) | 2019-09-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6521013B2 (ja) | 画像処理装置および方法、プログラム、並びに記録媒体 | |
JP5942990B2 (ja) | 画像処理装置および方法 | |
JP5907367B2 (ja) | 画像処理装置および方法、プログラム、並びに記録媒体 | |
AU2013281945B2 (en) | Image processing device and method | |
WO2014002821A1 (ja) | 画像処理装置および方法 | |
WO2014050676A1 (ja) | 画像処理装置および方法 | |
WO2013137047A1 (ja) | 画像処理装置および方法 | |
WO2014050731A1 (ja) | 画像処理装置および方法 | |
WO2013108688A1 (ja) | 画像処理装置および方法 | |
WO2013047326A1 (ja) | 画像処理装置および方法 | |
WO2014156708A1 (ja) | 画像復号装置および方法 | |
WO2013154026A1 (ja) | 画像処理装置および方法 | |
JP5999449B2 (ja) | 画像処理装置および方法、プログラム、並びに記録媒体 | |
WO2013051453A1 (ja) | 画像処理装置および方法 | |
JP6508553B2 (ja) | 画像処理装置および方法 | |
JP6341304B2 (ja) | 画像処理装置および方法、プログラム、並びに記録媒体 | |
JP6094838B2 (ja) | 画像処理装置および方法、プログラム、並びに記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12806935 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2013522968 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14126560 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12806935 Country of ref document: EP Kind code of ref document: A1 |