WO2011148622A1 - Video encoding method, video decoding method, video encoding device and video decoding device - Google Patents
Video encoding method, video decoding method, video encoding device and video decoding device
- Publication number: WO2011148622A1 (PCT/JP2011/002895)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- memory compression
- coefficient
- memory
- picture
- decoding
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
- H04N19/426—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements using memory downsizing methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
Definitions
- The present invention relates to coding of multimedia data and, more specifically, to a moving picture encoding method, a moving picture decoding method, and the like that use block-based memory compression and decompression.
- One method of reducing the memory access bandwidth in video encoder and decoder implementations is to compress the pictures that are used as reference pictures in inter-picture prediction.
- In Patent Document 1 and Patent Document 2, the memory access bandwidth is reduced using a down-conversion method that lowers the spatial resolution of the reference picture.
- The disadvantage of these methods is that the high-frequency components of the picture are lost during the down-conversion to the lower resolution and cannot be restored when the picture is up-converted back to the original spatial resolution. In such a system, the picture quality of the up-converted picture is degraded.
- Patent Document 3 uses a conversion method that includes a step of deleting one pixel out of every four image pixels, a step of reducing the number of bits per pixel by 1 bit, and a step of adding 3 bits of information describing how the missing pixel should be predicted. The disadvantage of this method is that, when the information of one pixel is lost, the prediction method may not be able to restore it correctly. Another drawback is that the image quality drops significantly when compressing at higher ratios.
- Non-Patent Document 1 describes a block scalar quantization method. The minimum and maximum pixel values are calculated and stored for each block of pixels. All pixels in the block are then uniformly quantized between the calculated minimum and maximum pixel values and stored.
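As an illustration of this kind of block scalar quantization, the sketch below quantizes each pixel of a block uniformly between the block minimum and maximum. The block length, the 4-bit index per pixel, and the function names are assumptions for illustration only, not the method of Non-Patent Document 1 itself.

```c
#include <stdint.h>

/* Illustrative block scalar quantization: store the block minimum and
 * maximum and a uniform 4-bit index per pixel (values are assumptions). */
void block_minmax_compress(const uint8_t *px, int n,
                           uint8_t *min_out, uint8_t *max_out, uint8_t *idx_out)
{
    uint8_t lo = px[0], hi = px[0];
    for (int i = 1; i < n; i++) {
        if (px[i] < lo) lo = px[i];
        if (px[i] > hi) hi = px[i];
    }
    int range = hi - lo;
    for (int i = 0; i < n; i++)   /* uniform quantization between lo and hi */
        idx_out[i] = range ? (uint8_t)(((px[i] - lo) * 15 + range / 2) / range) : 0;
    *min_out = lo;
    *max_out = hi;
}

void block_minmax_restore(const uint8_t *idx, int n,
                          uint8_t lo, uint8_t hi, uint8_t *px_out)
{
    int range = hi - lo;
    for (int i = 0; i < n; i++)
        px_out[i] = (uint8_t)(lo + (idx[i] * range + 7) / 15);
}
```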
- Non-Patent Document 2 describes a pixel quantization method that compresses groups of eight pixel samples using a one-dimensional DPCM prediction structure and a nonlinear quantizer.
- The disadvantage of such pixel quantization methods is that clearly visible quality degradation always occurs when the compression ratio is large. In addition, these methods cannot efficiently compress groups of image samples whose pixel values vary greatly.
- Non-Patent Document 1: "Video coding using compressed reference frames", Texas Instruments Inc., VCEG-AE17.doc, ITU-T Study Group 16 Q.6, Video Coding Experts Group (VCEG), Marrakech, Morocco, 15-16 January 2007
- Non-Patent Document 2: "Memory Compression Method for Avoiding Perceivable Distortion Due to Intra-Prediction in H.264 Decoders", NEC Corporation, Proc. of IEEE International Conference on Consumer Electronics, 2007
- The present invention has been made in view of such problems, and its objective is to provide a video encoding method, a video decoding method, and the like that can flexibly and easily handle a wide range of compression ratios.
- The moving picture coding method according to the present invention is a moving picture coding method with block-based memory compression and decompression, and includes: encoding a picture using inter-picture prediction; entropy coding the coded picture; decoding the coded picture using inter-picture prediction; determining a memory compression parameter set; writing the memory compression parameter set into a header of a compressed video stream that includes the coded picture; compressing the decoded picture into fixed-size data blocks using the memory compression parameter set; and restoring the data blocks of the decoded picture to retrieve the image samples required for inter-picture prediction in the coding of subsequent pictures.
- The video decoding method according to the present invention is a video decoding method with block-based memory compression and decompression, and includes: analyzing a header of a compressed video stream to obtain a memory compression parameter set that configures the memory compression process; entropy decoding the compressed video stream; decoding a picture from the compressed video stream using inter-picture prediction; compressing the decoded picture into fixed-size data blocks using the analyzed memory compression parameter set; restoring the data blocks of the decoded picture to generate the image samples required for inter-picture prediction in the decoding of subsequent pictures; and restoring the data blocks of the decoded picture to generate the image samples to be output.
- The present invention uses a configurable, fixed-length, block-based compression scheme.
- One new compression method uses a transform process, bit-plane coding, and variable-length codes that do not require a lookup table, and adjusts all bits of a code block to the target compression ratio easily and with low complexity.
- Another new compression method uses a configurable bit quantization scheme. These new compression methods can be configured flexibly so that they adapt accurately to different compression ratios and to the needs of the application.
- A novel aspect of the present invention is that the configuration settings of the new memory compression unit, which is used in the video encoder to reduce the memory access bandwidth, are conveyed by signals in the compressed video stream to the video decoder that decodes the stream.
- The video decoder reads the configuration settings from the compressed video stream and configures a memory compression unit that compresses reference pictures in the same manner as the video encoder.
- The video encoder can configure the memory compression unit so as to reduce the complexity of the video encoder or decoder.
- The configuration settings signaled to the video decoder include, as parameters, the target compression ratio of a code block, the spatial dimensions of the image block, the chroma component packing mode, the perceptual frequency weighting of the coefficient block, the DC prediction value, the frequency removal pattern of the coefficient block, and the coefficient scanning pattern.
- The present invention can be realized not only as such a moving picture encoding method and moving picture decoding method, but also as a moving picture encoding device, a moving picture decoding device, an integrated circuit, a program that causes a computer to process moving pictures by these methods, and a recording medium that stores the program.
- Because the present invention improves the picture quality of the reference picture at a fixed compression ratio, the effect of the present invention is improved coding efficiency.
- Another advantage of the present invention is that the video encoder and the video decoder can interoperate even if their complexity settings differ.
- A low-complexity video encoder (with significantly reduced memory access bandwidth) can produce a compressed video bitstream that can be decoded by a video decoder which accurately reconstructs the reference pictures used in both the video encoder and the decoder. This is achieved by configuring the memory compression unit of the video decoder via signals encoded in the compressed video stream.
- FIG. 1 is a flowchart showing a moving image encoding process using the present invention.
- FIG. 2 is a flowchart showing a moving picture decoding process using the present invention.
- FIG. 3 is a block diagram showing an example of a video encoder apparatus using the present invention.
- FIG. 4 is a block diagram showing an example of a video decoder apparatus using the present invention.
- FIG. 5 is a flowchart showing the memory compression processing of the present invention.
- FIG. 6 is a flowchart showing the memory restoration processing of the present invention.
- FIG. 7 is a block diagram showing an example of a memory compression unit using the present invention.
- FIG. 8 is a block diagram showing an example of an apparatus of a memory restoration unit using the present invention.
- FIG. 9 is a flowchart showing the bit plane encoding process of the memory compression unit in the video encoder of the present invention.
- FIG. 10 is a flowchart showing a bit plane encoding process of the memory compression unit in the video decoder of the present invention.
- FIG. 11 is a flowchart showing the bit plane decoding process of the memory restoration unit in the video encoder and decoder of the present invention.
- FIG. 12A is a schematic diagram showing the position of the chroma component packing mode parameter in the sequence header of the compressed video stream.
- FIG. 12B is a schematic diagram showing the position of the chroma component packing mode parameter in the picture header of the compressed video stream.
- FIG. 12C is a schematic diagram illustrating that chroma component packing mode parameters can be derived from a lookup table based on the encoded profile parameters and level parameters in the sequence header of the compressed video stream.
- FIG. 13 is a flowchart showing a bit plane encoding process for one component.
- FIG. 14 is a schematic diagram illustrating conversion processing from coefficient values to bit planes.
- FIG. 15 is a schematic diagram showing a data structure of a compression unit for one component.
- FIG. 16 is a flowchart showing a bit plane encoding process for three components.
- FIG. 17 is a schematic diagram illustrating a data structure of a compression unit for three components.
- FIG. 18 is a flowchart showing a bit-plane decoding process for one component.
- FIG. 19 is a flowchart showing a bit-plane decoding process for three components.
- FIG. 20 is a flowchart showing an encoding process for one bit plane.
- FIG. 21 is a flowchart showing a decoding process for one bit plane.
- FIG. 22A is a schematic diagram illustrating an example of a variable length code used for encoding the run parameter and the most significant bit number.
- FIG. 22B is a schematic diagram illustrating an example of a variable length code used for encoding the run parameter and the most significant bit number.
- FIG. 23 is a schematic diagram illustrating an example of encoding one bit plane of four coefficients.
- FIG. 24 is a flowchart showing an explicit scanning process for the memory compression unit and the memory restoration unit in the video encoder of the present invention.
- FIG. 25 is a flowchart showing an adaptive scanning process for the memory compression unit and the memory restoration unit in the video decoder of the present invention.
- FIG. 26A is a schematic diagram showing the positions of the explicit signal scanning pattern flag parameter and the scanning pattern value parameter in the sequence header of the compressed video stream.
- FIG. 26B is a schematic diagram showing the positions of the explicit signal scanning pattern flag parameter and the scanning pattern value parameter in the picture header of the compressed video stream.
- FIG. 27 is a flowchart showing a selective scanning process for the memory compression unit and the memory restoration unit in the video encoder of the present invention.
- FIG. 28 is a flowchart showing the selective scanning process for the memory compression unit and the memory restoration unit in the video decoder of the present invention.
- FIG. 29A is a schematic diagram showing the positions of the coefficient removal presence flag parameter and the coefficient removal flag parameter in the sequence header of the compressed video stream.
- FIG. 29B is a schematic diagram illustrating the positions of the coefficient removal presence flag parameter and the coefficient removal flag parameter in the picture header of the compressed video stream.
- FIG. 30 is a flowchart showing explicit DC prediction processing for the memory compression unit and the memory restoration unit in the video encoder of the present invention.
- FIG. 31 is a flowchart showing an adaptive DC prediction process for the memory compression unit and the memory restoration unit in the video decoder of the present invention.
- FIG. 32A is a schematic diagram showing the positions of the explicit signal DC flag parameter and the DC prediction value parameter in the sequence header of the compressed video stream.
- FIG. 32B is a schematic diagram illustrating the positions of the explicit signal DC flag parameter and the DC prediction value parameter in the picture header of the compressed video stream.
- FIG. 33 is a flowchart showing coefficient bit shift processing for the memory compression unit and the memory restoration unit in the video encoder of the present invention.
- FIG. 34 is a flowchart showing a coefficient bit shift process for the memory compression unit and the memory restoration unit in the video decoder of the present invention.
- FIG. 35A is a schematic diagram showing the positions of the coefficient weight value presence flag parameter and the coefficient weight value parameter in the sequence header of the compressed video stream.
- FIG. 35B is a schematic diagram illustrating the positions of the coefficient weight value presence flag parameter and the coefficient weight value parameter in the picture header of the compressed video stream.
- FIG. 36 is a flowchart showing an adaptive conversion process for the memory compression unit in the video encoder of the present invention.
- FIG. 37 is a flowchart showing an adaptive conversion process for the memory compression unit in the video decoder of the present invention.
- FIG. 38 is a flowchart showing an adaptive inverse transform process for the memory restoration unit in the video encoder and decoder of the present invention.
- FIG. 39A is a schematic diagram showing the position of the memory compression block size parameter in the sequence header of the compressed video stream.
- FIG. 39B is a schematic diagram showing the position of the memory compression block size parameter in the picture header of the compressed video stream.
- FIG. 39C is a schematic diagram illustrating that a memory compressed block size parameter can be derived from a lookup table based on the encoded profile parameter and level parameter in the sequence header of the compressed video stream.
- FIG. 40 is a flowchart showing an adaptive compression size bit-plane encoding process for the memory compression unit in the video encoder of the present invention.
- FIG. 41 is a flowchart showing an adaptive compression size bit-plane encoding process for the memory compression unit in the video decoder of the present invention.
- FIG. 42A is a schematic diagram showing the position of the target compressed data size parameter in the sequence header of the compressed video stream.
- FIG. 42B is a schematic diagram illustrating the position of the target compressed data size parameter in the picture header of the compressed video stream.
- FIG. 42C is a schematic diagram illustrating that the target compressed data size parameter can be derived from the search table based on the encoded profile parameter and the level parameter in the sequence header of the compressed video stream.
- FIG. 42D is a schematic diagram illustrating that the target compressed data size parameter can be derived from the search table based on the encoded picture width parameter and picture height parameter in the sequence header of the compressed video stream.
- FIG. 43 is a flowchart showing adaptive pixel bit shift and clipping processing for the memory restoration unit in the video encoder of the present invention.
- FIG. 44 is a flowchart showing adaptive pixel bit shift and clipping processing for the memory restoration unit in the video decoder of the present invention.
- FIG. 45A is a schematic diagram showing the position of the target bit accuracy parameter in the sequence header of the compressed video stream.
- FIG. 45B is a schematic diagram illustrating the position of the target bit accuracy parameter in the picture header of the compressed video stream.
- FIG. 45C is a schematic diagram illustrating that the target bit accuracy parameter can be derived from the search table based on the encoded profile parameter and the level parameter in the sequence header of the compressed video stream.
- FIG. 46A is a conceptual diagram showing the position of the memory compression method selection parameter in the sequence header of the compressed video stream.
- FIG. 46B is a conceptual diagram showing the position of the memory compression method selection parameter in the picture header of the compressed video stream.
- FIG. 46C is a conceptual diagram showing that a memory compression scheme selection parameter can be derived from a search table based on the encoded profile parameter and level parameter in the sequence header of the compressed video stream.
- FIG. 46D is a conceptual diagram illustrating that a memory compression scheme selection parameter can be derived from a search table based on the encoded picture width parameter and picture height parameter in the sequence header of the compressed video stream.
- FIG. 47 is a flowchart showing an adaptive memory compression method in the video decoding apparatus of the present invention.
- FIG. 48 is a flowchart showing an adaptive memory compression method in the video encoding apparatus of the present invention.
- FIG. 49 is a flowchart showing an adaptive memory restoration method in the video encoding device and video decoding device of the present invention.
- FIG. 50 is an overall configuration diagram of a content supply system that implements a content distribution service.
- FIG. 51 is an overall configuration diagram of a digital broadcasting system.
- FIG. 52 is a block diagram illustrating an example of a configuration of a television.
- FIG. 53 is a block diagram illustrating an example of a configuration of an information reproducing / recording unit that reads and writes information from and on a recording medium that is an optical disk.
- FIG. 54 is a diagram showing an example of the configuration of a recording medium that is an optical disk.
- FIG. 55A is a diagram illustrating an example of a mobile phone.
- FIG. 55B is a block diagram illustrating an example of a configuration of a mobile phone.
- FIG. 56 is a diagram showing a structure of multiplexed data.
- FIG. 57 is a diagram schematically showing how each stream is multiplexed in the multiplexed data.
- FIG. 58 is a diagram showing in more detail how the video stream is stored in the PES packet sequence.
- FIG. 59 is a diagram showing the structure of TS packets and source packets in multiplexed data.
- FIG. 60 is a diagram illustrating a data structure of the PMT.
- FIG. 61 shows an internal structure of multiplexed data information.
- FIG. 62 shows the internal structure of stream attribute information.
- FIG. 63 is a diagram showing steps for identifying video data.
- FIG. 64 is a block diagram illustrating an example of a configuration of an integrated circuit that implements the moving picture encoding method and the moving picture decoding method according to each embodiment.
- FIG. 65 is a diagram showing a configuration for switching drive frequencies.
- FIG. 66 shows steps for identifying video data and switching between driving frequencies.
- FIG. 67 is a diagram showing an example of a search table in which video data standards are associated with drive frequencies.
- FIG. 68A is a diagram illustrating an example of a configuration for sharing a module of a signal processing unit.
- FIG. 68B is a diagram showing another example of a configuration for sharing a module of the signal processing unit.
- The present invention describes a configurable memory compression and decompression process used to implement video encoders and decoders with reduced memory access bandwidth and memory storage size.
- FIG. 1 is a flowchart showing a moving image encoding process using the present invention.
- Module 100 encodes a picture using inter-picture prediction processing. Thereafter, in the module 102, the encoded picture is entropy encoded. Module 104 then decodes and reconstructs the picture using inter-picture prediction.
- In module 106, a memory compression parameter set is determined. These memory compression parameters are determined by the video encoder in order to reduce implementation complexity and improve the compression efficiency of the memory compression process. Then, module 108 encodes the determined memory compression parameters in the header of the compressed video stream.
- The memory compression parameters include one or more of the following parameters, which are used for the memory compression and decompression processing of the video decoder:
- the width and the height of the input image block;
- a flag indicating whether an explicit coefficient scanning pattern is used in the scanning process, and the scanning pattern when the explicit scanning pattern is used;
- the chroma component packing mode, which indicates whether a block of compressed data contains one component or three components;
- a flag indicating whether some coefficients are explicitly removed in the memory compression process, and which frequency coefficients are removed when explicit coefficient removal is used;
- a flag indicating whether the importance of specific frequency coefficients is changed by sending weights, and the coefficient weights when frequency coefficient weighting is used;
- the target compressed data size;
- the target bit accuracy of the restored samples;
- the selection among multiple compression methods.
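As a hedged illustration, these signaled settings could be collected into a structure such as the sketch below; the field names, types, and array sizes are assumptions for illustration, not the syntax used in the compressed video stream.

```c
#include <stdint.h>

/* Hedged sketch of a memory compression parameter set; all names and
 * sizes are illustrative assumptions. */
#define MAX_COEF 64

typedef struct {
    int     block_width;                 /* width of the input image block      */
    int     block_height;                /* height of the input image block     */
    int     explicit_scan_flag;          /* 1: scanning pattern is signaled     */
    uint8_t scan_pattern[MAX_COEF];      /* scanning pattern when explicit      */
    int     chroma_packing_mode;         /* 0: one component per data block,
                                            otherwise three components          */
    int     coef_removal_present_flag;   /* 1: some coefficients are removed    */
    uint8_t coef_removal_flag[MAX_COEF]; /* 1: coefficient position removed     */
    int     coef_weight_present_flag;    /* 1: frequency weights are sent       */
    int8_t  coef_weight[MAX_COEF];       /* per-coefficient bit-shift weight    */
    int     target_compressed_size;      /* target compressed data size         */
    int     target_bit_accuracy;         /* bit accuracy of restored samples    */
    int     compression_method;          /* selection among compression methods */
} MemCompressionParams;
```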
- The reconstructed picture is compressed into fixed-size data blocks by the memory compression process, using one or more of the parameters encoded in the header of the compressed video stream. Then, in module 112, the data blocks of the reconstructed picture are restored by the memory restoration process, using one or more of the parameters encoded in the header of the compressed video stream. The restored image samples are used for the inter-picture prediction process when encoding subsequent pictures.
- FIG. 2 is a flowchart showing a moving picture decoding process using the present invention.
- the module 200 analyzes the header of the compressed video stream and acquires parameters necessary for the memory compression process.
- The memory compression parameters include one or more of the following parameters used for the memory compression and decompression processing:
- the width and the height of the input image block;
- a flag indicating whether an explicit coefficient scanning pattern is used in the scanning process, and the scanning pattern when the explicit scanning pattern is used;
- the chroma component packing mode, which indicates whether a block of compressed data contains one component or three components;
- a flag indicating whether some coefficients are explicitly removed in the memory compression process, and which frequency coefficients are removed when explicit coefficient removal is used;
- a flag indicating whether the importance of specific frequency coefficients is changed by sending weights, and the coefficient weights when frequency coefficient weighting is used;
- the target compressed data size;
- the target bit accuracy of the restored samples;
- the selection among multiple compression methods.
- the module 202 entropy-decodes the compressed video stream, and the module 204 decodes the picture using inter-picture prediction processing.
- the module 206 compresses the decoded picture into a fixed-size data block by memory compression processing using one or more of the analyzed memory compression parameters.
- the data block of the decoded picture is restored by the memory restoration process using the analyzed parameters.
- the restored image sample is used for inter-picture prediction processing for decoding a subsequent picture.
- the data block of the decoded picture is restored again by the memory restoration process using the analyzed parameters, and the restored picture sample is output.
- FIG. 3 is a block diagram showing an example of a video encoder device (moving image encoding device) using the present invention.
- This moving image encoding apparatus includes a subtracting unit 300, a converting unit 302, a quantizing unit 304, an entropy encoding unit 324, an inverse quantizing unit 306, an inverse converting unit 308, an adding unit 310, a filtering Unit 312, memory compression unit 314, memory unit 316, memory restoration unit 318, motion detection unit 320, and motion interpolation unit 322.
- the subtraction unit 300 receives the original sample D300, performs subtraction with the prediction sample D328, and outputs a residual value D302.
- the conversion unit 302 inputs the residual value D302 and outputs the converted coefficient D304.
- the quantization unit 304 receives the transform coefficient D304 and outputs a quantized coefficient D306.
- the quantized coefficient D306 is entropy-encoded into the compressed video D330 by the entropy encoding unit 324.
- the inverse quantization unit 306 reads the quantization coefficient D306 and outputs the converted coefficient D308.
- the inverse conversion unit 308 reads the coefficient D308 and outputs a residual value D310.
- the adder 310 receives the residual value D310, adds it to the inter-picture prediction value (prediction sample) D328, and reconstructs the image sample D312.
- the filtering unit 312 reads the reconstructed image sample D312 and outputs a filtered image sample D314.
- the memory compression unit 314 reads the filtered image sample D314 and outputs a compressed data block D320 stored in the memory unit 316.
- the parameter D316 used in the memory compression unit 314 is sent to the entropy encoding unit 324 and encoded in the header of the compressed video.
- These parameters include one or more of the following parameters.
- the width and the height of the input image block;
- a flag indicating whether an explicit coefficient scanning pattern is used in the scanning process, and the scanning pattern when the explicit scanning pattern is used;
- the chroma component packing mode, which indicates whether a block of compressed data contains one component or three components;
- a flag indicating whether some coefficients are explicitly removed in the memory compression process, and which frequency coefficients are removed when explicit coefficient removal is used;
- a flag indicating whether the importance of specific frequency coefficients is changed by sending weights, and the coefficient weights when frequency coefficient weighting is used;
- the target compressed data size;
- the target bit accuracy of the restored samples;
- the selection among multiple compression methods.
- the memory restoration unit 318 reads the compressed data block D322 from the memory unit 316 and outputs the restored image sample D324 to the motion detection unit 320.
- the motion detection unit 320 reads the restored image sample, detects a motion vector, and outputs the motion vector and the restored image sample D326.
- the motion interpolation unit 322 reads the motion vector D326 and the restored image sample D326 and outputs an inter-picture prediction sample D328.
- FIG. 4 is a block diagram showing an example of a video decoder device (video decoding device) using the present invention.
- The moving picture decoding apparatus includes an entropy decoding unit 400, an inverse quantization unit 402, an inverse transform unit 404, an addition unit 406, a filtering unit 416, a motion interpolation unit 408, a first memory restoration unit 410, a memory compression unit 412, a memory unit 413, and a second memory restoration unit 414.
- the entropy decoding unit 400 reads the compressed video D400 and outputs a quantized coefficient D402.
- the entropy decoding unit 400 analyzes the memory compression parameter D416 from the header of the compressed video stream D400.
- The analyzed parameters D416 include one or more of the following parameters:
- the width and the height of the input image block;
- a flag indicating whether an explicit coefficient scanning pattern is used in the scanning process, and the scanning pattern when the explicit scanning pattern is used;
- the chroma component packing mode, which indicates whether a block of compressed data contains one component or three components;
- a flag indicating whether some coefficients are explicitly removed in the memory compression process, and which frequency coefficients are removed when explicit coefficient removal is used;
- a flag indicating whether the importance of specific frequency coefficients is changed by sending weights, and the coefficient weights when frequency coefficient weighting is used;
- the target compressed data size;
- the target bit accuracy of the restored samples;
- the selection among multiple compression methods.
- the inverse quantization unit 402 reads the quantization coefficient D402 and outputs the converted coefficient D404.
- the inverse transform unit 404 reads the transform coefficient D404 and outputs a residual value D406.
- the adder 406 reads the decoded residual value D406 and the inter-picture predicted sample D412 and outputs a reconstructed sample D410.
- the filtering unit 416 reads the reconstructed sample D410 and outputs the filtered sample D424.
- the memory compression unit 412 reads the filtered sample D424 and the analyzed parameter D416, compresses the filtered image into a compressed data block D420, and stores it in the memory unit 413.
- The first memory restoration unit 410 reads the analyzed parameters D416 and the compressed data block D418, restores the data block, and outputs the restored image samples D414.
- the motion interpolation unit 408 reads the restored image sample D414 and outputs an inter-picture prediction sample D412.
- the second memory restoration unit 414 reads the compressed data block D422 and the analyzed parameter D416, and outputs the restored image sample D426.
- FIG. 5 is a flowchart showing the memory compression processing in the embodiment of the present invention.
- the module 500 extracts a block of image samples from the image.
- The size of the block is m × n pixels, where m and n are positive integers.
- An example of the block size is 16 × 4 pixels.
- module 502 converts the block of image samples into a block of coefficients.
- In module 504, each coefficient value is bit-shifted according to a predetermined coefficient weight value, as shown in (Equation 1) below.
- The module 504 step may be omitted.
- The first (DC) coefficient of the block is a positive value.
- A predetermined value (the DC value) is subtracted from the first coefficient to produce a signed value.
- An example of the predetermined value is half of the sum of the largest value that the first coefficient can take and 1; for instance, if the first coefficient can be at most 1023, the predetermined value is 512.
- In module 508, if the block is two-dimensional, the coefficients of the block are scanned according to a scanning pattern, and after the scanning process the coefficients are arranged in a one-dimensional array.
- When the width and the height of the block are equal, zigzag scanning is an example of the scanning pattern.
- The scanned coefficients are encoded using a bit-plane encoding process.
- The bit-plane encoding process makes it possible to reach the target compressed size reliably without performing a quantization process.
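The per-block compression flow described above can be summarized as a minimal sketch in C. The transform and bit-plane coding steps are shown as hypothetical helper functions (forward_transform, bitplane_encode), and the 16×4 block size, the parameter names, and the multiplication used for the weight shift are illustrative assumptions rather than the normative process.

```c
#include <stdint.h>

/* Minimal sketch of the per-block memory compression flow of FIG. 5.
 * forward_transform() and bitplane_encode() are hypothetical helpers. */
#define BLK_W 16
#define BLK_H 4
#define BLK_N (BLK_W * BLK_H)

void forward_transform(const int16_t *pixels, int32_t *coef, int w, int h);
int  bitplane_encode(const int32_t *coef, int n, uint8_t *out, int target_bits);

void memory_compress_block(const int16_t pixels[BLK_N], uint8_t *out,
                           int target_bits, const int8_t weight[BLK_N],
                           const int scan[BLK_N], int dc_pred)
{
    int32_t coef[BLK_N], scanned[BLK_N];

    forward_transform(pixels, coef, BLK_W, BLK_H);     /* module 502              */

    for (int i = 0; i < BLK_N; i++)                    /* module 504, optional    */
        coef[i] *= 1 << weight[i];                     /* left shift by weight    */

    coef[0] -= dc_pred;                                /* subtract the DC value   */

    for (int i = 0; i < BLK_N; i++)                    /* module 508: scan to 1-D */
        scanned[i] = coef[scan[i]];

    bitplane_encode(scanned, BLK_N, out, target_bits); /* bit-plane coding        */
}
```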
- FIG. 6 is a flowchart showing a memory restoration process according to the embodiment of the present invention.
- the coefficient of the block is decoded using a bit plane decoding process.
- Offset bits are added for the missing bit planes.
- Bit planes are missing because the size of the data block is limited, so those bit planes were not encoded by the memory compression process.
- The offset bit addition process adds a bit with value 1 to the bit plane immediately below the last decoded bit plane, for every coefficient that is not 0.
- The coefficients are reverse-scanned into a two-dimensional block of coefficients using a reverse scanning pattern. If the target output block is one-dimensional, this reverse scanning process is omitted.
- Module 606 adds a predetermined value (the DC value) to the first coefficient of the block. Then, in module 608, each coefficient value is bit-shifted according to a predetermined coefficient weight value, as shown in (Equation 2) below.
- The module 608 step may be omitted.
- Each pixel value, that is, each image sample value, is bit-shifted to the right by a predetermined pixel shift value.
- The pixel shift value is obtained by subtracting the target bit depth of the restored image sample from the bit depth of the image sample.
- The bit shift process is performed according to (Equation 3) below.
- Each pixel value is clipped to lie between a minimum value and a maximum value.
- The minimum value is 0, and the maximum value is the largest value representable at the target bit depth.
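A minimal sketch of this final bit-shift and clipping step, assuming the pixel shift value is the difference between the stored and target bit depths as described above; the function and parameter names are illustrative.

```c
#include <stdint.h>

/* Sketch of the final pixel bit shift and clipping steps of the memory
 * restoration process; names are assumptions. */
void pixel_shift_and_clip(int32_t *samples, int n,
                          int sample_bit_depth, int target_bit_depth)
{
    int shift   = sample_bit_depth - target_bit_depth;  /* pixel shift value        */
    int max_val = (1 << target_bit_depth) - 1;

    for (int i = 0; i < n; i++) {
        int v = samples[i] >> shift;                    /* right shift (Equation 3) */
        if (v < 0)       v = 0;                         /* clip to [0, max_val]     */
        if (v > max_val) v = max_val;
        samples[i] = v;
    }
}
```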
- FIG. 7 is a block diagram showing an example of the memory compression unit of the present invention.
- the memory compression unit includes a conversion unit 700, a bit shift unit 702, a DC subtraction unit 704, a scanning unit 706, and a bit plane encoding unit 708.
- The bit shift unit 702 is optional.
- the conversion unit 700 reads the image sample block D700 and the size of the block, performs conversion processing, and outputs a converted coefficient block D716.
- the bit shift unit 702 (if provided in the apparatus) reads the converted coefficient block D716 and the coefficient weight value D704, performs bit shift processing, and outputs the coefficient block to the DC subtraction unit 704.
- the DC subtraction unit 704 reads a coefficient block and a predetermined DC value D706, and outputs a coefficient block having a new DC coefficient value.
- the scanning unit 706 reads the coefficient block D720, the scanning pattern D708, and the coefficient removal flag set D710, and outputs a one-dimensional array D722 of coefficients and a value D724 representing the maximum number of coefficients. The maximum number of coefficients can be calculated by counting the number of coefficients that are not removed by the coefficient removal flag set.
- bit plane encoding unit 708 reads the coefficient array D722, the value D724 indicating the maximum number of coefficients, the target compressed block size D712, and the chroma component packing mode value D714, and outputs the compressed data block D726.
- FIG. 8 is a block diagram showing an example of the memory restoration unit using the present invention.
- the memory restoration unit includes a bit plane decoding unit 800, a scanning unit 804, a DC addition unit 806, a first bit shift unit 808, an inverse conversion unit 810, a second bit shift unit 812, and a clipping unit 814.
- The two bit shift units are optional.
- the bit plane decoding unit 800 reads the compressed data block D800, the value D804 representing the maximum number of coefficients in the block, and the chroma component packing mode D802, and outputs the coefficient array D822.
- the scanning unit 804 reads the coefficient array D822, the scanning pattern D806, and the coefficient removal flag set D808, and outputs a coefficient block D824.
- the DC adder 806 reads the coefficient block D824 and the DC value D810, and outputs a coefficient block D826 having a new DC coefficient value.
- the first bit shift unit 808 reads the coefficient block D826 and the coefficient weight value block D812, and outputs the modified coefficient value block D828.
- the inverse transform unit 810 reads the modified coefficient value block D828 and the block dimension D814, and outputs an image sample block D830.
- the second bit shift unit 812 reads the image sample block D830 and the value D818 representing the bit precision per image sample, and outputs the adjusted image sample block D832.
- The clipping unit 814 reads the image sample block D832 and the value D818 representing the bit precision per image sample, clips the image sample values to the range allowed by that bit precision, and outputs the restored image sample block D834.
- FIG. 9 is a flowchart showing the bit plane encoding process of the memory compression unit in the video encoder of the present invention.
- A chroma component packing mode is selected. In a lower-complexity video encoder implementation, the chroma component packing mode may be selected as 0.
- the selected chroma component packing mode is written into the header of the compressed video stream.
- a comparison is made to see if the selected chroma component packing mode is set to zero.
- If the selected chroma component packing mode is equal to 0, the luminance and chroma samples are each compressed into separate compressed data blocks. If the chroma component packing mode is not 0, module 908 compresses the luminance and chroma samples together into one compressed data block.
- FIG. 10 is a flowchart showing the bit plane encoding process of the memory compression unit in the video decoder of the present invention.
- the chroma component packing mode is read from the header of the compressed video stream.
- Module 1002 performs a comparison to determine if the chroma component packing mode is equal to zero. If the chroma component packing mode is equal to 0, module 1004 compresses the luminance and chroma samples into different compressed data blocks, respectively. If the chroma component packing mode is not 0, module 1006 compresses the luminance and chroma samples simultaneously into one compressed data block.
- FIG. 11 is a flowchart showing the bit plane decoding process of the memory restoration unit in the video encoder and decoder of the present invention.
- the chroma component packing mode is determined. This chroma component packing mode is the same as the chroma component packing mode used in the bit plane encoding process of the memory compression unit.
- Module 1102 then performs a comparison to determine if the chroma component packing mode is equal to zero. If the chroma component packing mode is equal to 0, in module 1104, the fixed-length data block is restored separately to generate a luminance coefficient and a chroma coefficient, respectively. If the chroma component packing mode is not 0, the module 1106 restores each bit plane to generate a luminance coefficient and a chroma coefficient at the same time.
- FIG. 12A to FIG. 12C show the position candidates of the chroma component packing mode in the header of the compressed video stream.
- FIG. 12A shows the position of the chroma component packing mode parameter in the sequence header of the compressed video stream.
- FIG. 12B shows the position of the chroma component packing mode parameter in the picture header of the compressed video stream.
- FIG. 12C shows that the chroma component packing mode parameters can be derived from the lookup table based on the encoded profile parameters, level parameters, or both in the sequence header of the compressed video stream.
- the memory compression process using the chroma component packing mode parameter compresses the reconstructed image samples decoded from the video streams described in FIGS. 12A, 12B, and 12C.
- FIG. 13 is a flowchart showing a bit plane encoding process for one component. This process is performed when the chroma component packing mode is equal to zero.
- the coefficient value is converted into an absolute value and a sign bit value. A value of 0 for the sign bit represents a positive coefficient value, and a value of 1 for the sign bit represents a negative coefficient value.
- the absolute value is converted into a bit plane.
- Module 1304 calculates the maximum bit plane number by finding the number of bit planes necessary to encode the largest absolute coefficient value in the set of coefficient values.
- In module 1306, the calculated maximum bit plane number is written into the compressed data block using a fixed number of bits.
- Module 1308 encodes the bit planes and the sign bits sequentially, starting from the most significant bit plane, until the compressed data block is completely filled.
- FIG. 14 is a schematic diagram showing the conversion process from coefficient values to bit planes. As shown in the diagram, each coefficient value D1400 is converted into an absolute value in binary format and a sign bit D1402. The bit planes D1404 are then formed by slicing the binary absolute values, starting from the most significant bit plane.
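The conversion of FIG. 14 can be sketched as follows; the sign convention matches the description above, while the array sizes and the function name are assumptions.

```c
#include <stdint.h>
#include <stdlib.h>

/* Sketch of FIG. 14: split each coefficient into a sign bit and the bit
 * planes of its absolute value, most significant plane first.
 * MAX_PLANES and the 64-coefficient bound are assumptions. */
#define MAX_PLANES 16

int coef_to_bitplanes(const int32_t *coef, int n,
                      uint8_t sign[64], uint8_t plane[MAX_PLANES][64])
{
    uint32_t max_abs = 0;
    for (int i = 0; i < n; i++) {
        sign[i] = coef[i] < 0;                 /* 0 = positive, 1 = negative */
        uint32_t a = (uint32_t)abs(coef[i]);
        if (a > max_abs) max_abs = a;
    }

    int planes = 0;                            /* number of valid bit planes */
    while ((1u << planes) <= max_abs && planes < MAX_PLANES) planes++;

    for (int p = 0; p < planes; p++)           /* plane 0 = most significant */
        for (int i = 0; i < n; i++)
            plane[p][i] = ((uint32_t)abs(coef[i]) >> (planes - 1 - p)) & 1;

    return planes;                             /* maximum bit plane number   */
}
```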
- FIG. 15 is a schematic diagram showing a data structure of a compressed data unit for one component of an image.
- The compressed data unit contains a value D1500 representing the number of valid planes, followed by the compressed bit planes D1502, starting from the most significant bit plane.
- Each compressed bit plane contains a value D1510 representing the number of most significant bits in the bit plane, run and sign bit pairs (D1526 and D1528), and, if the bit plane is not the most significant bit plane, a set of binary symbols D1530.
- FIG. 16 is a flowchart showing a bit plane encoding process for three components of an image. This process is performed when the chroma component packing mode is equal to 1.
- the module 1600 converts all three component coefficients into absolute values and sign bits. A value of 0 for the sign bit represents a positive coefficient value, and a value of 1 for the sign bit represents a negative coefficient value.
- the absolute value is converted into a bit plane of each component.
- Module 1604 calculates the maximum bit plane number of each component by finding the number of bit planes required to encode the largest absolute coefficient value in that component's set of coefficient values.
- In module 1606, the calculated maximum bit plane numbers of the components are written consecutively into the compressed data block, each using a fixed number of bits.
- Module 1608 encodes the bit planes and the sign bits of the components sequentially, starting from the most significant bit plane of the three components, until the compressed data block is completely filled.
- FIG. 17 is a schematic diagram showing a data structure of a compressed data unit for three components of an image.
- The compressed data unit contains values D1700, D1702, and D1704, each representing the number of valid planes of one component, followed by the compressed bit planes starting from the most significant bit plane.
- Each compressed bit plane contains one plane per component, and each component plane contains a value D1720 representing the number of most significant bits in the bit plane, run and sign bit pairs (D1726 and D1728), and, if the bit plane is not the most significant bit plane, a set of binary symbols D1730.
- FIG. 18 is a flowchart showing a bit plane decoding process for one component of an image.
- a value representing the maximum number of bit planes is read from the compressed data unit using a fixed number of bits.
- Each bit plane and the sign bits are decoded sequentially, starting from the most significant bit plane. Because the data unit is compressed, not all bit planes can be decoded from the data unit. An offset bit with a value of 1 is added to the bit plane immediately below the last successfully decoded bit plane, for every non-zero coefficient. This step reduces the error caused by the missing bit planes.
- the absolute value of the coefficient is reconstructed from the bit plane.
- the coefficient value is reconstructed by converting the absolute value and the sign bit into a signed coefficient value.
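A minimal sketch of this reconstruction, including the offset bit for the planes that were not decoded; the plane layout and array bounds are assumptions carried over from the earlier sketch.

```c
#include <stdint.h>

/* Sketch of FIG. 18: rebuild absolute values from the decoded bit planes,
 * add the offset bit for non-zero coefficients, then re-apply the sign. */
void bitplanes_to_coef(const uint8_t plane[][64], int planes_decoded,
                       int planes_total, const uint8_t *sign,
                       int n, int32_t *coef_out)
{
    for (int i = 0; i < n; i++) {
        uint32_t a = 0;
        for (int p = 0; p < planes_decoded; p++)
            a = (a << 1) | plane[p][i];

        /* account for the lower bit planes that were never decoded */
        a <<= (planes_total - planes_decoded);

        /* offset bit: a 1 in the plane immediately below the last decoded one */
        if (a != 0 && planes_decoded < planes_total)
            a |= 1u << (planes_total - planes_decoded - 1);

        coef_out[i] = sign[i] ? -(int32_t)a : (int32_t)a;
    }
}
```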
- FIG. 19 is a flowchart showing a bit plane decoding process for three components of an image.
- a value representing the maximum number of bit planes of each component is continuously read from the compressed data unit using a fixed number of bits.
- the three-component bit plane and the sign bit are sequentially decoded from the three-component most significant bit plane. Since the data unit is compressed, not all bit planes are successfully decoded from the data unit. An offset bit with a value of 1 is added to the bit plane immediately below the last successfully decoded bit plane for each non-zero coefficient. This step is to reduce errors caused by missing bitplanes.
- the absolute value of the coefficient is reconstructed from the bit plane.
- the three-component coefficient values are reconstructed by converting the absolute values and sign bits into signed coefficient values.
- FIG. 20 is a flowchart showing an encoding process for one bit plane.
- The number of most significant bits in the encoding target bit plane is calculated. This number determines the number of run and sign bit pairs to be encoded.
- In module 2002, a comparison is performed to check whether the encoding target bit plane is the most significant bit plane. Since the most significant bit plane must contain at least one most significant bit, if the encoding target bit plane is the most significant bit plane, the number of most significant bits to be encoded is decremented by 1 in module 2004. Then, in module 2006, the resulting number of most significant bits is written into the compressed bit plane using a variable length code.
- The run value for each most significant bit position is calculated by excluding the coefficient positions whose most significant bit was already found in a previously encoded bit plane.
- the sign bit values of these most significant bit positions are determined.
- Each run value and sign bit value pair is encoded using a variable length code for the run parameter and 1 bit for the sign value.
- For the other positions, whose most significant bit was found in a bit plane above the encoding target bit plane, the bits of the target bit plane are encoded as binary symbols.
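Putting these steps together, one bit plane could be encoded roughly as sketched below. put_vlc() and put_bit() are hypothetical bitstream writers, and the significance map and the grouping of run/sign pairs before the refinement symbols follow the data structure of FIG. 15 as an assumption.

```c
/* Sketch of encoding one bit plane (FIG. 20); significant[] marks positions
 * whose most significant bit appeared in a previously encoded (higher) plane. */
void put_vlc(unsigned v);   /* hypothetical: variable length code of FIG. 22 */
void put_bit(int b);        /* hypothetical: writes a single bit             */

void encode_bitplane(const uint8_t *plane, const uint8_t *sign,
                     uint8_t *significant, int n, int is_top_plane)
{
    int num_msb = 0;                       /* new most significant bits here */
    for (int i = 0; i < n; i++)
        if (plane[i] && !significant[i]) num_msb++;

    /* the most significant plane always holds at least one MSB (module 2004) */
    put_vlc(is_top_plane ? (unsigned)(num_msb - 1) : (unsigned)num_msb);

    /* run / sign pairs for the new most significant bit positions */
    int run = 0;
    for (int i = 0; i < n; i++) {
        if (significant[i]) continue;      /* excluded when counting the run */
        if (plane[i]) {
            put_vlc((unsigned)run);
            put_bit(sign[i]);
            run = 0;
        } else {
            run++;
        }
    }

    /* binary refinement symbols for positions that are already significant */
    for (int i = 0; i < n; i++)
        if (significant[i]) put_bit(plane[i]);

    /* update the significance map for the next (lower) bit plane */
    for (int i = 0; i < n; i++)
        if (plane[i]) significant[i] = 1;
}
```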
- FIG. 21 is a flowchart showing a decoding process for one bit plane.
- a number representing the most significant bit number of the decoding target bit plane is read from the compressed data block.
- Module 2102 performs a comparison to determine whether the decoding target bit plane is the most significant bit plane of the block. If the decoding target bit plane is the most significant bit plane, module 2104 adds 1 to the decoded number of most significant bits. The resulting number of most significant bits also represents the number of run and sign bit pairs to be decoded.
- Each run and sign bit pair is decoded using a variable length code for the run parameter and 1 bit for the sign bit parameter.
- The bits at the other positions, whose most significant bit is present in an already decoded bit plane, are decoded as fixed-length binary symbols.
- FIG. 22 shows an example of a variable length code for encoding the run parameter and the parameter of the number of most significant bits.
- 22A and 22B are examples of variable-length codes that can be easily decoded without a search table.
- An example of a variable length code used to encode the run parameters is shown in FIG. 22A, and an example of a variable length code used to encode the number of most significant bit parameters is shown in FIG. 22B.
- The variable length code shown in FIG. 22A is a zeroth-order Exponential-Golomb code.
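A zeroth-order Exponential-Golomb code can be encoded and decoded without any lookup table, which is the property highlighted above. The sketch below assumes hypothetical write_bit() and read_bit() bit I/O functions.

```c
#include <stdint.h>

/* Sketch of a zeroth-order Exponential-Golomb code; write_bit()/read_bit()
 * are hypothetical bit I/O helpers. */
void write_bit(int b);
int  read_bit(void);

void exp_golomb_encode(uint32_t v)
{
    uint32_t code = v + 1;
    int bits = 0;
    for (uint32_t t = code; t > 1; t >>= 1) bits++;  /* floor(log2(code)) */
    for (int i = 0; i < bits; i++) write_bit(0);     /* prefix of zeros   */
    for (int i = bits; i >= 0; i--)                  /* code, MSB first   */
        write_bit((code >> i) & 1);
}

uint32_t exp_golomb_decode(void)
{
    int zeros = 0;
    while (read_bit() == 0) zeros++;                 /* count the prefix  */
    uint32_t code = 1;
    for (int i = 0; i < zeros; i++)
        code = (code << 1) | (uint32_t)read_bit();
    return code - 1;
}
```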
- FIG. 23 shows an example of encoding one bit plane having only four coefficients.
- the compressed bitplane includes a most significant bit number parameter D2306, a run and sign bit pair (D2308 and D2310), and a binary symbol D2316.
- FIG. 24 is a flowchart showing an explicit scanning process for the memory compression unit and the memory restoration unit in the video encoder of the present invention.
- a new scan pattern value is determined.
- the explicit signal scan pattern flag is set to a value of 1, and in module 2404 it is written into the header of the compressed video stream.
- Module 2406 then writes the new scan pattern value into the header of the compressed video stream.
- module 2408 scans the transformed coefficients of the image reconstructed from the compressed video stream in the manner indicated by the new scan pattern.
- the module 2410 determines a reverse scanning pattern from the scanning pattern.
- the coefficient of the memory restoration process is reverse-scanned using the derived reverse scanning pattern.
- FIG. 25 is a flowchart showing an adaptive scanning process for the memory compression unit and the memory restoration unit in the video decoder of the present invention.
- the module 2500 reads the explicit signal scanning pattern flag from the header of the compressed video stream.
- the explicit signal scanning pattern flag is compared with a value of one. If the explicit signal scan pattern flag is equal to 1, module 2506 reads a new scan pattern value from the header of the compressed video stream. If the explicit signal scan pattern flag is not equal to 1, module 2504 sets the scan pattern to a predetermined value.
- the predetermined scanning pattern is a zigzag scanning pattern.
- module 2508 scans the transformed coefficients of the image decoded from the compressed video stream in the manner indicated by the scan pattern.
- the module 2510 determines a reverse scanning pattern from the scanning pattern.
- the coefficient of the memory restoration process is reversely scanned using the derived reverse scanning pattern.
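As a small illustration of how a reverse scanning pattern can be derived from a scanning pattern (modules 2410 and 2510), assuming scan[i] holds the block position of the i-th scanned coefficient:

```c
/* Sketch: if scan[i] is the block position of the i-th scanned coefficient,
 * then inv_scan[scan[i]] = i restores the block order. */
void derive_reverse_scan(const int *scan, int *inv_scan, int n)
{
    for (int i = 0; i < n; i++)
        inv_scan[scan[i]] = i;
}
```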
- FIGS. 26A and 26B show position candidates of the explicit signal scanning pattern flag and the scanning pattern value in the header of the compressed video stream.
- FIG. 26A shows the positions of the explicit signal scanning pattern flag and the scanning pattern value in the sequence header of the compressed video stream.
- FIG. 26B shows the positions of the explicit signal scanning pattern flag and the scanning pattern value in the picture header of the compressed video stream.
- a memory compression process using explicit signal scan pattern flags and scan pattern values compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 26A and 26B.
- FIG. 27 is a flowchart showing a selective scanning process for the memory compression unit and the memory restoration unit in the video encoder of the present invention.
- a coefficient removal flag value set is determined.
- One example of a method for setting the coefficient removal flags is to set the flags at the higher-frequency positions, so that those positions are removed.
- the coefficient removal presence flag is set to a value of one.
- the coefficient removal presence flag is written in the header of the compressed video stream.
- the determined coefficient removal flag set is written in the header of the compressed video stream.
- the coefficient removal flag set determines which coefficient is removed from the scanning process, and the position where the coefficient removal flag is set to 1 is skipped in the scanning process.
- the coefficient position whose coefficient removal flag is 1 is skipped during the scanning process.
- the coefficient position whose coefficient removal flag is 1 is skipped during the reverse scanning process.
- the coefficient at the position where the coefficient removal flag is set to 1 is set to 0.
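A minimal sketch of the selective scanning behaviour described for FIG. 27 and FIG. 28: flagged positions are skipped during scanning, restored as 0 during reverse scanning, and the count of kept positions gives the maximum number of coefficients. The function and parameter names are assumptions.

```c
#include <stdint.h>

/* Sketch of selective scanning with coefficient removal flags. */
int selective_scan(const int32_t *coef, const int *scan,
                   const uint8_t *removal_flag, int n, int32_t *out)
{
    int count = 0;
    for (int i = 0; i < n; i++)
        if (!removal_flag[scan[i]])          /* skip removed positions       */
            out[count++] = coef[scan[i]];
    return count;                            /* maximum number of coefficients */
}

void selective_reverse_scan(const int32_t *in, const int *scan,
                            const uint8_t *removal_flag, int n, int32_t *coef)
{
    int k = 0;
    for (int i = 0; i < n; i++)              /* removed positions become 0   */
        coef[scan[i]] = removal_flag[scan[i]] ? 0 : in[k++];
}
```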
- FIG. 28 is a flowchart showing a selective scanning process for the memory compression unit and the memory restoration unit in the video decoder of the present invention.
- the coefficient removal presence flag is read from the header of the compressed video stream.
- a comparison is made to determine if the coefficient removal presence flag is equal to one. If the coefficient removal presence flag is equal to 1, module 2806 reads the coefficient removal flag set from the header of the compressed video stream. If the coefficient removal presence flag is not equal to 1, in module 2804, all coefficient removal flag sets are set to 0. The coefficient removal flag set determines which coefficient is removed from the scanning process, and the position where the coefficient removal flag is set to 1 is skipped in the scanning process.
- the coefficient position whose coefficient removal flag is 1 is skipped during the scanning process.
- the coefficient position whose coefficient removal flag is 1 is skipped during the reverse scanning process.
- the coefficient at the position where the coefficient removal flag is set to 1 is set to 0.
- FIGS. 29A and 29B show position candidates of the coefficient removal presence flag and the coefficient removal flag in the header of the compressed video stream.
- FIG. 29A shows the positions of the coefficient removal presence flag and the coefficient removal flag in the sequence header of the compressed video stream.
- FIG. 29B shows the positions of the coefficient removal presence flag and the coefficient removal flag in the picture header of the compressed video stream.
- the memory compression process using the coefficient removal presence flag and the coefficient removal flag compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 29A and 29B.
- FIG. 30 is a flowchart showing an explicit DC prediction process for the memory compression unit and the memory restoration unit in the video encoder of the present invention.
- a new DC prediction value is determined.
- One method for determining the new DC prediction value includes a step of determining the DC value from the uncompressed original image.
- the explicit signal DC flag is set to a value of 1, and in module 3004 it is written into the header of the compressed video stream.
- the determined DC prediction value is written in the same header of the compressed video stream.
- In module 3008 of the memory compression process, the determined DC prediction value is subtracted from the first coefficient of the transformed block.
- In module 3010 of the memory restoration process, the determined DC prediction value is added to the first coefficient of the transformed block.
- FIG. 31 is a flowchart showing an adaptive DC prediction process for the memory compression unit and the memory restoration unit in the video decoder of the present invention.
- the module 3100 reads the explicit signal DC flag from the header of the compressed video stream.
- A comparison is made to determine whether the explicit signal DC flag is equal to 1. If the explicit signal DC flag is equal to 1, module 3106 reads the DC prediction value from the header of the compressed video stream. If the explicit signal DC flag is not equal to 1, module 3104 sets the DC prediction value to a predetermined value. An example of the predetermined value is one half of the sum of the maximum DC value and 1.
- In module 3108 of the memory compression process, the DC prediction value is subtracted from the first coefficient of the transformed block.
- In module 3110 of the memory restoration process, the DC prediction value is added to the first coefficient of the transformed block.
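- Below is a minimal Python sketch of the DC prediction steps described above and of the default value used when no explicit value is signalled (half of the sum of the maximum DC value and 1). The 8-bit sample range and the function names are assumptions for illustration.

```python
# Hypothetical sketch of the DC prediction of modules 3008/3010 and 3104-3110.

def default_dc_prediction(max_dc_value):
    # Predetermined value: half of the sum of the maximum DC value and 1.
    return (max_dc_value + 1) // 2

def subtract_dc(transformed_block, dc_prediction):
    """Memory compression: subtract the DC prediction from the first coefficient."""
    out = list(transformed_block)
    out[0] -= dc_prediction
    return out

def add_dc(transformed_block, dc_prediction):
    """Memory restoration: add the DC prediction back to the first coefficient."""
    out = list(transformed_block)
    out[0] += dc_prediction
    return out

dc_pred = default_dc_prediction(max_dc_value=255)   # 128 for assumed 8-bit samples
compressed = subtract_dc([200, 3, -1, 0], dc_pred)
restored = add_dc(compressed, dc_pred)              # first coefficient is 200 again
```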
- FIGS. 32A and 32B show position candidates of the explicit signal DC flag and the DC predicted value in the header of the compressed video stream.
- FIG. 32A shows the positions of the explicit signal DC flag and the DC prediction value in the sequence header of the compressed video stream.
- FIG. 32B shows the positions of the explicit signal DC flag and the DC prediction value in the picture header of the compressed video stream.
- the memory compression process using the explicit signal DC flag and the DC prediction value compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 32A and 32B.
- FIG. 33 is a flowchart showing a coefficient bit shift process for the memory compression unit and the memory restoration unit in the video encoder of the present invention.
- a weight set of coefficients is determined.
- One example of a method for determining the weights of the coefficients is an optimization-based method aimed at providing the best perceptual quality by weighting the lower frequencies more.
- the coefficient weight value presence flag is set to a value of 1, and in module 3304 it is written into the header of the compressed video stream.
- Module 3306 then writes the coefficient weight set to the same header of the compressed video stream.
- In module 3308 of the memory compression process, the value of each coefficient is bit-shifted to the left based on its coefficient weight value.
- the value of each coefficient is bit-shifted to the right based on the coefficient weight value.
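- A minimal Python sketch of the coefficient bit shift is shown below: compression left-shifts each coefficient by its weight and restoration right-shifts it back. The example weight values are assumptions; a weight of 0 leaves a coefficient unchanged.

```python
# Hypothetical sketch of the coefficient bit shift of modules 3308/3310.

def weight_coefficients(coeffs, weights):
    """Memory compression: left-shift each coefficient by its weight."""
    return [c << w for c, w in zip(coeffs, weights)]

def unweight_coefficients(coeffs, weights):
    """Memory restoration: right-shift each coefficient by its weight."""
    return [c >> w for c, w in zip(coeffs, weights)]

weights = [2, 2, 1, 0]            # weight the lower frequencies more (assumed values)
shifted = weight_coefficients([5, -3, 7, 1], weights)
restored = unweight_coefficients(shifted, weights)
```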
- FIG. 34 is a flowchart showing coefficient bit shift processing for the memory compression unit and the memory restoration unit in the video decoder of the present invention.
- the coefficient weight value presence flag is read from the header of the compressed video stream.
- a comparison is made to determine if the coefficient weight value presence flag is equal to one. If the coefficient weight value presence flag is equal to 1, the module 3406 reads the coefficient weight value set from the header of the compressed video stream. If the coefficient weight value presence flag is not equal to 1, in module 3404, the coefficient weight value set is set to 0.
- In module 3408 of the memory compression process, the value of each coefficient is bit-shifted to the left based on its coefficient weight value.
- the value of each coefficient is bit-shifted to the right based on the coefficient weight value.
- When all coefficient weight values are 0, modules 3408 and 3410 can be omitted.
- FIGS. 35A and 35B show position candidates of the coefficient weight value presence flag and the coefficient weight value set in the header of the compressed video stream.
- FIG. 35A shows the positions of the coefficient weight value presence flag and the coefficient weight value set in the sequence header of the compressed video stream.
- FIG. 35B shows the positions of the coefficient weight value presence flag and the coefficient weight value set in the picture header of the compressed video stream.
- the memory compression process using the coefficient weight value presence flag and the coefficient weight value set compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 35A and 35B.
- FIG. 36 is a flowchart showing an adaptive conversion process for the memory compression unit in the video encoder of the present invention.
- An example of a method for determining an appropriate block size is a method for selecting a block size suitable for a specific memory architecture for a specific application.
- In module 3602, information regarding the selected block size is written into the header of the compressed video stream.
- a block of the reconstructed image sample is acquired. This reconstructed image sample is a sample decoded from the compressed video stream. The size of this block is based on the selected block size.
- one transformation matrix is selected from the plurality of transformation matrices based on the selected block size.
- An example of the transformation matrix is a transformation matrix based on Hadamard transformation.
- Module 3608 performs block transformation processing on the image samples using the selected transformation matrix.
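- The sketch below illustrates, under assumed 2x2 and 4x4 block sizes, how a Hadamard-based transformation matrix could be selected according to the block size and applied to a block of reconstructed samples, together with the corresponding inverse transform used by the memory restoration unit described later. The matrices and scaling are illustrative assumptions, not the embodiment's exact definition.

```python
# Hypothetical sketch of the adaptive (inverse) transform of modules 3600-3608 and 3800-3806.
import numpy as np

HADAMARD = {
    2: np.array([[1,  1],
                 [1, -1]]),
    4: np.array([[1,  1,  1,  1],
                 [1, -1,  1, -1],
                 [1,  1, -1, -1],
                 [1, -1, -1,  1]]),
}

def transform_block(samples, block_size):
    """Forward 2-D Hadamard transform of a block of reconstructed samples."""
    h = HADAMARD[block_size]
    return h @ samples @ h.T

def inverse_transform_block(coeffs, block_size):
    """Inverse transform; the Hadamard matrix is orthogonal up to a scale factor."""
    h = HADAMARD[block_size]
    return (h.T @ coeffs @ h) // (block_size * block_size)

block = np.arange(16).reshape(4, 4)
coeffs = transform_block(block, 4)
restored = inverse_transform_block(coeffs, 4)   # equals the original block
```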
- FIG. 37 is a flowchart showing an adaptive conversion process for the memory compression unit in the video decoder of the present invention.
- In module 3700, information about the block size is read from the header of the compressed video stream.
- In module 3702 of the memory compression process, a block of reconstructed image samples is acquired. These reconstructed image samples are samples decoded from the compressed video stream. The size of the block is based on the decoded block size information.
- One transformation matrix is selected from a plurality of transformation matrices based on the decoded block size information. An example of the transformation matrix is a transformation matrix based on the Hadamard transform.
- In module 3706, the selected transformation matrix is used to transform the block of image samples.
- FIG. 38 is a flowchart showing an adaptive inverse transform process for the memory restoration unit in the video encoder and video decoder of the present invention.
- In module 3800, block size information is determined. This block size information is shared between the memory restoration unit and the memory compression unit provided in the same video encoder or the same video decoder.
- In module 3802 of the memory restoration process, a block of transform coefficients is restored.
- One inverse transformation matrix is selected from a plurality of inverse transformation matrices based on the determined block size. An example of the inverse transformation matrix is an inverse transformation matrix based on the Hadamard transform.
- In module 3806, the inverse transformation of the coefficient block is performed using the selected inverse transformation matrix.
- FIGS. 39A to 39C show position candidates of the block size information, that is, the dimension information, in the header of the compressed video stream.
- FIG. 39A shows the position of the block size information, that is, the dimension information, in the sequence header of the compressed video stream.
- FIG. 39B shows the position of the block size information, that is, the dimension information, in the picture header of the compressed video stream.
- FIG. 39C shows that the block size, that is, the dimension information, can be derived from a lookup table based on the encoded profile parameter, the level parameter, or both in the sequence header of the compressed video stream.
- The memory compression process using the block size, that is, the dimension information, compresses the reconstructed image samples decoded from the video streams described in FIGS. 39A, 39B, and 39C.
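- A small sketch of the lookup-table derivation of FIG. 39C is given below; every table entry is a hypothetical example, since the embodiment does not specify concrete profile, level, or block size values here.

```python
# Hypothetical lookup table keyed by (profile, level); all values are assumptions.
BLOCK_SIZE_TABLE = {
    ("baseline", 30): 4,
    ("baseline", 40): 8,
    ("high",     40): 8,
    ("high",     51): 16,
}

def derive_block_size(profile, level, default=4):
    """Derive the memory compression block size from encoded profile/level parameters."""
    return BLOCK_SIZE_TABLE.get((profile, level), default)

print(derive_block_size("high", 51))   # 16 under the assumed table
```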
- FIG. 40 is a flowchart showing an adaptive compression size bit-plane encoding process for the memory compression unit in the video encoder of the present invention.
- The target compression size is determined. The target compression size depends on how much the implementation cost is to be reduced.
- Information on the target compression size is written in the header of the compressed video stream.
- In module 4004 of the memory compression process, a block of coefficients is acquired. This coefficient block has been decoded from the compressed video stream and transformed.
- In module 4006, bit-plane coding of the coefficients is applied in order from the most significant bit-plane until the target compression size of the data block is reached. Bits that cannot be encoded once the target compression size is reached are discarded.
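- The following Python sketch illustrates bit-plane coding from the most significant plane downward, stopping once an assumed target compression size (in bits) is reached; the raw plane packing used here stands in for the embodiment's actual coding and is only an illustration.

```python
# Hypothetical sketch of the adaptive compression size bit-plane coding (modules 4004-4006).

def bitplane_encode(coeffs, bit_depth, target_bits):
    encoded = []
    for plane in range(bit_depth - 1, -1, -1):        # MSB plane first
        for c in coeffs:
            if len(encoded) == target_bits:
                return encoded                        # remaining bits are discarded
            encoded.append((c >> plane) & 1)
    return encoded

def bitplane_decode(bits, num_coeffs, bit_depth):
    coeffs = [0] * num_coeffs
    it = iter(bits)
    for plane in range(bit_depth - 1, -1, -1):
        for i in range(num_coeffs):
            b = next(it, None)
            if b is None:
                return coeffs                         # missing low planes stay 0
            coeffs[i] |= b << plane
    return coeffs

bits = bitplane_encode([200, 33, 7, 1], bit_depth=8, target_bits=24)
approx = bitplane_decode(bits, num_coeffs=4, bit_depth=8)
```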
- FIG. 41 is a flowchart showing an adaptive compression size bit-plane encoding process for the memory compression unit in the video decoder of the present invention.
- In module 4100, information about the target compression size is read from the header of the compressed video stream.
- In module 4102, a block of coefficients is acquired. This coefficient block has been decoded from the compressed video stream and transformed.
- In module 4104, bit-plane coding of the coefficients is applied in order from the most significant bit-plane until the target compression size of the data block is reached. Bits that cannot be encoded once the target compression size is reached are discarded.
- FIGS. 42A to 42D show position candidates of the target compressed data size information in the header of the compressed video stream.
- FIG. 42A shows the position of the target compressed data size information in the sequence header of the compressed video stream.
- FIG. 42B shows the position of the target compressed data size information in the picture header of the compressed video stream.
- FIG. 42C shows that the target compressed data size information can be derived from the lookup table based on the encoded profile parameter, the level parameter, or both in the sequence header of the compressed video stream.
- FIG. 42D shows that the target compressed data size information can be derived from a lookup table based on the encoded picture width parameter and picture height parameter in the sequence header of the compressed video stream.
- a memory compression process using target compressed data size information compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 42A, 42B, 42C, and 42D.
- FIG. 43 is a flowchart showing an adaptive pixel bit shift and clipping process for the memory restoration unit in the video encoder of the present invention.
- In module 4300, an appropriate target bit accuracy is determined. The appropriate target bit accuracy depends on the bit accuracy of the input image.
- In module 4302, information regarding the target bit accuracy is written in the header of the compressed video stream.
- In module 4304, a restored block of image samples is obtained.
- In module 4306, the bit accuracy of the restored image samples is determined.
- a comparison is made to determine if the accuracy of the restored image sample matches the target bit accuracy. If the bit accuracy of the restored image sample does not match the target bit accuracy, module 4310 shifts the value of each image sample to the right to obtain the target bit accuracy.
- each pixel value is clipped to an integer value range between the maximum and minimum values based on the target bit accuracy.
- An example of the maximum value is a positive maximum integer value with target bit precision, and an example of the minimum value is 0.
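- A minimal sketch of the bit shift and clipping is shown below, assuming 10-bit restored samples and an 8-bit target bit accuracy; the shift amount is the difference between the two accuracies, and the clipping range runs from 0 to the positive maximum integer value at the target accuracy.

```python
# Hypothetical sketch of the adaptive pixel bit shift and clipping (modules 4306-4312).

def shift_and_clip(samples, current_bits, target_bits):
    shift = current_bits - target_bits
    max_value = (1 << target_bits) - 1      # positive maximum at the target accuracy
    out = []
    for s in samples:
        if shift > 0:
            s >>= shift                     # reduce the precision only when needed
        out.append(min(max(s, 0), max_value))
    return out

# 10-bit restored samples reduced to the assumed 8-bit accuracy of the input image.
print(shift_and_clip([1023, 512, -4], current_bits=10, target_bits=8))  # [255, 128, 0]
```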
- FIG. 44 is a flowchart showing adaptive pixel bit shift and clipping processing for the memory restoration unit in the video decoder of the present invention.
- In module 4400, information regarding the target bit accuracy is read from the header of the compressed video stream.
- In module 4402, a restored block of image samples is obtained.
- The bit accuracy of the restored image samples is determined.
- In module 4406, a comparison is made to determine whether the bit accuracy of the restored image samples matches the target bit accuracy. If the bit accuracy of the restored image samples does not match the target bit accuracy, module 4408 shifts the value of each image sample to the right to obtain the target bit accuracy. The degree of bit shift is based on the difference between the bit accuracy of the restored image samples and the target bit accuracy.
- each pixel value is clipped to an integer value range between the maximum and minimum values based on the target bit precision.
- An example of the maximum value is a positive maximum integer value with target bit precision, and an example of the minimum value is 0.
- FIGS. 45A to 45C show position candidates of the target bit accuracy information in the header of the compressed video stream.
- FIG. 45A shows the position of the target bit accuracy information in the sequence header of the compressed video stream.
- FIG. 45B shows the position of the target bit accuracy information in the picture header of the compressed video stream.
- FIG. 45C shows that the target bit accuracy information can be derived from a lookup table based on the encoded profile parameter, the level parameter, or both in the sequence header of the compressed video stream.
- the memory restoration process using the target bit accuracy information restores the reconstructed image samples generated from the compressed video streams of FIGS. 45A, 45B, and 45C.
- As shown in FIG. 46A, the memory compression method selection parameter can be arranged in a sequence parameter set (SPS) or a sequence header. Further, the memory compression method selection parameter can be arranged in a picture parameter set (PPS) or a picture header, as shown in FIG. 46B. Further, the memory compression method selection parameter may be derived from a profile parameter or a level parameter in a sequence parameter set or a sequence header, as shown in FIG. 46C. Further, as shown in FIG. 46D, the memory compression method selection parameter may be derived from the picture width (Pict_Width) and picture height (Pict_Hight) in the sequence parameter set or the sequence header.
- FIG. 47 shows a flowchart of the adaptive memory compression method in the video decoding apparatus of the present invention.
- memory compression scheme selection parameters are read from the header of the compressed video stream.
- In module 4702, it is determined whether or not the memory compression method selection parameter has a predetermined value. If the memory compression method selection parameter has the predetermined value, module 4706 compresses the reconstructed samples for each sample group using a compression method based on adaptive bit quantization. If the memory compression method selection parameter does not have the predetermined value, module 4704 compresses the reconstructed samples pixel by pixel using a simple pixel bit right shift. Finally, in module 4708, the compressed samples are stored in the memory unit.
- FIG. 48 shows a flowchart of an adaptive memory compression method in the video encoding apparatus of the present invention.
- one memory compression method is selected from a plurality of memory compression methods.
- memory compression scheme selection parameters are written to the header of the compressed video stream.
- In module 4804, it is determined whether or not the memory compression method selection parameter has a predetermined value. If the memory compression method selection parameter has the predetermined value, module 4808 compresses the reconstructed samples for each sample group using a compression method based on adaptive bit quantization. If the memory compression method selection parameter does not have the predetermined value, module 4806 compresses the reconstructed samples pixel by pixel using a simple pixel bit right shift. Finally, in module 4810, the compressed samples are stored in the memory unit.
- FIG. 49 shows a flowchart of an adaptive memory restoration method in the video encoding device and video decoding device of the present invention.
- a memory compression method selection parameter is determined.
- the compressed sample is retrieved from the memory unit.
- In module 4904, it is determined whether or not the memory compression method selection parameter has a predetermined value. If the memory compression method selection parameter has the predetermined value, in module 4908, the compressed samples are decompressed for each sample group using a decompression method based on adaptive bit dequantization. If the memory compression method selection parameter does not have the predetermined value, in module 4906, the compressed samples are restored pixel by pixel using a simple pixel bit left shift. Finally, in module 4910, inter-picture prediction is performed using the restored reconstructed samples.
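- The sketch below contrasts the two branches described above: a simple per-pixel bit right shift (and the corresponding left shift on restoration) when the selection parameter does not have the predetermined value, and a group-wise adaptive bit quantization otherwise. The group size, shift amount, number of quantization bits, and predetermined value are all assumptions for illustration, not values taken from the embodiment.

```python
# Hypothetical sketch of the adaptive memory compression/restoration selection
# of modules 4702-4708 and 4902-4910.

PREDETERMINED = 1   # assumed value of the memory compression method selection parameter

def compress(samples, selection_param, shift=2, group=4, q_bits=4):
    if selection_param != PREDETERMINED:
        return [s >> shift for s in samples]              # simple pixel bit right shift
    out = []
    for i in range(0, len(samples), group):               # adaptive bit quantization per group
        block = samples[i:i + group]
        base = min(block)
        spread = max(max(block) - base, 1)
        step = max((spread + (1 << q_bits) - 1) >> q_bits, 1)
        out.append((base, step, [(s - base) // step for s in block]))
    return out

def restore(data, selection_param, shift=2):
    if selection_param != PREDETERMINED:
        return [s << shift for s in data]                 # simple pixel bit left shift
    out = []
    for base, step, quantized in data:
        out.extend(base + q * step for q in quantized)
    return out

stored = compress([10, 12, 11, 13, 200, 201, 199, 202], selection_param=1)
decoded = restore(stored, selection_param=1)
```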
- the storage medium may be any medium that can record a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory.
- FIG. 50 is a diagram illustrating an overall configuration of a content supply system ex100 that realizes a content distribution service.
- the communication service providing area is divided into desired sizes, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in each cell.
- In the content supply system ex100, devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104, and the base stations ex106 to ex110.
- each device may be directly connected to the telephone network ex104 without going through the base stations ex106 to ex110 which are fixed wireless stations.
- the devices may be directly connected to each other via short-range wireless or the like.
- The camera ex113 is a device capable of shooting moving images, such as a digital video camera, and the camera ex116 is a device capable of shooting still images and moving images, such as a digital camera.
- The mobile phone ex114 may be a GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), W-CDMA (Wideband-Code Division Multiple Access), LTE, or HSPA (High Speed Packet Access) mobile phone, or a PHS (Personal Handyphone System) terminal; any of these may be used.
- the camera ex113 and the like are connected to the streaming server ex103 through the base station ex109 and the telephone network ex104, thereby enabling live distribution and the like.
- In live distribution, content (for example, video of a live music performance) captured by the user with the camera ex113 is encoded as described in the above embodiments and transmitted to the streaming server ex103.
- The streaming server ex103 distributes the transmitted content data as a stream to requesting clients.
- The clients include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, the game machine ex115, and the like, which are capable of decoding the encoded data.
- Each device that receives the distributed data decodes the received data and reproduces it.
- The encoding of the captured data may be performed by the camera ex113 or by the streaming server ex103 that performs the data transmission processing, or the two may share the processing.
- Similarly, the decoding of the distributed data may be performed by the client or by the streaming server ex103, or the two may share the processing.
- still images and / or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
- the encoding process in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or may be performed in a shared manner.
- encoding / decoding processes are generally performed by the computer ex111 and the LSI ex500 included in each device.
- the LSI ex500 may be configured as a single chip or a plurality of chips.
- Moving image encoding/decoding software may be incorporated into a recording medium (a CD-ROM, flexible disk, hard disk, or the like) that can be read by the computer ex111 or the like, and the encoding/decoding processing may be performed using that software.
- moving image data acquired by the camera may be transmitted.
- the moving image data at this time is data encoded by the LSI ex500 included in the mobile phone ex114.
- the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, and distribute data in a distributed manner.
- the encoded data can be received and reproduced by the client.
- The information transmitted by the user can be received, decoded, and reproduced by the client in real time, so that even a user who does not have special rights or facilities can realize personal broadcasting.
- At least one of the video encoding device and the video decoding device of each of the above embodiments can be incorporated in the digital broadcasting system ex200.
- In the broadcast station ex201, multiplexed data obtained by multiplexing music data and the like with video data is transmitted to a communication or broadcasting satellite ex202 via radio waves.
- This video data is data encoded by the moving image encoding method described in the above embodiments.
- the broadcasting satellite ex202 transmits a radio wave for broadcasting, and the home antenna ex204 capable of receiving the satellite broadcast receives the radio wave.
- the received multiplexed data is decoded and reproduced by a device such as the television (receiver) ex300 or the set top box (STB) ex217.
- The moving picture decoding apparatus or moving picture encoding apparatus described in the above embodiments can be mounted in a reader/recorder ex218 that reads and decodes multiplexed data recorded on a recording medium ex215 such as a DVD or BD, or that encodes a video signal onto the recording medium ex215 and, in some cases, multiplexes it with a music signal and writes the result. In this case, the reproduced video signal is displayed on the monitor ex219, and the video signal can be reproduced by another device or system using the recording medium ex215 on which the multiplexed data is recorded.
- a moving picture decoding apparatus may be mounted in a set-top box ex217 connected to a cable ex203 for cable television or an antenna ex204 for satellite / terrestrial broadcasting and displayed on the monitor ex219 of the television. At this time, the moving picture decoding apparatus may be incorporated in the television instead of the set top box.
- FIG. 52 is a diagram illustrating a television (receiver) ex300 that uses the video decoding method and the video encoding method described in each of the above embodiments.
- The television ex300 includes a tuner that obtains or outputs, via the antenna ex204 or the cable ex203 that receives the broadcast, multiplexed data in which audio data is multiplexed with video data; a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates multiplexed data to be transmitted to the outside; and a multiplexing/demultiplexing unit ex303 that separates the demodulated multiplexed data into video data and audio data, or multiplexes video data and audio data encoded by the signal processing unit ex306.
- The television ex300 also includes the signal processing unit ex306, which has an audio signal processing unit ex304 and a video signal processing unit ex305 that decode the audio data and the video data, respectively, or encode the respective information, and an output unit ex309 that outputs the decoded audio signal and the decoded video signal.
- the television ex300 includes an interface unit ex317 including an operation input unit ex312 that receives an input of a user operation.
- the television ex300 includes a control unit ex310 that controls each unit in an integrated manner, and a power supply circuit unit ex311 that supplies power to each unit.
- The interface unit ex317 may include, in addition to the operation input unit ex312, a bridge ex313 connected to an external device such as the reader/recorder ex218, a slot unit for attaching a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like.
- The recording medium ex216 can electrically record information by means of a nonvolatile/volatile semiconductor memory element stored in it.
- Each part of the television ex300 is connected to each other via a synchronous bus.
- the television ex300 receives a user operation from the remote controller ex220 or the like, and demultiplexes the multiplexed data demodulated by the modulation / demodulation unit ex302 by the multiplexing / demultiplexing unit ex303 based on the control of the control unit ex310 having a CPU or the like. Furthermore, in the television ex300, the separated audio data is decoded by the audio signal processing unit ex304, and the separated video data is decoded by the video signal processing unit ex305 using the decoding method described in the above embodiments.
- the decoded audio signal and video signal are output from the output unit ex309 to the outside.
- these signals may be temporarily stored in the buffers ex318, ex319, etc. so that the audio signal and the video signal are reproduced in synchronization.
- the television ex300 may read multiplexed data from recording media ex215 and ex216 such as a magnetic / optical disk and an SD card, not from broadcasting. Next, a configuration in which the television ex300 encodes an audio signal or a video signal and transmits the signal to the outside or writes it to a recording medium will be described.
- The television ex300 receives a user operation from the remote controller ex220 or the like, and based on the control of the control unit ex310, the audio signal is encoded by the audio signal processing unit ex304 and the video signal is encoded by the video signal processing unit ex305 using the encoding method described in the above embodiments.
- the encoded audio signal and video signal are multiplexed by the multiplexing / demultiplexing unit ex303 and output to the outside. When multiplexing, these signals may be temporarily stored in the buffers ex320 and ex321 so that the audio signal and the video signal are synchronized.
- A plurality of buffers ex318, ex319, ex320, and ex321 may be provided as illustrated, or one or more buffers may be shared. Further, in addition to the illustrated example, data may be stored in a buffer, for example between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, so as to prevent system overflow and underflow.
- The television ex300 may also have a configuration for receiving AV input from a microphone or a camera, and may perform encoding processing on the data acquired from them.
- Although the television ex300 has been described here as being capable of the above encoding processing, multiplexing, and external output, it may instead be configured so that it cannot perform these processes and is capable only of the reception, decoding, and external output described above.
- The decoding or encoding process may be performed by either the television ex300 or the reader/recorder ex218, or the television ex300 and the reader/recorder ex218 may share the processing with each other.
- FIG. 53 shows the configuration of the information reproducing / recording unit ex400 when data is read from or written to the optical disk.
- the information reproducing / recording unit ex400 includes elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 described below.
- the optical head ex401 irradiates a laser spot on the recording surface of the recording medium ex215 that is an optical disc to write information, and detects information reflected from the recording surface of the recording medium ex215 to read the information.
- the modulation recording unit ex402 electrically drives a semiconductor laser built in the optical head ex401 and modulates the laser beam according to the recording data.
- The reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, separates and demodulates the signal components recorded on the recording medium ex215, and reproduces the necessary information.
- the buffer ex404 temporarily holds information to be recorded on the recording medium ex215 and information reproduced from the recording medium ex215.
- the disk motor ex405 rotates the recording medium ex215.
- the servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs a laser spot tracking process.
- the system control unit ex407 controls the entire information reproduction / recording unit ex400.
- The above read and write processes are realized by the system control unit ex407 using the various kinds of information held in the buffer ex404, generating and adding new information as necessary, and recording and reproducing information through the optical head ex401 while operating the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 in a coordinated manner.
- the system control unit ex407 is composed of, for example, a microprocessor, and executes these processes by executing a read / write program.
- the optical head ex401 has been described as irradiating a laser spot, but it may be configured to perform higher-density recording using near-field light.
- FIG. 54 shows a schematic diagram of a recording medium ex215 that is an optical disk.
- Guide grooves (grooves) are formed on the recording surface of the recording medium ex215, and address information indicating the absolute position on the disc is recorded in advance on the information track ex230 by changing the shape of the grooves.
- This address information includes information for specifying the position of the recording block ex231 that is a unit for recording data, and the recording block is specified by reproducing the information track ex230 and reading the address information in a recording or reproducing apparatus.
- the recording medium ex215 includes a data recording area ex233, an inner peripheral area ex232, and an outer peripheral area ex234.
- the area used for recording the user data is the data recording area ex233, and the inner circumference area ex232 and the outer circumference area ex234 arranged on the inner circumference or outer circumference of the data recording area ex233 are used for specific purposes other than user data recording. Used.
- the information reproducing / recording unit ex400 reads / writes encoded audio data, video data, or multiplexed data obtained by multiplexing these data with respect to the data recording area ex233 of the recording medium ex215.
- an optical disk such as a single-layer DVD or BD has been described as an example.
- the present invention is not limited to these, and an optical disk having a multilayer structure and capable of recording other than the surface may be used.
- An optical disc having a multi-dimensional recording/reproducing structure, such as one that records information at the same location on the disc using light of different wavelengths or records different layers of information from various angles, may also be used.
- the car ex210 having the antenna ex205 can receive data from the satellite ex202 and the like, and the moving image can be reproduced on a display device such as the car navigation ex211 that the car ex210 has.
- the configuration of the car navigation ex211 may include a configuration in which a GPS receiving unit is added to the configuration illustrated in FIG.
- FIG. 55A is a diagram showing the mobile phone ex114 using the moving picture decoding method and the moving picture encoding method described in the above embodiment.
- The mobile phone ex114 includes an antenna ex350 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex365 capable of capturing video and still images, and a display unit ex358 such as a liquid crystal display for displaying decoded data such as video captured by the camera unit ex365 and video received by the antenna ex350.
- The mobile phone ex114 further includes a main body unit having an operation key unit ex366, an audio output unit ex357 such as a speaker for outputting audio, an audio input unit ex356 such as a microphone for inputting audio, a memory unit ex367 for storing encoded or decoded data such as captured video, still images, recorded audio, received video, still images, and mail, and a slot unit ex364 serving as an interface with a recording medium that likewise stores data.
- In the mobile phone ex114, the main control unit ex360, which comprehensively controls each unit of the main body including the display unit ex358 and the operation key unit ex366, is connected via a bus ex370 to a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, an LCD (Liquid Crystal Display) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367.
- the power supply circuit unit ex361 starts up the mobile phone ex114 in an operable state by supplying power from the battery pack to each unit.
- the mobile phone ex114 converts the audio signal collected by the audio input unit ex356 in the voice call mode into a digital audio signal by the audio signal processing unit ex354 based on the control of the main control unit ex360 having a CPU, a ROM, a RAM, and the like. This is subjected to spectrum spread processing by the modulation / demodulation unit ex352, digital-analog conversion processing and frequency conversion processing by the transmission / reception unit ex351, and then transmitted via the antenna ex350.
- In the voice call mode, the mobile phone ex114 amplifies the data received via the antenna ex350, performs frequency conversion processing and analog-to-digital conversion processing, performs spectrum despreading processing in the modulation/demodulation unit ex352, converts the result into an analog audio signal in the audio signal processing unit ex354, and then outputs it from the audio output unit ex357.
- the text data of the e-mail input by operating the operation key unit ex366 of the main unit is sent to the main control unit ex360 via the operation input control unit ex362.
- the main control unit ex360 performs spread spectrum processing on the text data in the modulation / demodulation unit ex352, performs digital analog conversion processing and frequency conversion processing in the transmission / reception unit ex351, and then transmits the text data to the base station ex110 via the antenna ex350.
- When an e-mail is received, substantially the reverse processing is performed on the received data, and the result is output to the display unit ex358.
- the video signal processing unit ex355 compresses the video signal supplied from the camera unit ex365 by the moving image encoding method described in the above embodiments.
- the encoded video data is sent to the multiplexing / demultiplexing unit ex353.
- the audio signal processing unit ex354 encodes the audio signal picked up by the audio signal input unit ex356 while the camera unit ex365 images a video, a still image, and the like, and the encoded audio data is sent to the multiplexing / demultiplexing unit ex353. Send it out.
- The multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354 by a predetermined method, and the resulting multiplexed data is subjected to spread spectrum processing by the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 and to digital-to-analog conversion processing and frequency conversion processing by the transmission/reception unit ex351, and is then transmitted via the antenna ex350.
- The multiplexing/demultiplexing unit ex353 separates the multiplexed data into a video data bit stream and an audio data bit stream, supplies the encoded video data to the video signal processing unit ex355 via the synchronization bus ex370, and supplies the encoded audio data to the audio signal processing unit ex354.
- The video signal processing unit ex355 decodes the video signal using a video decoding method corresponding to the video encoding method described in each of the above embodiments, and, for example, video and still images included in a moving image file linked to a web page are displayed on the display unit ex358 via the LCD control unit ex359.
- the audio signal processing unit ex354 decodes the audio signal, and the audio output unit ex357 outputs the audio.
- A terminal such as the mobile phone ex114 may be implemented as a transmitting and receiving terminal having both an encoder and a decoder, as a transmitting terminal having only an encoder, or as a receiving terminal having only a decoder.
- In the above description, multiplexed data in which music data is multiplexed with video data is received and transmitted; however, the data may be data in which character data related to the video is multiplexed in addition to the audio data, or it may be the video data itself rather than multiplexed data.
- As described above, the moving picture encoding method or the moving picture decoding method described in each of the above embodiments can be used in any of the devices and systems described above, and by doing so, the effects described in the above embodiments can be obtained.
- Multiplexed data obtained by multiplexing audio data or the like with video data is configured to include identification information indicating which standard the video data conforms to.
- FIG. 56 is a diagram showing a structure of multiplexed data.
- multiplexed data is obtained by multiplexing one or more of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream.
- the video stream indicates the main video and sub-video of the movie
- The audio stream indicates the main audio portion of the movie and the sub-audio to be mixed with the main audio.
- the presentation graphics stream indicates the subtitles of the movie.
- the main video indicates a normal video displayed on the screen
- the sub-video is a video displayed on a small screen in the main video.
- the interactive graphics stream indicates an interactive screen created by arranging GUI components on the screen.
- The video stream is encoded by the moving image encoding method or apparatus described in the above embodiments, or by a moving image encoding method or apparatus conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
- the audio stream is encoded by a method such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.
- Each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the movie images, 0x1100 to 0x111F to the audio streams, 0x1200 to 0x121F to the presentation graphics, 0x1400 to 0x141F to the interactive graphics streams, 0x1B00 to 0x1B1F to video streams used for sub-pictures, and 0x1A00 to 0x1A1F to audio streams used for sub-audio to be mixed with the main audio.
- FIG. 57 is a diagram schematically showing how multiplexed data is multiplexed.
- a video stream ex235 composed of a plurality of video frames and an audio stream ex238 composed of a plurality of audio frames are converted into PES packet sequences ex236 and ex239, respectively, and converted into TS packets ex237 and ex240.
- the data of the presentation graphics stream ex241 and interactive graphics ex244 are converted into PES packet sequences ex242 and ex245, respectively, and further converted into TS packets ex243 and ex246.
- the multiplexed data ex247 is configured by multiplexing these TS packets into one stream.
- FIG. 58 shows in more detail how the video stream is stored in the PES packet sequence.
- the first row in FIG. 58 shows a video frame sequence of the video stream.
- the second level shows a PES packet sequence.
- A plurality of Video Presentation Units in the video stream, that is, I pictures, B pictures, and P pictures, are divided picture by picture and stored in the payloads of the PES packets.
- Each PES packet has a PES header, and a PTS (Presentation Time-Stamp) that is a display time of a picture and a DTS (Decoding Time-Stamp) that is a decoding time of a picture are stored in the PES header.
- PTS Presentation Time-Stamp
- DTS Decoding Time-Stamp
- FIG. 59 shows the format of TS packets that are finally written in the multiplexed data.
- the TS packet is a 188-byte fixed-length packet composed of a 4-byte TS header having information such as a PID for identifying a stream and a 184-byte TS payload for storing data.
- the PES packet is divided and stored in the TS payload.
- When a TS packet is written in the multiplexed data, a 4-byte TP_Extra_Header is added to the TS packet to form a 192-byte source packet.
- In the TP_Extra_Header, information such as an ATS (Arrival_Time_Stamp) is described.
- ATS indicates the transfer start time of the TS packet to the PID filter of the decoder.
- source packets are arranged in the multiplexed data, and the number incremented from the head of the multiplexed data is called SPN (source packet number).
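- As a rough illustration of the layout described above, the sketch below parses an assumed 192-byte source packet into its 4-byte TP_Extra_Header and 188-byte TS packet and extracts the PID from the TS header; the ATS bit width and the field extraction are simplified assumptions, not a full TS parser.

```python
# Hypothetical sketch of the source packet layout (TP_Extra_Header + 188-byte TS packet).

def parse_source_packet(source_packet: bytes):
    assert len(source_packet) == 192          # 4-byte TP_Extra_Header + 188-byte TS packet
    tp_extra = source_packet[:4]
    ts_packet = source_packet[4:]
    ats = int.from_bytes(tp_extra, "big") & 0x3FFFFFFF   # assumed 30-bit ATS field
    pid = ((ts_packet[1] & 0x1F) << 8) | ts_packet[2]    # 13-bit PID from the TS header
    payload = ts_packet[4:]                              # 184-byte TS payload
    return ats, pid, payload

pkt = bytes([0x00, 0x00, 0x00, 0x10]) + bytes([0x47, 0x01, 0x00, 0x10]) + bytes(184)
ats, pid, payload = parse_source_packet(pkt)   # pid == 0x100, len(payload) == 184
```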
- TS packets included in the multiplexed data include PAT (Program Association Table), PMT (Program Map Table), PCR (Program Clock Reference), and the like in addition to each stream such as video / audio / caption.
- PAT indicates what the PID of the PMT used in the multiplexed data is, and the PID of the PAT itself is registered as 0.
- the PMT has the PID of each stream such as video / audio / subtitles included in the multiplexed data and the attribute information of the stream corresponding to each PID, and has various descriptors related to the multiplexed data.
- the descriptor includes copy control information for instructing permission / non-permission of copying of multiplexed data.
- The PCR contains STC time information corresponding to the ATS at which the PCR packet is transferred to the decoder.
- FIG. 60 is a diagram for explaining the data structure of the PMT in detail.
- a PMT header describing the length of data included in the PMT is arranged at the head of the PMT.
- a plurality of descriptors related to multiplexed data are arranged.
- the copy control information and the like are described as descriptors.
- a plurality of pieces of stream information regarding each stream included in the multiplexed data are arranged.
- the stream information includes a stream descriptor in which a stream type, a stream PID, and stream attribute information (frame rate, aspect ratio, etc.) are described to identify a compression codec of the stream.
- the multiplexed data is recorded together with the multiplexed data information file.
- the multiplexed data information file is management information of multiplexed data, has one-to-one correspondence with the multiplexed data, and includes multiplexed data information, stream attribute information, and an entry map.
- the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time.
- the system rate indicates a maximum transfer rate of multiplexed data to a PID filter of a system target decoder described later.
- the ATS interval included in the multiplexed data is set to be equal to or less than the system rate.
- the playback start time is the PTS of the first video frame of the multiplexed data
- the playback end time is set by adding the playback interval for one frame to the PTS of the video frame at the end of the multiplexed data.
- attribute information about each stream included in the multiplexed data is registered for each PID.
- the attribute information has different information for each video stream, audio stream, presentation graphics stream, and interactive graphics stream.
- The video stream attribute information includes information such as the compression codec used to compress the video stream, the resolution of the individual picture data constituting the video stream, the aspect ratio, and the frame rate.
- The audio stream attribute information includes information such as the compression codec used to compress the audio stream, the number of channels included in the audio stream, the supported language, and the sampling frequency. These pieces of information are used to initialize the decoder before playback by the player.
- In the present embodiment, the stream type included in the PMT among the multiplexed data is used, and when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. Specifically, the stream type or the video stream attribute information included in the PMT is set so as to indicate that the video data is generated by the video encoding method or apparatus shown in each of the above embodiments.
- FIG. 63 shows the steps of the moving picture decoding method according to the present embodiment.
- step exS100 the stream type included in the PMT or the video stream attribute information included in the multiplexed data information is acquired from the multiplexed data.
- In step exS101, it is determined whether or not the stream type or the video stream attribute information indicates that the multiplexed data is generated by the moving picture encoding method or apparatus described in the above embodiments.
- When the stream type or the video stream attribute information indicates multiplexed data generated by the moving picture encoding method or apparatus described in the above embodiments, decoding is performed in step exS102 by the moving picture decoding method shown in each of the above embodiments.
- When the stream type or the video stream attribute information indicates conformance to a conventional standard, decoding is performed by a moving image decoding method compliant with that conventional standard.
- FIG. 64 shows a configuration of an LSI ex500 that is made into one chip.
- the LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and each element is connected via a bus ex510.
- the power supply circuit unit ex505 starts up to an operable state by supplying power to each unit when the power supply is in an on state.
- When encoding is performed, the LSI ex500 receives an AV signal input from the microphone ex117, the camera ex113, and the like via the AV I/O ex509, based on the control of the control unit ex501 including the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like.
- the input AV signal is temporarily stored in an external memory ex511 such as SDRAM.
- The accumulated data is divided into portions as appropriate according to the processing amount and the processing speed and sent to the signal processing unit ex507, where the signal processing unit ex507 encodes the audio signal and/or the video signal.
- the encoding process of the video signal is the encoding process described in the above embodiments.
- The signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data depending on the circumstances, and outputs the result to the outside from the stream I/O ex506.
- the output multiplexed data is transmitted to the base station ex107 or written to the recording medium ex215. It should be noted that data should be temporarily stored in the buffer ex508 so that the data is synchronized when multiplexed.
- Although the memory ex511 has been described as being external to the LSI ex500, it may instead be included inside the LSI ex500.
- the number of buffers ex508 is not limited to one, and a plurality of buffers may be provided.
- the LSI ex500 may be made into one chip or a plurality of chips.
- In the above description, the control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, but the configuration of the control unit ex501 is not limited to this configuration.
- the signal processing unit ex507 may further include a CPU.
- the CPU ex502 may be configured to include a signal processing unit ex507 or, for example, an audio signal processing unit that is a part of the signal processing unit ex507.
- the control unit ex501 is configured to include a signal processing unit ex507 or a CPU ex502 having a part thereof.
- Although the term LSI is used here, depending on the degree of integration, it may also be called an IC, a system LSI, a super LSI, or an ultra LSI.
- the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
- An FPGA (Field Programmable Gate Array) or a reconfigurable processor that can reconfigure the connection and setting of circuit cells inside the LSI may be used.
- FIG. 65 shows a configuration ex800 in the present embodiment.
- the drive frequency switching unit ex803 sets the drive frequency high when the video data is generated by the moving image encoding method or apparatus described in the above embodiments. Then, it instructs the decoding processing unit ex801 that executes the moving picture decoding method described in each of the above embodiments to decode the video data.
- On the other hand, when the video data is video data compliant with a conventional standard, the drive frequency switching unit ex803 sets the drive frequency lower than when the video data is generated by the moving picture encoding method or apparatus shown in the above embodiments. It then instructs the decoding processing unit ex802 compliant with the conventional standard to decode the video data.
- More specifically, the drive frequency switching unit ex803 includes the CPU ex502 and the drive frequency control unit ex512 shown in FIG. 64.
- the decoding processing unit ex801 that executes the moving picture decoding method shown in each of the above embodiments and the decoding processing unit ex802 that complies with the conventional standard correspond to the signal processing unit ex507 in FIG.
- the CPU ex502 identifies which standard the video data conforms to. Then, based on the signal from the CPU ex502, the drive frequency control unit ex512 sets the drive frequency. Further, based on the signal from the CPU ex502, the signal processing unit ex507 decodes the video data.
- For the identification of the video data, it is conceivable, for example, to use the identification information described in Embodiment 9.
- the identification information is not limited to that described in Embodiment 9, and any information that can identify which standard the video data conforms to may be used. For example, it is possible to identify which standard the video data conforms to based on an external signal that identifies whether the video data is used for a television or a disk. In some cases, identification may be performed based on such an external signal. In addition, the selection of the driving frequency in the CPU ex502 may be performed based on, for example, a lookup table in which video data standards and driving frequencies are associated with each other as shown in FIG. The look-up table is stored in the buffer ex508 or the internal memory of the LSI, and the CPU ex502 can select the drive frequency by referring to this look-up table.
- FIG. 66 shows steps for executing the method of the present embodiment.
- the signal processing unit ex507 acquires identification information from the multiplexed data.
- the CPU ex502 identifies whether the video data is generated by the encoding method or apparatus described in each of the above embodiments based on the identification information.
- the CPU ex502 sends a signal for setting the drive frequency high to the drive frequency control unit ex512. Then, the drive frequency control unit ex512 sets a high drive frequency.
- On the other hand, in step exS203, the CPU ex502 sends a signal for setting the drive frequency low to the drive frequency control unit ex512. The drive frequency control unit ex512 then sets the drive frequency lower than in the case where the video data is generated by the encoding method or apparatus described in the above embodiments.
- the power saving effect can be further enhanced by changing the voltage applied to the LSI ex500 or the device including the LSI ex500 in conjunction with the switching of the driving frequency.
- the drive frequency is set to be low, it is conceivable that the voltage applied to the LSI ex500 or the device including the LSI ex500 is set low as compared with the case where the drive frequency is set high.
- The method of setting the drive frequency is not limited to the setting method described above; for example, the drive frequency may be set high when the processing amount for decoding is large and set low when the processing amount for decoding is small.
- For example, when the amount of processing for decoding video data compliant with the MPEG4-AVC standard is larger than the amount of processing for decoding video data generated by the moving picture encoding method or apparatus described in the above embodiments, it is conceivable to reverse the drive frequency settings from the case described above.
- Furthermore, the method of setting the drive frequency is not limited to a configuration that lowers the drive frequency. For example, when the identification information indicates that the video data is generated by the moving picture encoding method or apparatus described in the above embodiments, the voltage applied to the LSI ex500 or the apparatus including the LSI ex500 may be set high, and when it indicates that the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, the voltage may be set low.
- As another example, when the identification information indicates that the video data conforms to a conventional standard, it is conceivable to temporarily stop the driving of the CPU ex502 because there is spare processing capacity. Even when the identification information indicates that the video data is generated by the moving image encoding method or apparatus described in each of the above embodiments, the driving of the CPU ex502 may be temporarily stopped if there is spare processing capacity. In this case, it is conceivable to set the stop time shorter than in the case where the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
- a plurality of video data that conforms to different standards may be input to the above-described devices and systems such as a television and a mobile phone.
- the signal processing unit ex507 of the LSI ex500 needs to support a plurality of standards in order to be able to decode even when a plurality of video data complying with different standards is input.
- If a signal processing unit ex507 corresponding to each standard is provided individually, there is a problem that the circuit scale of the LSI ex500 increases and the cost increases.
- To address this, a configuration is adopted in which the decoding processing unit for executing the moving picture decoding method shown in each of the above embodiments and a decoding processing unit conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1 are partly shared.
- An example of this configuration is shown as ex900 in FIG. 68A.
- the moving picture decoding method shown in each of the above embodiments and the moving picture decoding method compliant with the MPEG4-AVC standard are processed in processes such as entropy coding, inverse quantization, deblocking filter, and motion compensation. Some contents are common.
- the decoding processing unit ex902 corresponding to the MPEG4-AVC standard is shared, and for other processing contents unique to the present invention that do not correspond to the MPEG4-AVC standard, the dedicated decoding processing unit ex901 is used.
- Configuration is conceivable.
- a dedicated decoding processing unit ex901 is used for inverse transform, and other entropy coding, inverse quantization, deblocking filter, motion, etc. It is conceivable to share the decoding processing unit for any or all of the compensation processes.
- Conversely, the decoding processing unit for executing the moving picture decoding method shown in each of the above embodiments may be shared for the common processing contents, and a dedicated decoding processing unit may be used for processing contents specific to the MPEG4-AVC standard.
- ex1000 in FIG. 68B shows another example in which processing is partially shared.
- a dedicated decoding processing unit ex1001 corresponding to processing content specific to the present invention
- a dedicated decoding processing unit ex1002 corresponding to processing content specific to other conventional standards
- a common decoding processing unit ex1003 corresponding to processing contents common to the moving picture decoding method of the present invention and other conventional moving picture decoding methods is used.
- the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized in the processing content specific to the present invention or other conventional standards, and may be capable of executing other general-purpose processing.
- the configuration of the present embodiment can be implemented by LSI ex500.
- In this way, the circuit scale of the LSI can be reduced and the cost can also be reduced.
- the moving picture coding method and the moving picture decoding method according to the present invention have the effect of being able to flexibly and easily cope with a wide range of compression rates.
- The present invention can be applied to computers, recording/reproducing apparatuses, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
The disclosed video encoding method is capable of easily and flexibly accommodating a wide range of compression ratios. The method involves a step (100) of encoding pictures using inter-picture prediction, a step (102) of entropy encoding the encoded pictures, a step (104) of decoding the encoded pictures using inter-picture prediction, a step (106) of determining a memory compression parameter set, a step (108) of writing the memory compression parameter set into the header of a compressed video stream containing the encoded pictures, a step (110) of compressing the decoded pictures into fixed-size data blocks using the memory compression parameter set, and a step (112) of decompressing the data blocks of the decoded pictures to retrieve the image samples necessary for inter-picture prediction in the encoding of later pictures.
Description
本発明は、マルチメディアデータの符号化、より具体的には、ブロックベースメモリ圧縮および復元方式を用いた動画像の符号化方法および復号化方法などに関する。
The present invention relates to encoding of multimedia data, more specifically, a moving picture encoding method and decoding method using block-based memory compression and decompression methods, and the like.
ビデオエンコーダまたはビデオデコーダを実装する際には、何よりもまず、メモリアクセス帯域幅が大きいことが懸念される。高空間解像度の画像(例えば、1920画素×1080画素以上)または高ダイナミックレンジ(各画像成分に対して1画素あたり8ビットより大きい)に対応する必要があるビデオアプリケーションや、限られた電力消費状態で動作する必要のあるビデオアプリケーションの場合、メモリアクセスの帯域幅を削減することが実装コストおよび電力消費を抑えるキーステップである。
When implementing a video encoder or video decoder, the foremost concern is the large memory access bandwidth. For video applications that need to support high-spatial-resolution images (for example, 1920 pixels by 1080 pixels or larger) or high dynamic range (more than 8 bits per pixel for each image component), and for video applications that need to operate under limited power consumption, reducing the memory access bandwidth is a key step in keeping down implementation cost and power consumption.
ビデオエンコーダおよびデコーダの実装においてメモリアクセス帯域幅を削減する方法の1つは、ピクチャ間予測処理で参照ピクチャとして用いられるピクチャサイズを圧縮することである。参照ピクチャを圧縮してメモリアクセス帯域幅を削減した先行技術はいくつか存在する。
One method of reducing memory access bandwidth in video encoder and decoder implementation is to compress the picture size used as a reference picture in inter-picture prediction processing. There are several prior art techniques that reduce the memory access bandwidth by compressing reference pictures.
解像度サイズ変更方法
特許文献1および特許文献2では、参照ピクチャの空間解像度を低減するためにダウンコンバージョン方式を用いてメモリアクセス帯域幅を削減している。これらの方式の欠点は、元の空間解像度に再構成されたピクチャに含まれる高周波成分が低解像度へのダウンコンバージョン処理中に失われ、アップ変換処理では元に戻すことが不可能なことである。このような方式では、アップ変換されたピクチャの画質は劣化する。 Resolution Size Changing Method InPatent Document 1 and Patent Document 2, the memory access bandwidth is reduced using the down-conversion method in order to reduce the spatial resolution of the reference picture. The disadvantage of these methods is that the high frequency components contained in the picture reconstructed to the original spatial resolution are lost during the down-conversion process to the lower resolution and cannot be restored by the up-conversion process. . In such a system, the picture quality of the up-converted picture is degraded.
特許文献3では、4画像画素ごとに1画素を削除するステップと、1画素あたりのビット数を1ビット削減するステップと、欠落画素の予測方法に関する3ビットの情報を追加するステップとを含むダウンコンバージョン方式が用いられている。この方式の欠点は、1つの画素の情報を失うと、この検出方法からは正しく元に戻せない可能性があることである。その他の欠点は、より高効率で圧縮する場合は、画質がかなり低下することであろう。
Patent Document 3 includes a step of deleting one pixel for every four image pixels, a step of reducing the number of bits per pixel by 1 bit, and a step of adding 3-bit information related to a method for predicting missing pixels. A conversion method is used. The disadvantage of this method is that if one pixel information is lost, this detection method may not be able to restore it correctly. Another drawback is that the image quality will be significantly reduced when compressing with higher efficiency.
画素量子化方法
非特許文献1には、ブロックスカラー量子化方式が記載されている。画素ブロックごとに最小画素値と最大画素値とを算出して記憶する。そして、算出した最小画素値と最大画素値との間でそのブロック内の画素全てを均一に量子化して記憶する。
Pixel Quantization Method
Non-Patent Document 1 describes a block scalar quantization method. The minimum pixel value and the maximum pixel value are calculated and stored for each pixel block. Then, all the pixels in the block are uniformly quantized between the calculated minimum pixel value and maximum pixel value and stored.
非特許文献2には、1次元のDPCM予測構造と非線形の量子化器を用いて8画素サンプルを圧縮する画素量子化方式が記載されている。
Non-Patent Document 2 describes a pixel quantization method that compresses 8-pixel samples using a one-dimensional DPCM prediction structure and a nonlinear quantizer.
画素量子化方式の欠点は、圧縮率が大きいと、かなりはっきりした質の劣化が必ず生じることである。また、この方式では、画素値が大きく変化する画像サンプル群に対して効率よく圧縮できない。
The disadvantage of pixel quantization methods is that, when the compression ratio is large, clearly noticeable quality degradation always occurs. In addition, these methods cannot efficiently compress groups of image samples whose pixel values vary greatly.
解像度サイズ変更方法や画素量子化方法を用いたメモリ圧縮方式の先行技術に関する課題は、これらの方式では幅のある圧縮率に柔軟に対応できないことである。通常、これらは特定の圧縮率に対して設計され、対象圧縮率が高い場合はその効率が低い。画像サンプルを周波数領域へ変換するメモリ圧縮方式では、符号化効率はよくなるが、ブロックごとに対象圧縮ビットを調整することが、特に画像ブロックサイズが小さい場合には困難である。メモリ圧縮処理では、画像サンプルの各ブロックを、周辺ブロックのいかなる情報も参照せずに同じ圧縮率でそれぞれ符号化しなければならず、メモリ圧縮部もあまり複雑にならないように実装する必要がある。
The problem with prior-art memory compression schemes based on resolution resizing or pixel quantization is that these methods cannot flexibly cope with a wide range of compression rates. They are usually designed for one specific compression ratio, and their efficiency is low when the target compression ratio is high. Memory compression schemes that transform the image samples into the frequency domain achieve better coding efficiency, but it is difficult to adjust the target number of compressed bits for each block, particularly when the image block size is small. In the memory compression process, each block of image samples must be encoded at the same compression rate without referring to any information from neighboring blocks, and the memory compression unit must also be implemented without excessive complexity.
そこで、本発明は、かかる課題に鑑みてさなれたものであって、幅のある圧縮率に柔軟に且つ簡単に対応可能な動画像符号化方法および動画像復号化方法などを提供することを目的とする。
Therefore, the present invention has been made in view of these problems, and an object thereof is to provide a moving picture encoding method, a moving picture decoding method, and the like that can flexibly and easily accommodate a wide range of compression rates.
これらの課題を解決するために、本発明に係る動画像符号化方法は、ブロックベースメモリ圧縮および復元方式の動画像符号化方法であって、ピクチャ間予測を用いてピクチャを符号化するステップと、前記符号化ピクチャのエントロピー符号化を行うステップと、ピクチャ間予測を用いて前記符号化ピクチャを復号化するステップと、メモリ圧縮パラメータセットを決定するステップと、前記符号化ピクチャを含む圧縮ビデオストリームのヘッダに前記メモリ圧縮パラメータセットを書き込むステップと、前記メモリ圧縮パラメータセットを用いて、固定サイズのデータブロックに前記復号化ピクチャを圧縮するステップと、前記復号化ピクチャの前記データブロックを復元して、後のピクチャの符号化におけるピクチャ間予測に必要な画像サンプルを検索するステップとを含む。また、本発明に係る動画像復号化方法は、ブロックベースメモリ圧縮および復元方式の動画像復号化方法であって、圧縮ビデオストリームのヘッダを解析して、メモリ圧縮処理を構成するメモリ圧縮パラメータセットを取得するステップと、前記圧縮ビデオストリームのエントロピー復号化を行うステップと、ピクチャ間予測を用いて前記圧縮ビデオストリームからピクチャを復号化するステップと、解析された前記メモリ圧縮パラメータセットを用いて、固定サイズのデータブロックに前記復号化ピクチャを圧縮するステップと、前記復号化ピクチャの前記データブロックを復元して、後のピクチャの復号化におけるピクチャ間予測に必要な画像サンプルを生成するステップと、前記復号化ピクチャの前記データブロックを復元して、出力用の画像サンプルを生成するステップとを含む。
In order to solve these problems, a moving picture encoding method according to the present invention is a block-based memory compression and decompression moving picture encoding method that includes: a step of encoding a picture using inter-picture prediction; a step of entropy encoding the encoded picture; a step of decoding the encoded picture using inter-picture prediction; a step of determining a memory compression parameter set; a step of writing the memory compression parameter set into a header of a compressed video stream containing the encoded picture; a step of compressing the decoded picture into fixed-size data blocks using the memory compression parameter set; and a step of decompressing the data blocks of the decoded picture to retrieve the image samples necessary for inter-picture prediction in the encoding of later pictures. A moving picture decoding method according to the present invention is a block-based memory compression and decompression moving picture decoding method that includes: a step of parsing a header of a compressed video stream to obtain a memory compression parameter set that configures the memory compression process; a step of entropy decoding the compressed video stream; a step of decoding a picture from the compressed video stream using inter-picture prediction; a step of compressing the decoded picture into fixed-size data blocks using the parsed memory compression parameter set; a step of decompressing the data blocks of the decoded picture to generate the image samples necessary for inter-picture prediction in the decoding of later pictures; and a step of decompressing the data blocks of the decoded picture to generate image samples for output.
つまり、本発明では、構成可能な固定長ブロックベースの圧縮方式に対する方法を用いる。1つの新たな圧縮方法では、変換処理と、ビットプレーン符号化と、テーブルを必要としない可変長符号とを用いて、複雑さの少ない実装で、符号ブロックの全ビットを容易に対象圧縮率まで調整する。他の新たな圧縮方法では、構成可能なビット量子化方式を用いる。これら新たな圧縮方法は、異なる圧縮率でも正確に適合するよう、また、応用利用の場合も適合するよう、柔軟に構成されている。
In other words, the present invention uses methods for a configurable fixed-length block-based compression scheme. One new compression method uses a transform process, bit-plane coding, and table-free variable-length codes to easily adjust the total number of bits of a code block to the target compression rate with a low-complexity implementation. Another new compression method uses a configurable bit quantization scheme. These new compression methods are flexibly configurable so that they can be matched precisely to different compression rates and adapted to different application uses.
本発明の新規性は、メモリアクセス帯域幅を削減するためにビデオエンコーダで用いられる新たなメモリ圧縮部のコンフィグレーション設定を、圧縮ビデオストリームをデコードするビデオデコーダに、圧縮ビデオストリーム内の信号で伝えることができることである。ビデオデコーダは、圧縮ビデオストリームからコンフィグレーション設定を読み込み、ビデオエンコーダと同様に参照ピクチャを圧縮するメモリ圧縮部を構成する。参照ピクチャの質を向上させるために、ビデオエンコーダはメモリ圧縮部を構成して、ビデオエンコーダまたはデコーダの複雑さを低減することができる。ビデオデコーダに信号で伝えられるコンフィグレーション設定には、符号ブロックの対象圧縮率、画像ブロックの空間寸法、クロマ成分パッキングモード、係数ブロックの知覚周波数重み付け、DC予測値、係数ブロックの周波数除去パターン、または、係数走査パターンがパラメータとして含まれる。
The novelty of the present invention is that the configuration settings of the new memory compression unit, which is used in the video encoder to reduce the memory access bandwidth, can be signaled within the compressed video stream to the video decoder that decodes that stream. The video decoder reads the configuration settings from the compressed video stream and configures a memory compression unit that compresses reference pictures in the same manner as the video encoder. The video encoder can configure the memory compression unit to improve the quality of the reference pictures or to reduce the complexity of the video encoder or decoder. The configuration settings signaled to the video decoder include, as parameters, the target compression ratio of a code block, the spatial dimensions of an image block, the chroma component packing mode, the perceptual frequency weighting of a coefficient block, the DC prediction value, the frequency removal pattern of a coefficient block, or the coefficient scanning pattern.
なお、本発明は、このような動画像符号化方法および動画像復号化方法として実現することができるだけでなく、それらの方法により動画像を処理する動画像符号化装置、動画像復号化装置、および集積回路、それらの方法により動画像をコンピュータに処理させるプログラム、そのプログラムを格納する記録媒体としても実現することができる。
The present invention can be realized not only as such a moving picture encoding method and moving picture decoding method, but also as a moving picture encoding device, a moving picture decoding device, and an integrated circuit that process moving pictures by these methods, as a program that causes a computer to process moving pictures by these methods, and as a recording medium that stores the program.
本発明は参照ピクチャの画質を固定圧縮率で向上させるので、本発明の効果は符号化効率を改善する点である。また、本発明の効果は、ビデオエンコーダの複雑さ設定とビデオデコーダの複雑さ設定が違っていても相互運用できる点でもある。低複雑性のビデオエンコーダ(メモリアクセスの帯域幅が著しく削減されている)は、ビデオエンコーダおよびデコーダで用いられた参照ピクチャを正確に再構成するビデオデコーダで復号化可能な圧縮ビデオビットストリームを生成することができる。これは、圧縮ビデオストリーム内の符号化された信号を介して、ビデオデコーダのメモリ圧縮部を構成することにより実現される。
Since the present invention improves the picture quality of the reference pictures at a fixed compression rate, one effect of the present invention is improved coding efficiency. Another effect of the present invention is interoperability even when the complexity setting of the video encoder differs from the complexity setting of the video decoder. A low-complexity video encoder, in which the memory access bandwidth is significantly reduced, can generate a compressed video bitstream that can be decoded by a video decoder that accurately reconstructs the reference pictures used by the video encoder and the decoder. This is achieved by configuring the memory compression unit of the video decoder via signals encoded in the compressed video stream.
本発明は、メモリアクセス帯域幅とメモリ記憶部のサイズとを削減する、ビデオエンコーダおよびデコーダの実装に用いられる構成可能なメモリ圧縮およびメモリ復元処理について記載している。
The present invention describes a configurable memory compression and decompression process used to implement video encoders and decoders that reduce memory access bandwidth and memory storage size.
(実施の形態1)
図1は、本発明を用いた動画像符号化処理を示すフローチャートである。モジュール100では、ピクチャ間予測処理を用いてピクチャを符号化する。その後、モジュール102において、符号化されたピクチャをエントロピー符号化する。そして、モジュール104では、ピクチャ間予測を用いてそのピクチャを復号化かつ再構成する。モジュール106において、メモリ圧縮パラメータセットを決定する。実装の複雑さを低減し、かつ、メモリ圧縮処理の圧縮効率を向上させるために、これらのメモリ圧縮パラメータはビデオエンコーダによって決定される。そして、モジュール108では、決定されたメモリ圧縮パラメータを圧縮ビデオストリームのヘッダ内に符号化する。メモリ圧縮パラメータには、ビデオデコーダのメモリ圧縮および復元処理に用いられる、以下のパラメータが1以上含まれる。
・横幅など、入力画像ブロックの寸法
・走査処理において係数の明示型走査パターンを用いるかどうかを示すフラグ
・明示型走査パターンを用いる場合の走査パターン
・圧縮データのブロックが1成分を含むのか、または、3成分を含むのかを示すクロマ成分パッキングモード
・メモリ圧縮処理に対し、係数をいくつか明示的に除去するかどうかを示すフラグ
・係数の明示型除去を用いる場合、どの周波数係数を除去するのかを示すフラグ
・重みを送って特定の周波数係数の重要度を変更するかどうかを示すフラグ
・周波数係数重み付けを用いる場合の係数の重み
・対象圧縮データサイズ
・復元されたサンプルの対象ビット精度
・複数の圧縮方式間の選択
(Embodiment 1)
FIG. 1 is a flowchart showing a moving image encoding process using the present invention.Module 100 encodes a picture using inter-picture prediction processing. Thereafter, in the module 102, the encoded picture is entropy encoded. Module 104 then decodes and reconstructs the picture using inter-picture prediction. In module 106, a memory compression parameter set is determined. These memory compression parameters are determined by the video encoder in order to reduce implementation complexity and improve the compression efficiency of the memory compression process. Then, the module 108 encodes the determined memory compression parameter in the header of the compressed video stream. The memory compression parameter includes one or more of the following parameters used for memory compression and decompression processing of the video decoder.
- Dimensions of the input image block, such as its width
- A flag indicating whether an explicit coefficient scanning pattern is used in the scanning process
- The scanning pattern when an explicit scanning pattern is used
- A chroma component packing mode indicating whether a block of compressed data contains one component or three components
- A flag indicating whether some coefficients are explicitly removed in the memory compression process
- Flags indicating which frequency coefficients are removed when explicit coefficient removal is used
- A flag indicating whether weights are sent to change the importance of particular frequency coefficients
- The coefficient weights when frequency coefficient weighting is used
- The target compressed data size
- The target bit precision of the restored samples
- A selection among multiple compression methods
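Purely as an illustration and not as part of the patent text, the parameter set signaled in the stream header can be pictured as a small record; the Python field names below are hypothetical and simply mirror the list above.

```python
# Illustrative sketch only: one possible in-memory form of the memory compression
# parameter set signaled in the stream header. All field names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MemoryCompressionParams:
    block_width: int = 16                       # dimensions of the input image block
    block_height: int = 4
    explicit_scan: bool = False                 # whether an explicit coefficient scan pattern is used
    scan_pattern: Optional[List[int]] = None    # the scan order when explicit_scan is True
    chroma_packing_mode: int = 0                # 0: one component per data block, otherwise three components packed
    explicit_removal: bool = False              # whether some frequency coefficients are explicitly removed
    removal_flags: Optional[List[bool]] = None  # which frequency coefficients are removed
    use_coeff_weights: bool = False             # whether per-frequency weights are signaled
    coeff_weights: Optional[List[int]] = None   # bit-shift weights applied to the coefficient block
    target_block_size: int = 64                 # target compressed data size per block
    target_bit_depth: int = 8                   # target bit precision of the restored samples
    compression_scheme: int = 0                 # selection among multiple compression methods
```

A real encoder or decoder would, of course, derive these values from the syntax elements actually coded in the stream header.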
モジュール110では、圧縮ビデオストリームのヘッダ内の符号化されたパラメータを1以上用いて、メモリ圧縮処理により、再構成されたピクチャが固定サイズのデータブロックに圧縮される。そして、モジュール112では、圧縮ビデオストリームのヘッダ内の符号化されたパラメータを1以上用いて、メモリ復元処理により、再構成されたピクチャのデータブロックが復元される。復元された画像サンプルは、後のピクチャを符号化するピクチャ間予測処理に用いられる。
In the module 110, the reconstructed picture is compressed into a data block having a fixed size by memory compression processing using one or more encoded parameters in the header of the compressed video stream. Then, in the module 112, the data block of the reconstructed picture is restored by the memory restoration process using one or more encoded parameters in the header of the compressed video stream. The restored image sample is used for inter-picture prediction processing for encoding a subsequent picture.
図2は、本発明を用いた動画像復号化処理を示すフローチャートである。まず、モジュール200では、圧縮ビデオストリームのヘッダを解析してメモリ圧縮処理に必要なパラメータを取得する。メモリ圧縮パラメータには、メモリ圧縮および復元処理に用いられる、以下のパラメータが1以上含まれる。
・横幅など、入力画像ブロックの寸法
・走査処理において係数の明示型走査パターンを用いるかどうかを示すフラグ
・明示型走査パターンを用いる場合の走査パターン
・圧縮データのブロックが1成分を含むのか、または、3成分を含むのかを示すクロマ成分パッキングモード
・メモリ圧縮処理に対し、係数をいくつか明示的に除去するかどうかを示すフラグ
・係数の明示型除去を用いる場合、どの周波数係数を除去するのかを示すフラグ
・重みを送って特定の周波数係数の重要度を変更するかどうかを示すフラグ
・周波数係数重み付けを用いる場合の係数の重み
・対象圧縮データサイズ
・復元されたサンプルの対象ビット精度
・複数の圧縮方式間の選択
FIG. 2 is a flowchart showing a moving picture decoding process using the present invention. First, the module 200 analyzes the header of the compressed video stream and acquires the parameters necessary for the memory compression process. The memory compression parameters include one or more of the following parameters used for the memory compression and decompression processes.
- Dimensions of the input image block, such as its width
- A flag indicating whether an explicit coefficient scanning pattern is used in the scanning process
- The scanning pattern when an explicit scanning pattern is used
- A chroma component packing mode indicating whether a block of compressed data contains one component or three components
- A flag indicating whether some coefficients are explicitly removed in the memory compression process
- Flags indicating which frequency coefficients are removed when explicit coefficient removal is used
- A flag indicating whether weights are sent to change the importance of particular frequency coefficients
- The coefficient weights when frequency coefficient weighting is used
- The target compressed data size
- The target bit precision of the restored samples
- A selection among multiple compression methods
次に、モジュール202では、圧縮ビデオストリームをエントロピー復号化し、モジュール204において、ピクチャ間予測処理を用いてピクチャを復号化する。そして、モジュール206では、解析されたメモリ圧縮パラメータを1以上用いて、メモリ圧縮処理により、復号化ピクチャが固定サイズのデータブロックに圧縮される。モジュール208において、復号化ピクチャのデータブロックは、解析されたパラメータを用いて、メモリ復元処理により復元される。復元された画像サンプルは、後のピクチャを復号化するピクチャ間予測処理に用いられる。最後に、モジュール210において、復号化ピクチャのデータブロックは解析されたパラメータを用いてメモリ復元処理により再び復元され、その復元された画像サンプルが出力される。
Next, the module 202 entropy-decodes the compressed video stream, and the module 204 decodes the picture using inter-picture prediction processing. Then, the module 206 compresses the decoded picture into a fixed-size data block by memory compression processing using one or more of the analyzed memory compression parameters. In module 208, the data block of the decoded picture is restored by the memory restoration process using the analyzed parameters. The restored image sample is used for inter-picture prediction processing for decoding a subsequent picture. Finally, in module 210, the data block of the decoded picture is restored again by the memory restoration process using the analyzed parameters, and the restored picture sample is output.
図3は、本発明を用いたビデオエンコーダの装置(動画像符号化装置)の一例を示すブロック図である。この動画像符号化装置は、減算部300と、変換部302と、量子化部304と、エントロピー符号化部324と、逆量子化部306と、逆変換部308と、加算部310と、フィルタリング部312と、メモリ圧縮部314と、メモリ部316と、メモリ復元部318と、動き検出部320と、動き補間部322とを備える。
FIG. 3 is a block diagram showing an example of a video encoder device (moving image encoding device) using the present invention. This moving image encoding apparatus includes a subtracting unit 300, a converting unit 302, a quantizing unit 304, an entropy encoding unit 324, an inverse quantizing unit 306, an inverse converting unit 308, an adding unit 310, a filtering Unit 312, memory compression unit 314, memory unit 316, memory restoration unit 318, motion detection unit 320, and motion interpolation unit 322.
図3に示すように、減算部300は、元サンプルD300を入力して予測サンプルD328で減算を行い、残余値D302を出力する。変換部302は、その残余値D302を入力して、変換された係数D304を出力する。量子化部304は、その変換係数D304を入力して、量子化された係数D306を出力する。そして、その量子化係数D306は、エントロピー符号化部324によって、圧縮ビデオD330へエントロピー符号化される。逆量子化部306は、量子化係数D306を読み込み、変換された係数D308を出力する。逆変換部308は、係数D308を読み込み、残余値D310を出力する。加算部310は、残余値D310を入力して、ピクチャ間予測値(予測サンプル)D328と加算し、画像サンプルD312を再構成する。フィルタリング部312は、その再構成画像サンプルD312を読み込み、フィルタ処理された画像サンプルD314を出力する。
As shown in FIG. 3, the subtraction unit 300 receives the original sample D300, performs subtraction with the prediction sample D328, and outputs a residual value D302. The conversion unit 302 inputs the residual value D302 and outputs the converted coefficient D304. The quantization unit 304 receives the transform coefficient D304 and outputs a quantized coefficient D306. The quantized coefficient D306 is entropy-encoded into the compressed video D330 by the entropy encoding unit 324. The inverse quantization unit 306 reads the quantization coefficient D306 and outputs the converted coefficient D308. The inverse conversion unit 308 reads the coefficient D308 and outputs a residual value D310. The adder 310 receives the residual value D310, adds it to the inter-picture prediction value (prediction sample) D328, and reconstructs the image sample D312. The filtering unit 312 reads the reconstructed image sample D312 and outputs a filtered image sample D314.
メモリ圧縮部314は、そのフィルタ処理済画像サンプルD314を読み込み、メモリ部316に格納される圧縮データブロックD320を出力する。メモリ圧縮部314で用いたパラメータD316は、エントロピー符号化部324に送られて、圧縮ビデオのヘッダ内に符号化される。これらのパラメータには、以下のパラメータが1以上含まれる。
・横幅など、入力画像ブロックの寸法
・走査処理において係数の明示型走査パターンを用いるかどうかを示すフラグ
・明示型走査パターンを用いる場合の走査パターン
・圧縮データのブロックが1成分を含むのか、または、3成分を含むのかを示すクロマ成分パッキングモード
・メモリ圧縮処理に対し、係数をいくつか明示的に除去するかどうかを示すフラグ
・係数の明示型除去を用いる場合、どの周波数係数を除去するのかを示すフラグ
・重みを送って特定の周波数係数の重要度を変更するかどうかを示すフラグ
・周波数係数重み付けを用いる場合の係数の重み
・対象圧縮データサイズ
・復元されたサンプルの対象ビット精度
・複数の圧縮方式間の選択 Thememory compression unit 314 reads the filtered image sample D314 and outputs a compressed data block D320 stored in the memory unit 316. The parameter D316 used in the memory compression unit 314 is sent to the entropy encoding unit 324 and encoded in the header of the compressed video. These parameters include one or more of the following parameters.
- Dimensions of the input image block, such as its width
- A flag indicating whether an explicit coefficient scanning pattern is used in the scanning process
- The scanning pattern when an explicit scanning pattern is used
- A chroma component packing mode indicating whether a block of compressed data contains one component or three components
- A flag indicating whether some coefficients are explicitly removed in the memory compression process
- Flags indicating which frequency coefficients are removed when explicit coefficient removal is used
- A flag indicating whether weights are sent to change the importance of particular frequency coefficients
- The coefficient weights when frequency coefficient weighting is used
- The target compressed data size
- The target bit precision of the restored samples
- A selection among multiple compression methods
これらのパラメータD318はまた、メモリ圧縮部314からメモリ復元部318へも送られる。メモリ復元部318は、圧縮データブロックD322をメモリ部316から読み込み、復元された画像サンプルD324を動き検出部320に出力する。動き検出部320は、その復元画像サンプルを読み込んで動きベクトルを検出し、動きベクトルと復元画像サンプルD326とを出力する。動き補間部322は、動きベクトルD326と復元画像サンプルD326とを読み込み、ピクチャ間予測サンプルD328を出力する。
These parameters D318 are also sent from the memory compression unit 314 to the memory restoration unit 318. The memory restoration unit 318 reads the compressed data block D322 from the memory unit 316 and outputs the restored image sample D324 to the motion detection unit 320. The motion detection unit 320 reads the restored image sample, detects a motion vector, and outputs the motion vector and the restored image sample D326. The motion interpolation unit 322 reads the motion vector D326 and the restored image sample D326 and outputs an inter-picture prediction sample D328.
図4は、本発明を用いたビデオデコーダの装置(動画像復号化装置)の一例を示すブロック図である。この動画像復号化装置は、エントロピー復号化部400と、逆量子化部402と、逆変換部404と、加算部406と、フィルタリング部416と、動き補間部408と、第1メモリ復元部410と、メモリ圧縮部412と、第2メモリ復元部414と、メモリ部413とを備える。
FIG. 4 is a block diagram showing an example of a video decoder device (video decoding device) using the present invention. The moving picture decoding apparatus includes an entropy decoding unit 400, an inverse quantization unit 402, an inverse transformation unit 404, an addition unit 406, a filtering unit 416, a motion interpolation unit 408, and a first memory restoration unit 410. A memory compression unit 412, a second memory restoration unit 414, and a memory unit 413.
図4に示されているように、エントロピー復号化部400は、圧縮ビデオD400を読み込み、量子化係数D402を出力する。また、エントロピー復号化部400は、圧縮ビデオストリームD400のヘッダからメモリ圧縮パラメータD416を解析する。解析されたパラメータD416には、以下のパラメータが1以上含まれる。
・横幅など、入力画像ブロックの寸法
・走査処理において係数の明示型走査パターンを用いるかどうかを示すフラグ
・明示型走査パターンを用いる場合の走査パターン
・圧縮データのブロックが1成分を含むのか、または、3成分を含むのかを示すクロマ成分パッキングモード
・メモリ圧縮処理に対し、係数をいくつか明示的に除去するかどうかを示すフラグ
・係数の明示型除去を用いる場合、どの周波数係数を除去するのかを示すフラグ
・重みを送って特定の周波数係数の重要度を変更するかどうかを示すフラグ
・周波数係数重み付けを用いる場合の係数の重み
・対象圧縮データサイズ
・復元されたサンプルの対象ビット精度
・複数の圧縮方式間の選択 As shown in FIG. 4, theentropy decoding unit 400 reads the compressed video D400 and outputs a quantized coefficient D402. In addition, the entropy decoding unit 400 analyzes the memory compression parameter D416 from the header of the compressed video stream D400. The analyzed parameter D416 includes one or more of the following parameters.
- Dimensions of the input image block, such as its width
- A flag indicating whether an explicit coefficient scanning pattern is used in the scanning process
- The scanning pattern when an explicit scanning pattern is used
- A chroma component packing mode indicating whether a block of compressed data contains one component or three components
- A flag indicating whether some coefficients are explicitly removed in the memory compression process
- Flags indicating which frequency coefficients are removed when explicit coefficient removal is used
- A flag indicating whether weights are sent to change the importance of particular frequency coefficients
- The coefficient weights when frequency coefficient weighting is used
- The target compressed data size
- The target bit precision of the restored samples
- A selection among multiple compression methods
逆量子化部402は、量子化係数D402を読み込み、変換された係数D404を出力する。逆変換部404は、その変換係数D404を読み込み、残余値D406を出力する。加算部406は、復号化された残余値D406とピクチャ間予測されたサンプルD412とを読み込み、再構成されたサンプルD410を出力する。フィルタリング部416は、その再構成サンプルD410を読み込み、フィルタ処理されたサンプルD424を出力する。
The inverse quantization unit 402 reads the quantization coefficient D402 and outputs the converted coefficient D404. The inverse transform unit 404 reads the transform coefficient D404 and outputs a residual value D406. The adder 406 reads the decoded residual value D406 and the inter-picture predicted sample D412 and outputs a reconstructed sample D410. The filtering unit 416 reads the reconstructed sample D410 and outputs the filtered sample D424.
メモリ圧縮部412は、そのフィルタ処理済サンプルD424と解析されたパラメータD416とを読み込んで、フィルタ処理済画像を圧縮データブロックD420に圧縮し、メモリ部413に格納する。第1メモリ復元部は、解析されたパラメータD416と圧縮データブロックD418とを読み込んでデータブロックを復元し、復元された画像サンプルD414を出力する。動き補間部408は、その復元画像サンプルD414を読み込み、ピクチャ間予測サンプルD412を出力する。最後に、第2メモリ復元部414は、圧縮データブロックD422と解析されたパラメータD416とを読み込み、復元された画像サンプルD426を出力する。
The memory compression unit 412 reads the filtered sample D424 and the analyzed parameter D416, compresses the filtered image into a compressed data block D420, and stores it in the memory unit 413. The first memory restoration unit reads the analyzed parameter D416 and the compressed data block D418, restores the data block, and outputs the restored image sample D414. The motion interpolation unit 408 reads the restored image sample D414 and outputs an inter-picture prediction sample D412. Finally, the second memory restoration unit 414 reads the compressed data block D422 and the analyzed parameter D416, and outputs the restored image sample D426.
図5は、本発明の実施の形態におけるメモリ圧縮処理を示すフローチャートである。まず、モジュール500では、画像サンプルのブロックを画像から抽出する。ブロックの大きさはm画素×n画素であり、mおよびnは1以上の正の整数値である。ブロックの大きさの一例として、16画素×4画素がある。次に、モジュール502では、画像サンプルのブロックを係数のブロックに変換する。モジュール504において、各係数値は、次の(式1)で示されるように、所定の係数重み値に応じてビットシフトされる。
FIG. 5 is a flowchart showing the memory compression processing in the embodiment of the present invention. First, the module 500 extracts a block of image samples from the image. The size of the block is m pixels × n pixels, and m and n are positive integer values of 1 or more. An example of the block size is 16 pixels × 4 pixels. Next, module 502 converts the block of image samples into a block of coefficients. In the module 504, each coefficient value is bit-shifted according to a predetermined coefficient weight value as shown in the following (Equation 1).
係数の重みが正の値である場合は、係数値は左にビットシフトされ、係数の重みが負の値である場合は、係数値は右にビットシフトされる。係数の重みが全て0である場合(つまり、ビットシフトが必要ない場合)は、モジュール504のステップを省略してもよい。
When the coefficient weight is a positive value, the coefficient value is bit-shifted to the left, and when the coefficient weight is a negative value, the coefficient value is bit-shifted to the right. When the coefficient weights are all 0 (that is, when no bit shift is necessary), the module 504 step may be omitted.
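The following is a minimal, runnable sketch of this weighting step and of its inverse in the decompression process, assuming that each weight is simply a per-coefficient bit-shift count as described above; the function names are placeholders, not terms from the patent.

```python
# Sketch of coefficient weighting by bit shifts (Equation 1) and its inverse
# (Equation 2). Positive weights shift left before bit-plane coding; negative
# weights shift right, which is lossy in the low-order bits.
def weight_coefficients(coeffs, weights):
    """Forward shift applied before bit-plane coding (compression side)."""
    return [c << w if w >= 0 else c >> -w for c, w in zip(coeffs, weights)]

def unweight_coefficients(coeffs, weights):
    """Inverse shift applied after bit-plane decoding (decompression side)."""
    return [c >> w if w >= 0 else c << -w for c, w in zip(coeffs, weights)]

if __name__ == "__main__":
    coeffs = [200, -48, 12, 3]
    weights = [2, 1, 0, -1]                          # hypothetical per-frequency weights
    shifted = weight_coefficients(coeffs, weights)
    print(shifted)                                   # [800, -96, 12, 1]
    print(unweight_coefficients(shifted, weights))   # [200, -48, 12, 2] (the negative weight is lossy)
```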
モジュール502の変換処理後には、ブロックの第1(DC)係数は正の値となる。モジュール506において、所定の値(DC値)で第1係数の減算を行い、符号付きの値を算出する。所定値の一例として、第1係数がとり得る最も大きな数と1の値との和の1/2という値がある。
After the conversion process of the module 502, the first (DC) coefficient of the block becomes a positive value. In the module 506, the first coefficient is subtracted by a predetermined value (DC value) to calculate a signed value. As an example of the predetermined value, there is a value that is 1/2 of the sum of the largest number that the first coefficient can take and the value of 1.
次に、モジュール508において、ブロックが2次元であれば、そのブロックの係数は走査パターンによって走査され、その走査処理の後に、係数は1次元配列に並べられる。ブロックの幅と高さが対称性を持つ場合には、走査パターンの一例として、ジグザグ走査がある。
Next, in module 508, if the block is two-dimensional, the coefficients of the block are scanned by the scanning pattern, and after the scanning process, the coefficients are arranged in a one-dimensional array. When the width and height of the block are symmetric, zigzag scanning is an example of the scanning pattern.
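As a sketch only (the patent does not prescribe this code), the following shows one common way to derive a zig-zag scan order for a square coefficient block and to flatten the block into the one-dimensional array consumed by the later steps; when an explicit scan pattern is signaled in the header, that pattern would be used instead.

```python
# Generate a JPEG-style zig-zag scan order for an n x n block and apply it.
def zigzag_order(n):
    """Return the (row, col) positions of an n x n block in zig-zag order."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],                       # walk the anti-diagonals
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def scan(block):
    n = len(block)                  # square block assumed for this illustration
    return [block[r][c] for r, c in zigzag_order(n)]

if __name__ == "__main__":
    block = [[9, 8, 5],
             [7, 6, 3],
             [4, 2, 1]]
    print(scan(block))              # [9, 8, 7, 4, 6, 5, 3, 2, 1]
```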
そして最後に、モジュール510において、走査された係数をビットプレーン符号化処理を用いて符号化する。ビットプレーン符号化処理は、量子化処理を行うことなく対象圧縮サイズを確実に実現するために用いられる。
Finally, in module 510, the scanned coefficients are encoded using a bit-plane encoding process. The bit plane encoding process is used to reliably realize the target compression size without performing the quantization process.
図6は、本発明の実施の形態におけるメモリ復元処理を示すフローチャートである。まず、モジュール600では、ブロックの係数をビットプレーン復号化処理を用いて復号化する。次に、モジュール602において、オフセットビットを欠落ビットプレーンに追加する。ビットプレーンが欠落しているのは、データブロックサイズが制限されるため、メモリ圧縮処理によってビットプレーンは符号化されないからである。また、オフセットビットの追加処理とは、0以外の係数の最後に復号化されたビットプレーンのすぐ下位のビットプレーンに1ビットを追加することである。
FIG. 6 is a flowchart showing a memory restoration process according to the embodiment of the present invention. First, in the module 600, the coefficient of the block is decoded using a bit plane decoding process. Next, in module 602, offset bits are added to the missing bitplane. The bit plane is missing because the data block size is limited, and the bit plane is not encoded by the memory compression process. The offset bit addition process is to add 1 bit to the bit plane immediately below the bit plane decoded at the end of the coefficient other than 0.
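Below is a small runnable sketch of one reasonable realization of the offset-bit step in module 602 (an assumption for illustration, not the patent's normative procedure): for every non-zero coefficient, a single 1 is placed in the bit-plane immediately below the last bit-plane that was actually decoded.

```python
# Offset-bit reconstruction for bit-planes that were never coded because the
# fixed-size data block ran out of room. Adding a 1 in the plane just below the
# last decoded plane centres each non-zero magnitude in its truncation interval.
def add_offset_bits(abs_coeffs, num_missing_planes):
    """abs_coeffs: decoded absolute values whose lowest num_missing_planes
    bit-planes are implicitly zero (values are already scaled into place)."""
    if num_missing_planes == 0:
        return list(abs_coeffs)
    offset = 1 << (num_missing_planes - 1)      # bit-plane just below the last decoded one
    return [c + offset if c != 0 else 0 for c in abs_coeffs]

if __name__ == "__main__":
    decoded = [40, 0, 12, 8]                    # magnitudes with the two lowest planes missing
    print(add_offset_bits(decoded, 2))          # [42, 0, 14, 10]
```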
モジュール604において、係数は、逆走査パターンを用いて係数の2次元ブロックに逆走査される。目的とする出力ブロックの次元が1次元であれば、この逆走査処理は省略される。そして、モジュール606では、ブロックの第1係数を所定の値(DC値)で加算する。そして、モジュール608において、各係数値は、次の(式2)で示されるように、所定の係数重み値に応じてビットシフトされる。
In module 604, the coefficients are back scanned into a two dimensional block of coefficients using a reverse scan pattern. If the dimension of the target output block is one dimension, this reverse scanning process is omitted. The module 606 adds the first coefficient of the block with a predetermined value (DC value). Then, in the module 608, each coefficient value is bit-shifted according to a predetermined coefficient weight value as shown in the following (Equation 2).
係数の重みが正の値である場合は、係数値は右にビットシフトされ、係数の重みが負の値である場合は、係数値は左にビットシフトされる。係数の重みが全て0である場合(つまり、ビットシフトが必要ない場合)は、モジュール608のステップを省略してもよい。
When the coefficient weight is a positive value, the coefficient value is bit-shifted to the right, and when the coefficient weight is a negative value, the coefficient value is bit-shifted to the left. If the coefficient weights are all 0 (ie, no bit shift is required), the module 608 step may be omitted.
そして、モジュール610において、係数のブロックを画像サンプルのブロックに逆変換する。モジュール612では、各画素つまり画像サンプル値を所定の画素シフト値分右にビットシフトする。画素シフト値は、画像サンプルのビット深度を復元された画像サンプルの対象ビット深度で減算することによって決定される。ビットシフト処理は、以下の(式3)により行われる。
Then, in module 610, the block of coefficients is inversely transformed into a block of image samples. In module 612, each pixel, that is, the image sample value is bit-shifted to the right by a predetermined pixel shift value. The pixel shift value is determined by subtracting the bit depth of the image sample by the target bit depth of the restored image sample. The bit shift process is performed according to the following (Equation 3).
最後に、モジュール614において、各画素値は、最大値と最小値の間の値にクリッピングされる。最小値とは0であり、最大値とは対象ビット深度がとり得る最大の値のことである。
Finally, in module 614, each pixel value is clipped to a value between the maximum and minimum values. The minimum value is 0, and the maximum value is the maximum value that the target bit depth can take.
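Assuming that Equation 3 is the plain right shift described above, modules 612 and 614 can be sketched together as the following runnable fragment; the function name and arguments are placeholders chosen for this illustration.

```python
# Shift restored samples down to the target bit depth (Equation 3) and clip them
# to the valid range [0, 2^target_bit_depth - 1].
def to_target_depth(samples, source_bit_depth, target_bit_depth):
    shift = source_bit_depth - target_bit_depth     # predetermined pixel shift value
    max_value = (1 << target_bit_depth) - 1
    out = []
    for s in samples:
        s >>= shift                                  # module 612: right shift
        out.append(min(max(s, 0), max_value))        # module 614: clipping
    return out

if __name__ == "__main__":
    # 10-bit internal samples restored as 8-bit output samples.
    print(to_target_depth([1023, 512, -3, 200], 10, 8))   # [255, 128, 0, 50]
```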
図7は、本発明のメモリ圧縮部の装置の一例を示すブロック図である。このメモリ圧縮部は、変換部700と、ビットシフト部702と、DC減算部704と、走査部706と、ビットプレーン符号化部708とを備える。他の装置例では、ビットシフト部702は任意でよい。
FIG. 7 is a block diagram showing an example of the memory compression unit of the present invention. The memory compression unit includes a conversion unit 700, a bit shift unit 702, a DC subtraction unit 704, a scanning unit 706, and a bit plane encoding unit 708. In other device examples, the bit shift unit 702 may be optional.
図7に示すように、変換部700は、画像サンプルのブロックD700とそのブロックの寸法とを読み込んで変換処理を行い、変換された係数のブロックD716を出力する。ビットシフト部702(装置に備えられていれば)は、変換された係数のブロックD716と係数重み値D704とを読み込んでビットシフト処理を行い、係数のブロックをDC減算部704に出力する。DC減算部704は、係数のブロックと所定のDC値D706とを読み込み、新たなDC係数値を有する係数のブロックを出力する。走査部706は、係数のブロックD720と走査パターンD708と係数除去フラグセットD710とを読み込み、係数の1次元配列D722と係数の最大数を表す値D724とを出力する。係数の最大数は、係数除去フラグセットで除去されない係数の数をカウントすることによって算出できる。
7, the conversion unit 700 reads the image sample block D700 and the size of the block, performs conversion processing, and outputs a converted coefficient block D716. The bit shift unit 702 (if provided in the apparatus) reads the converted coefficient block D716 and the coefficient weight value D704, performs bit shift processing, and outputs the coefficient block to the DC subtraction unit 704. The DC subtraction unit 704 reads a coefficient block and a predetermined DC value D706, and outputs a coefficient block having a new DC coefficient value. The scanning unit 706 reads the coefficient block D720, the scanning pattern D708, and the coefficient removal flag set D710, and outputs a one-dimensional array D722 of coefficients and a value D724 representing the maximum number of coefficients. The maximum number of coefficients can be calculated by counting the number of coefficients that are not removed by the coefficient removal flag set.
最後に、ビットプレーン符号化部708は、係数の配列D722と係数の最大数を表す値D724と対象圧縮ブロックサイズD712とクロマ成分パッキングモード値D714とを読み込み、圧縮データのブロックD726を出力する。
Finally, the bit plane encoding unit 708 reads the coefficient array D722, the value D724 indicating the maximum number of coefficients, the target compressed block size D712, and the chroma component packing mode value D714, and outputs the compressed data block D726.
図8は、本発明を用いたメモリ復元部の装置の一例を示すブロック図である。このメモリ復元部は、ビットプレーン復号化部800と、走査部804と、DC加算部806と、第1ビットシフト部808と、逆変換部810と、第2ビットシフト部812と、クリッピング部814とを備える。本発明の他の装置例では、2つのビットシフト部は任意でよい。
FIG. 8 is a block diagram showing an example of the memory restoration unit using the present invention. The memory restoration unit includes a bit plane decoding unit 800, a scanning unit 804, a DC addition unit 806, a first bit shift unit 808, an inverse conversion unit 810, a second bit shift unit 812, and a clipping unit 814. With. In another example of the present invention, the two bit shift units may be arbitrary.
ビットプレーン復号化部800は、圧縮データブロックD800とブロック内の係数の最大数を表す値D804とクロマ成分パッキングモードD802とを読み込み、係数の配列D822を出力する。走査部804は、係数の配列D822と走査パターンD806と係数除去フラグセットD808とを読み込み、係数のブロックD824を出力する。DC加算部806は、係数のブロックD824とDC値D810とを読み込み、新たなDC係数値を有する係数のブロックD826を出力する。第1ビットシフト部808は、係数のブロックD826と係数重み値のブロックD812とを読み込み、修正された係数値のブロックD828を出力する。次に、逆変換部810は、修正された係数値のブロックD828とブロックの寸法D814とを読み込み、画像サンプルのブロックD830を出力する。第2ビットシフト部812は、画像サンプルのブロックD830と1画像サンプルあたりのビット精度を表す値D818とを読み込み、調節された画像サンプルのブロックD832を出力する。最後に、クリッピング部814は、画像サンプルのブロックD832と1画像サンプルあたりのビット精度を表す値D818とを読み込んで、画像サンプル値の範囲をビット精度内にクリッピングし、復元された画像サンプルのブロックD834を出力する。
The bit plane decoding unit 800 reads the compressed data block D800, the value D804 representing the maximum number of coefficients in the block, and the chroma component packing mode D802, and outputs the coefficient array D822. The scanning unit 804 reads the coefficient array D822, the scanning pattern D806, and the coefficient removal flag set D808, and outputs a coefficient block D824. The DC adder 806 reads the coefficient block D824 and the DC value D810, and outputs a coefficient block D826 having a new DC coefficient value. The first bit shift unit 808 reads the coefficient block D826 and the coefficient weight value block D812, and outputs the modified coefficient value block D828. Next, the inverse transform unit 810 reads the modified coefficient value block D828 and the block dimension D814, and outputs an image sample block D830. The second bit shift unit 812 reads the image sample block D830 and the value D818 representing the bit precision per image sample, and outputs the adjusted image sample block D832. Finally, the clipping unit 814 reads the image sample block D832 and the value D818 representing the bit accuracy per image sample, clips the range of the image sample values within the bit accuracy, and restores the restored image sample block D834 is output.
図9は、本発明のビデオエンコーダにおけるメモリ圧縮部のビットプレーン符号化処理を示すフローチャートである。モジュール900において、クロマ成分パッキングモードを選択する。複雑さの少ないビデオエンコーダの実装では、クロマ成分パッキングモードを0として選択してもよい。次に、モジュール902において、選択されたクロマ成分パッキングモードを圧縮ビデオストリームのヘッダに書き込む。モジュール904では、比較を行って、選択されたクロマ成分パッキングモードが0に設定されているかどうかを確認する。モジュール904において、選択されたクロマ成分パッキングモードが0に等しければ、輝度サンプルおよびクロマサンプルを異なる圧縮データブロックにそれぞれ圧縮する。クロマ成分パッキングモードが0でない場合は、モジュール908において、輝度サンプルおよびクロマサンプルを同時に1つの圧縮データブロックに圧縮する。
FIG. 9 is a flowchart showing the bit plane encoding process of the memory compression unit in the video encoder of the present invention. In module 900, a chroma component packing mode is selected. In a video encoder implementation with less complexity, the chroma component packing mode may be selected as zero. Next, in module 902, the selected chroma component packing mode is written into the header of the compressed video stream. In module 904, a comparison is made to see if the selected chroma component packing mode is set to zero. In module 904, if the selected chroma component packing mode is equal to 0, the luminance and chroma samples are each compressed into different compressed data blocks. If the chroma component packing mode is not 0, module 908 compresses the luminance and chroma samples simultaneously into one compressed data block.
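A minimal sketch of the branch controlled by the chroma component packing mode on the encoder side, assuming a trivial stand-in for the block compressor; compress_block below only truncates and pads to the fixed size and is not the patent's bit-plane coder.

```python
# Chroma component packing branch: mode 0 compresses luma and chroma samples into
# separate fixed-size data blocks, any other mode packs them into one data block.
def compress_block(samples, target_bytes):
    data = bytes(min(max(s, 0), 255) for s in samples)       # stand-in "compressor"
    return data[:target_bytes].ljust(target_bytes, b"\x00")  # fixed-size data block

def compress_image_block(luma, cb, cr, chroma_packing_mode, target_bytes):
    if chroma_packing_mode == 0:
        return [compress_block(luma, target_bytes),
                compress_block(cb, target_bytes),
                compress_block(cr, target_bytes)]
    return [compress_block(luma + cb + cr, target_bytes)]

if __name__ == "__main__":
    luma, cb, cr = [16] * 8, [128] * 4, [130] * 4
    print(len(compress_image_block(luma, cb, cr, chroma_packing_mode=0, target_bytes=8)))  # 3 blocks
    print(len(compress_image_block(luma, cb, cr, chroma_packing_mode=1, target_bytes=8)))  # 1 block
```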
図10は、本発明のビデオデコーダにおけるメモリ圧縮部のビットプレーン符号化処理を示すフローチャートである。まず、モジュール1000において、クロマ成分パッキングモードを圧縮ビデオストリームのヘッダから読み込む。モジュール1002では、比較を行って、クロマ成分パッキングモードが0と等しいかどうかを判断する。クロマ成分パッキングモードが0に等しければ、モジュール1004において、輝度サンプルおよびクロマサンプルを異なる圧縮データブロックにそれぞれ圧縮する。クロマ成分パッキングモードが0でない場合は、モジュール1006において、輝度サンプルおよびクロマサンプルを同時に1つの圧縮データブロックに圧縮する。
FIG. 10 is a flowchart showing the bit plane encoding process of the memory compression unit in the video decoder of the present invention. First, in the module 1000, the chroma component packing mode is read from the header of the compressed video stream. Module 1002 performs a comparison to determine if the chroma component packing mode is equal to zero. If the chroma component packing mode is equal to 0, module 1004 compresses the luminance and chroma samples into different compressed data blocks, respectively. If the chroma component packing mode is not 0, module 1006 compresses the luminance and chroma samples simultaneously into one compressed data block.
図11は、本発明のビデオエンコーダおよびデコーダにおけるメモリ復元部のビットプレーン復号化処理を示すフローチャートである。まず、モジュール1100において、クロマ成分パッキングモードを決定する。このクロマ成分パッキングモードは、メモリ圧縮部のビットプレーン符号化処理で用いられたクロマ成分パッキングモードと同じである。次に、モジュール1102では、比較を行って、クロマ成分パッキングモードが0と等しいかどうかを判断する。クロマ成分パッキングモードが0に等しければ、モジュール1104において、固定長データブロックを別々に復元して輝度係数およびクロマ係数をそれぞれ生成する。クロマ成分パッキングモードが0でない場合は、モジュール1106において、各ビットプレーンを復元して輝度係数とクロマ係数を同時に生成する。
FIG. 11 is a flowchart showing the bit plane decoding process of the memory restoration unit in the video encoder and decoder of the present invention. First, in the module 1100, the chroma component packing mode is determined. This chroma component packing mode is the same as the chroma component packing mode used in the bit plane encoding process of the memory compression unit. Module 1102 then performs a comparison to determine if the chroma component packing mode is equal to zero. If the chroma component packing mode is equal to 0, in module 1104, the fixed-length data block is restored separately to generate a luminance coefficient and a chroma coefficient, respectively. If the chroma component packing mode is not 0, the module 1106 restores each bit plane to generate a luminance coefficient and a chroma coefficient at the same time.
図12A~図12Cは、圧縮ビデオストリームのヘッダ内におけるクロマ成分パッキングモードの位置候補を示す。図12Aは、圧縮ビデオストリームのシーケンスヘッダ内におけるクロマ成分パッキングモードパラメータの位置を示す。図12Bは、圧縮ビデオストリームのピクチャヘッダ内におけるクロマ成分パッキングモードパラメータの位置を示す。図12Cは、圧縮ビデオストリームのシーケンスヘッダ内の符号化されたプロファイルパラメータ、レベルパラメータ、またはその両方に基づき、クロマ成分パッキングモードパラメータが検索テーブルから導出できることを示している。クロマ成分パッキングモードパラメータを用いるメモリ圧縮処理は、図12A、12B、および12Cに記載されたビデオストリームから復号化して再構成された画像サンプルを圧縮する。
FIG. 12A to FIG. 12C show the position candidates of the chroma component packing mode in the header of the compressed video stream. FIG. 12A shows the position of the chroma component packing mode parameter in the sequence header of the compressed video stream. FIG. 12B shows the position of the chroma component packing mode parameter in the picture header of the compressed video stream. FIG. 12C shows that the chroma component packing mode parameters can be derived from the lookup table based on the encoded profile parameters, level parameters, or both in the sequence header of the compressed video stream. The memory compression process using the chroma component packing mode parameter compresses the reconstructed image samples decoded from the video streams described in FIGS. 12A, 12B, and 12C.
図13は、1成分に対するビットプレーン符号化処理を示すフローチャートである。この処理は、クロマ成分パッキングモードが0に等しい場合に行われる。まず、モジュール1300において、絶対値と符号ビット値とに係数値を変換する。符号ビットの0の値は正の係数値を表し、符号ビットの1の値は負の係数値を表す。次に、モジュール1302において、絶対値をビットプレーンに変換する。そして、モジュール1304では、係数値セットのうちの最大絶対係数値を符号化するのに必要なビットプレーンの数を見つけることによってビットプレーン値の最大数を算出する。そして、モジュール1306において、ビットプレーンの算出された最大値を、固定ビット数を用いて圧縮データブロックに書き込む。最後に、モジュール1308では、圧縮データブロックが完全に埋まるまで、最上位ビットプレーンから順次、ビットプレーンと符号ビットとを符号化する。
FIG. 13 is a flowchart showing a bit plane encoding process for one component. This process is performed when the chroma component packing mode is equal to zero. First, in the module 1300, the coefficient value is converted into an absolute value and a sign bit value. A value of 0 for the sign bit represents a positive coefficient value, and a value of 1 for the sign bit represents a negative coefficient value. Next, in the module 1302, the absolute value is converted into a bit plane. The module 1304 then calculates the maximum number of bitplane values by finding the number of bitplanes necessary to encode the maximum absolute coefficient value in the coefficient value set. Then, in module 1306, the calculated maximum value of the bit plane is written into the compressed data block using a fixed number of bits. Finally, module 1308 encodes bit planes and code bits sequentially from the most significant bit plane until the compressed data block is completely filled.
図14は、係数値からビットプレーンへの変換処理を示す概略図である。概略図に示すように、係数値D1400は、バイナリ形式の絶対値と符号ビットD1402とに変換される。そして、最終的には、バイナリ値の上位ビットプレーンを切り取ることによってビットプレーンD1404が形成される。
FIG. 14 is a schematic diagram showing conversion processing from coefficient values to bit planes. As shown in the schematic diagram, the coefficient value D1400 is converted into an absolute value in binary format and a sign bit D1402. Finally, the bit plane D1404 is formed by cutting out the upper bit plane of the binary value.
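The conversion of FIG. 14 can be sketched as the runnable fragment below, under the sign convention stated above (0 for a positive coefficient, 1 for a negative one); it is an illustration only and not code from the patent.

```python
# Split coefficients into sign bits and absolute values, then slice the absolute
# values into bit-planes starting from the most significant plane.
def to_sign_and_abs(coeffs):
    return [(1 if c < 0 else 0, abs(c)) for c in coeffs]

def to_bitplanes(abs_values):
    num_planes = max(abs_values).bit_length() if any(abs_values) else 0
    planes = [[(v >> p) & 1 for v in abs_values]
              for p in range(num_planes - 1, -1, -1)]        # MSB plane first
    return num_planes, planes

if __name__ == "__main__":
    signs_abs = to_sign_and_abs([5, -3, 0, 2])
    print(signs_abs)                                  # [(0, 5), (1, 3), (0, 0), (0, 2)]
    n, planes = to_bitplanes([v for _, v in signs_abs])
    print(n, planes)                                  # 3 [[1, 0, 0, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
```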
図15は、画像の1成分に対する圧縮データ単位のデータ構造を示す概略図である。図に示すように、圧縮データ単位には、有効プレーンの数を表す値D1500が含まれており、最上位ビットプレーンから始まる圧縮ビットプレーンD1502がその後に続く。各圧縮ビットプレーンには、ビットプレーン内の最上位ビットの数を表す値D1510と、ランおよび符号ビットのペア(D1526およびD1528)と、ビットプレーンが最上位ビットプレーンでない場合にはバイナリシンボルセットD1530とが含まれる。
FIG. 15 is a schematic diagram showing a data structure of a compressed data unit for one component of an image. As shown in the figure, the compressed data unit includes a value D1500 representing the number of valid planes, followed by a compressed bit plane D1502 starting from the most significant bit plane. Each compressed bitplane includes a value D1510 representing the number of most significant bits in the bitplane, a run and sign bit pair (D1526 and D1528), and a binary symbol set D1530 if the bitplane is not the most significant bitplane. And are included.
図16は、画像の3成分に対するビットプレーン符号化処理を示すフローチャートである。この処理は、クロマ成分パッキングモードが1に等しい場合に行われる。図に示すように、モジュール1600では、絶対値と符号ビットとに3成分全ての係数を変換する。符号ビットの0の値は正の係数値を表し、符号ビットの1の値は負の係数値を表す。次に、モジュール1602において、絶対値を、各成分のビットプレーンに変換する。そして、モジュール1604では、各成分において、係数値セットのうちの最大絶対係数値を符号化するのに必要なビットプレーンの数を見つけることによってビットプレーン値の最大数を算出する。そして、モジュール1606において、各成分のビットプレーンの算出された最大値を、固定ビット数を用いて圧縮データブロックに連続して書き込む。最後に、モジュール1608では、圧縮データブロックが完全に埋まるまで、3成分のうちの最上位ビットプレーンから順次連続して、各成分のビットプレーンと符号ビットとを符号化する。
FIG. 16 is a flowchart showing a bit plane encoding process for three components of an image. This process is performed when the chroma component packing mode is equal to 1. As shown, the module 1600 converts all three component coefficients into absolute values and sign bits. A value of 0 for the sign bit represents a positive coefficient value, and a value of 1 for the sign bit represents a negative coefficient value. Next, in the module 1602, the absolute value is converted into a bit plane of each component. The module 1604 then calculates the maximum number of bit plane values by finding the number of bit planes required to encode the maximum absolute coefficient value of the coefficient value set for each component. Then, in module 1606, the calculated maximum value of the bit plane of each component is continuously written into the compressed data block using a fixed number of bits. Finally, the module 1608 encodes the bit plane and the sign bit of each component sequentially from the most significant bit plane of the three components until the compressed data block is completely filled.
図17は、画像の3成分に対する圧縮データ単位のデータ構造を示す概略図である。図に示すように、圧縮データ単位には、各成分の有効プレーンの数をそれぞれ表す値D1700、S1702、およびD1704が含まれており、最上位ビットプレーンから始まる圧縮ビットプレーンD1702がその後に続く。各圧縮ビットプレーンは複数の成分のうち1つを含み、各成分のプレーンには、ビットプレーン内の最上位ビットの数を表す値D1720と、ランおよび符号ビットのペア(D1726およびD1728)と、ビットプレーンが最上位ビットプレーンでない場合にはバイナリシンボルセットD1730とが含まれる。
FIG. 17 is a schematic diagram showing a data structure of a compressed data unit for three components of an image. As shown in the figure, the compressed data unit includes values D1700, S1702, and D1704 respectively representing the number of effective planes of each component, followed by a compressed bitplane D1702 starting from the most significant bitplane. Each compressed bit plane includes one of a plurality of components, each component plane including a value D1720 representing the number of most significant bits in the bitplane, a run and sign bit pair (D1726 and D1728), If the bit plane is not the most significant bit plane, a binary symbol set D1730 is included.
図18は、画像の1成分に対するビットプレーン復号化処理を示すフローチャートである。まず、モジュール1800において、ビットプレーンの最大数を表す値を圧縮データ単位から固定ビット数を用いて読み込む。次に、モジュール1802において、各ビットプレーンと符号ビットとを最上位ビットプレーンから順次復号化する。データ単位は圧縮されているため、全てのビットプレーンがデータ単位から成功裏に復号化されるわけではない。0でない各係数の成功裏に最後に復号化されたビットプレーンのすぐ下位のビットプレーンに、値が1であるオフセットビットを追加する。このステップは、欠落ビットプレーンが原因となるエラーを削減するためのものである。そして、モジュール1804において、係数の絶対値をビットプレーンから再構成する。最後に、モジュール1806において、絶対値と符号ビットとを符号付の係数値に変換することによって係数値を再構成する。
FIG. 18 is a flowchart showing a bit plane decoding process for one component of an image. First, in module 1800, a value representing the maximum number of bit planes is read from the compressed data unit using a fixed number of bits. Next, in module 1802, each bit plane and code bit are sequentially decoded from the most significant bit plane. Since the data unit is compressed, not all bit planes are successfully decoded from the data unit. An offset bit with a value of 1 is added to the bit plane immediately below the last successfully decoded bit plane for each non-zero coefficient. This step is to reduce errors caused by missing bitplanes. Then, in the module 1804, the absolute value of the coefficient is reconstructed from the bit plane. Finally, in module 1806, the coefficient value is reconstructed by converting the absolute value and the sign bit into a signed coefficient value.
図19は、画像の3成分に対するビットプレーン復号化処理を示すフローチャートである。まず、モジュール1900において、各成分のビットプレーンの最大数を表す値を圧縮データ単位から固定ビット数を用いて連続して読み込む。次に、モジュール1902において、3成分のビットプレーンと符号ビットとを、3成分の最上位ビットプレーンから順次復号化する。データ単位は圧縮されているため、全てのビットプレーンがデータ単位から成功裏に復号化されるわけではない。0でない各係数の成功裏に最後に復号化されたビットプレーンのすぐ下位のビットプレーンに、値が1であるオフセットビットを追加する。このステップは、欠落ビットプレーンが原因となるエラーを削減するためのものである。そして、モジュール1904において、係数の絶対値をビットプレーンから再構成する。最後に、モジュール1906において、絶対値と符号ビットとを符号付の係数値に変換することによって3成分の係数値を再構成する。
FIG. 19 is a flowchart showing a bit plane decoding process for three components of an image. First, in the module 1900, a value representing the maximum number of bit planes of each component is continuously read from the compressed data unit using a fixed number of bits. Next, in module 1902, the three-component bit plane and the sign bit are sequentially decoded from the three-component most significant bit plane. Since the data unit is compressed, not all bit planes are successfully decoded from the data unit. An offset bit with a value of 1 is added to the bit plane immediately below the last successfully decoded bit plane for each non-zero coefficient. This step is to reduce errors caused by missing bitplanes. Then, in the module 1904, the absolute value of the coefficient is reconstructed from the bit plane. Finally, in module 1906, the three-component coefficient values are reconstructed by converting the absolute values and sign bits into signed coefficient values.
FIG. 20 is a flowchart showing the encoding process for one bit plane. In module 2000, the number of most significant bits in the bit plane to be encoded is calculated; this number determines how many run and sign bit pairs are encoded. In module 2002, a comparison is made to check whether the bit plane to be encoded is the most significant bit plane. Since the most significant bit plane must contain at least one most significant bit, if the bit plane to be encoded is the most significant bit plane, the number of most significant bits to be encoded is decremented by 1 in module 2004. Then, in module 2006, the resulting number of most significant bits is written to the compressed bit plane using variable-length coding. Next, in module 2008, the run values of the most significant bit positions are calculated by excluding the coefficient positions whose most significant bits were found in the already processed bit planes. Next, in module 2010, the sign bit values at these most significant bit positions are determined, and each run value and sign bit value is encoded as a pair, using variable-length coding for the run parameter and one bit for the sign value. Finally, in module 2012, the higher-order bits at the remaining positions, that is, the positions whose most significant bits appeared in the bit planes preceding the bit plane being encoded, are encoded as binary symbols.
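The following is a hedged, non-normative sketch of the per-bit-plane encoding steps described above (MSB count, run/sign pairs, refinement bits). The symbolic (kind, value) output stands in for the actual variable-length and fixed-length coding, and the exact ordering of the refinement bits is an assumption.

```python
def encode_bitplane(coeffs, plane, is_top_plane, significant):
    """coeffs: signed integer coefficients; plane: bit index (0 = least significant);
    significant: set of positions whose MSB appeared in a higher plane (updated in place)."""
    out = []
    new_msb = {i for i, c in enumerate(coeffs)
               if abs(c) >> plane == 1 and i not in significant}
    count = len(new_msb)
    out.append(("msb_count", count - 1 if is_top_plane else count))
    run = 0
    for i in range(len(coeffs)):
        if i in significant:
            continue                       # already-significant positions are excluded from runs
        if i in new_msb:
            out.append(("run", run))       # run of insignificant positions before this new MSB
            out.append(("sign", 1 if coeffs[i] < 0 else 0))
            run = 0
        else:
            run += 1
    for i in sorted(significant):          # refinement bits of already-significant coefficients
        out.append(("refine", (abs(coeffs[i]) >> plane) & 1))
    significant.update(new_msb)
    return out
```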
FIG. 21 is a flowchart showing the decoding process for one bit plane. First, in module 2100, a number representing the number of most significant bits in the bit plane to be decoded is read from the compressed data block. In module 2102, a comparison is made to determine whether the bit plane to be decoded is the most significant bit plane of the block. If it is, 1 is added to the decoded number of most significant bits in module 2104. The resulting number of most significant bits also indicates how many run and sign bit pairs are to be decoded. In module 2106, the run and sign bit pairs are decoded, using variable-length coding for the run parameter and one bit for the sign bit parameter. Finally, in module 2108, the higher-order bits at the other positions, that is, the positions whose most significant bits are present in the already decoded bit planes, are decoded using fixed-length coding.
FIG. 22 shows examples of variable-length codes for encoding the run parameter and the number-of-most-significant-bits parameter. FIGS. 22A and 22B are examples of variable-length codes that can be decoded easily without a lookup table. An example of the variable-length code used to encode the run parameter is shown in FIG. 22A, and an example of the variable-length code used to encode the number-of-most-significant-bits parameter is shown in FIG. 22B. The variable-length code shown in FIG. 22A is a 0th-order Exponential-Golomb code.
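Since the code of FIG. 22A is described as a 0th-order Exponential-Golomb code, a small reference implementation of that code may be helpful; this sketch works on bit strings for readability and is not tied to the patent's bitstream format.

```python
def exp_golomb_encode(value):
    """Encode a non-negative integer as a 0th-order Exponential-Golomb codeword."""
    code = bin(value + 1)[2:]            # binary of value+1 without the '0b' prefix
    return "0" * (len(code) - 1) + code  # prefix of zeros, one fewer than the suffix length

def exp_golomb_decode(bits, pos=0):
    """Decode one codeword starting at bit offset pos; return (value, next_pos)."""
    zeros = 0
    while bits[pos + zeros] == "0":
        zeros += 1
    value = int(bits[pos + zeros: pos + 2 * zeros + 1], 2) - 1
    return value, pos + 2 * zeros + 1

# e.g. exp_golomb_encode(0) == "1", exp_golomb_encode(3) == "00100"
```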
FIG. 23 shows an example of encoding one bit plane that has only four coefficients. As shown in the schematic diagram, the compressed bit plane includes a number-of-most-significant-bits parameter D2306, a run and sign bit pair (D2308 and D2310), and a binary symbol D2316.
FIG. 24 is a flowchart showing the explicit scan process for the memory compression unit and the memory restoration unit in the video encoder of the present invention. First, in module 2400, a new scan pattern value is determined. One example of a method for determining the best scan pattern includes a step of creating the pattern based on the transform coefficients of the uncompressed original image. Next, in module 2402, the explicit signal scan pattern flag is set to a value of 1, and in module 2404 it is written into the header of the compressed video stream. Then, in module 2406, the new scan pattern value is written into the header of the compressed video stream. In the memory compression process, module 2408 scans the transform coefficients of the image reconstructed from the compressed video stream in the order indicated by the new scan pattern. In the memory restoration process, module 2410 determines the inverse scan pattern from the scan pattern, and finally, in module 2412, the coefficients are inverse-scanned using the derived inverse scan pattern.
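For illustration, the sketch below shows how a signalled scan pattern can be applied and how the inverse scan is derived from it; the 4x4 zigzag table is only an assumed example default, not a pattern defined by the specification.

```python
def scan(block_flat, pattern):
    """pattern[k] = index in the block that is output at scan position k."""
    return [block_flat[p] for p in pattern]

def inverse_pattern(pattern):
    """Derive the inverse scan: inv[i] = scan position of block index i."""
    inv = [0] * len(pattern)
    for k, p in enumerate(pattern):
        inv[p] = k
    return inv

def inverse_scan(scanned, pattern):
    inv = inverse_pattern(pattern)
    return [scanned[inv[i]] for i in range(len(scanned))]

# a common 4x4 zigzag pattern, used here only as an illustrative default
ZIGZAG_4x4 = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
```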
FIG. 25 is a flowchart showing the adaptive scan process for the memory compression unit and the memory restoration unit in the video decoder of the present invention. First, in module 2500, the explicit signal scan pattern flag is read from the header of the compressed video stream. In module 2502, the explicit signal scan pattern flag is compared with the value 1. If the flag is equal to 1, a new scan pattern value is read from the header of the compressed video stream in module 2506. If the flag is not equal to 1, the scan pattern is set to a predetermined value in module 2504; when the image samples form a symmetric block size, one example of the predetermined scan pattern is a zigzag scan pattern. In the memory compression process, module 2508 scans the transform coefficients of the image decoded from the compressed video stream in the order indicated by the scan pattern. In the memory restoration process, module 2510 determines the inverse scan pattern from the scan pattern, and finally, in module 2512, the coefficients are inverse-scanned using the derived inverse scan pattern.
FIGS. 26A and 26B show candidate positions of the explicit signal scan pattern flag and the scan pattern value in the header of the compressed video stream. FIG. 26A shows their positions in the sequence header of the compressed video stream, and FIG. 26B shows their positions in the picture header. The memory compression process that uses the explicit signal scan pattern flag and the scan pattern value compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 26A and 26B.
FIG. 27 is a flowchart showing the selective scan process for the memory compression unit and the memory restoration unit in the video encoder of the present invention. First, in module 2700, a set of coefficient removal flag values is determined. One example of setting the coefficient removal flags is to set them at higher-frequency positions so that those positions are removed. In module 2702, the coefficient removal presence flag is set to a value of 1. Next, in module 2704, the coefficient removal presence flag is written into the header of the compressed video stream, and in module 2706 the determined set of coefficient removal flags is written into the same header. The set of coefficient removal flags determines which coefficients are removed from the scan process: positions whose coefficient removal flag is set to 1 are skipped during scanning. Thus, in module 2708, the memory compression process skips coefficient positions whose removal flag is 1 during the scan process, and likewise, in module 2710, the memory restoration process skips those positions during the inverse scan process. Then, in module 2712, the memory restoration process sets the coefficients at the positions whose removal flag is 1 to 0.
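A minimal sketch of the selective scan follows, assuming the coefficient removal flags are given in scan order; removed positions are skipped on compression and filled with zeros on restoration, as described above. The flag values used in the example are arbitrary.

```python
def selective_scan(coeffs, removal_flags):
    """Keep only positions whose removal flag is 0, in scan order."""
    return [c for c, f in zip(coeffs, removal_flags) if f == 0]

def selective_inverse_scan(kept, removal_flags):
    """Re-insert zeros at the removed positions."""
    out, it = [], iter(kept)
    for f in removal_flags:
        out.append(0 if f == 1 else next(it))
    return out

# example: drop the two highest-frequency positions of a 4-coefficient block
flags = [0, 0, 1, 1]
assert selective_inverse_scan(selective_scan([9, 4, 3, 1], flags), flags) == [9, 4, 0, 0]
```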
FIG. 28 is a flowchart showing the selective scan process for the memory compression unit and the memory restoration unit in the video decoder of the present invention. First, in module 2800, the coefficient removal presence flag is read from the header of the compressed video stream. In module 2802, a comparison is made to determine whether the coefficient removal presence flag is equal to 1. If it is, the set of coefficient removal flags is read from the header of the compressed video stream in module 2806; if it is not, all coefficient removal flags are set to 0 in module 2804. The set of coefficient removal flags determines which coefficients are removed from the scan process: positions whose coefficient removal flag is set to 1 are skipped during scanning. Thus, in module 2808, the memory compression process skips coefficient positions whose removal flag is 1 during the scan process, and likewise, in module 2810, the memory restoration process skips those positions during the inverse scan process. Then, in module 2812, the memory restoration process sets the coefficients at the positions whose removal flag is 1 to 0.
FIGS. 29A and 29B show candidate positions of the coefficient removal presence flag and the coefficient removal flags in the header of the compressed video stream. FIG. 29A shows their positions in the sequence header of the compressed video stream, and FIG. 29B shows their positions in the picture header. The memory compression process that uses the coefficient removal presence flag and the coefficient removal flags compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 29A and 29B.
FIG. 30 is a flowchart showing the explicit DC prediction process for the memory compression unit and the memory restoration unit in the video encoder of the present invention. First, in module 3000, a new DC prediction value is determined. One example of a method for determining the new DC prediction value includes a step of determining the DC value of the uncompressed original image. Next, in module 3002, the explicit signal DC flag is set to a value of 1, and in module 3004 it is written into the header of the compressed video stream. Then, in module 3006, the determined DC prediction value is written into the same header of the compressed video stream. In module 3008, the memory compression process subtracts the determined DC prediction value from the first coefficient of the transformed block. Finally, in module 3010, the memory restoration process adds the determined DC prediction value to the first coefficient of the transformed block.
FIG. 31 is a flowchart showing the adaptive DC prediction process for the memory compression unit and the memory restoration unit in the video decoder of the present invention. First, in module 3100, the explicit signal DC flag is read from the header of the compressed video stream. In module 3102, a comparison is made to determine whether the explicit signal DC flag is equal to 1. If it is, the DC prediction value is read from the header of the compressed video stream in module 3106; if it is not, the DC prediction value is set to a predetermined value in module 3104. One example of the predetermined value is half the sum of the maximum DC value and 1. In module 3108, the memory compression process subtracts the DC prediction value from the first coefficient of the transformed block. Finally, in module 3110, the memory restoration process adds the DC prediction value to the first coefficient of the transformed block.
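The following sketch illustrates the DC prediction behaviour described in FIGS. 30 and 31, including the default of half the sum of the maximum DC value and 1; the function names and the integer arithmetic are assumptions made for clarity.

```python
def dc_predictor(explicit_flag, signalled_value, max_dc_value):
    """Use the signalled prediction value when the explicit flag is 1, else the default."""
    return signalled_value if explicit_flag == 1 else (max_dc_value + 1) // 2

def remove_dc_prediction(coeffs, dc_pred):
    out = list(coeffs)
    out[0] -= dc_pred          # memory compression side: subtract from the first (DC) coefficient
    return out

def restore_dc_prediction(coeffs, dc_pred):
    out = list(coeffs)
    out[0] += dc_pred          # memory restoration side: add it back
    return out
```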
FIGS. 32A and 32B show candidate positions of the explicit signal DC flag and the DC prediction value in the header of the compressed video stream. FIG. 32A shows their positions in the sequence header of the compressed video stream, and FIG. 32B shows their positions in the picture header. The memory compression process that uses the explicit signal DC flag and the DC prediction value compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 32A and 32B.
FIG. 33 is a flowchart showing the coefficient bit-shift process for the memory compression unit and the memory restoration unit in the video encoder of the present invention. First, in module 3300, a set of coefficient weights is determined. One example of a method for determining the coefficient weights is an optimization-based method that aims to provide the best perceptual quality by weighting lower frequencies more heavily. In module 3302, the coefficient weight value presence flag is set to a value of 1, and in module 3304 it is written into the header of the compressed video stream. Then, in module 3306, the set of coefficient weights is written into the same header of the compressed video stream. In module 3308, the memory compression process bit-shifts the value of each coefficient to the left according to its coefficient weight value. Finally, in module 3310, the memory restoration process bit-shifts the value of each coefficient to the right according to its coefficient weight value.
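As a simple illustration of the coefficient bit-shift, the sketch below shifts each coefficient left by its weight on the compression side and right by the same weight on the restoration side; the example weight set is assumed, not taken from the specification.

```python
def weight_coefficients(coeffs, weights):
    """Memory compression side: shift each coefficient left by its weight."""
    return [c << w for c, w in zip(coeffs, weights)]

def unweight_coefficients(coeffs, weights):
    """Memory restoration side: shift each coefficient back to the right."""
    return [c >> w for c, w in zip(coeffs, weights)]

# example weight set favouring the low-frequency (leading) coefficients
weights = [3, 2, 1, 0]
assert unweight_coefficients(weight_coefficients([5, -7, 2, 1], weights), weights) == [5, -7, 2, 1]
```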
FIG. 34 is a flowchart showing the coefficient bit-shift process for the memory compression unit and the memory restoration unit in the video decoder of the present invention. First, in module 3400, the coefficient weight value presence flag is read from the header of the compressed video stream. In module 3402, a comparison is made to determine whether the coefficient weight value presence flag is equal to 1. If it is, the set of coefficient weight values is read from the header of the compressed video stream in module 3406; if it is not, the coefficient weight values are set to 0 in module 3404. In module 3408, the memory compression process bit-shifts the value of each coefficient to the left according to its coefficient weight value, and finally, in module 3410, the memory restoration process bit-shifts the value of each coefficient to the right according to its coefficient weight value. When the coefficient weight values are all 0, or when the coefficient weight value presence flag is 0, modules 3408 and 3410 can be omitted.
FIGS. 35A and 35B show candidate positions of the coefficient weight value presence flag and the set of coefficient weight values in the header of the compressed video stream. FIG. 35A shows their positions in the sequence header of the compressed video stream, and FIG. 35B shows their positions in the picture header. The memory compression process that uses the coefficient weight value presence flag and the set of coefficient weight values compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 35A and 35B.
FIG. 36 is a flowchart showing the adaptive transform process for the memory compression unit in the video encoder of the present invention. First, an appropriate block size is determined; one example of a method for determining the appropriate block size is to select a block size suited to the specific memory architecture of the target application. Next, in module 3602, information on the selected block size is written into the header of the compressed video stream. In module 3604, the memory compression process obtains a block of reconstructed image samples, which are samples decoded from the compressed video stream; the size of this block is based on the selected block size. Then, in module 3606, one transform matrix is selected from among a plurality of transform matrices based on the selected block size. One example of the transform matrix is a matrix based on the Hadamard transform. Finally, in module 3608, the block of image samples is transformed using the selected transform matrix.
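For illustration, the following sketch builds a Hadamard matrix of the selected block size and applies it as a separable 2-D transform; real implementations would normally use scaled integer arithmetic, and the block sizes in the lookup dictionary are assumptions.

```python
def hadamard(n):
    """Return the n x n Hadamard matrix (n must be a power of two) as nested lists."""
    if n == 1:
        return [[1]]
    h = hadamard(n // 2)
    top = [row + row for row in h]
    bottom = [row + [-x for x in row] for row in h]
    return top + bottom

def transform_block(block, n):
    """Apply H * block * H^T (H is symmetric) to an n x n block of samples."""
    h = hadamard(n)
    tmp = [[sum(h[i][k] * block[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    return [[sum(tmp[i][k] * h[j][k] for k in range(n)) for j in range(n)] for i in range(n)]

# selection of a transform by block size (assumed candidate sizes)
TRANSFORMS = {4: lambda b: transform_block(b, 4), 8: lambda b: transform_block(b, 8)}
```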
FIG. 37 is a flowchart showing the adaptive transform process for the memory compression unit in the video decoder of the present invention. In module 3700, information on the block size is read from the header of the compressed video stream. In module 3702, the memory compression process obtains a block of reconstructed image samples, which are samples decoded from the compressed video stream; the size of this block is based on the decoded block size information. Then, in module 3704, one transform matrix is selected from among a plurality of transform matrices based on the decoded block size information. One example of the transform matrix is a matrix based on the Hadamard transform. Finally, in module 3706, the block of image samples is transformed using the selected transform matrix.
FIG. 38 is a flowchart showing the adaptive inverse transform process for the memory restoration unit in the video encoder and video decoder of the present invention. In module 3800, block size information is determined. This block size information is shared between the memory restoration unit and the memory compression unit provided in the same video encoder or the same video decoder. In module 3802, the memory restoration process restores a block of transform coefficients. Then, in module 3804, one inverse transform matrix is selected from among a plurality of inverse transform matrices based on the determined block size. One example of the inverse transform matrix is a matrix based on the Hadamard transform. Finally, in module 3806, the coefficient block is inverse-transformed using the selected inverse transform matrix.
FIGS. 39A to 39C show candidate positions of the block size, that is, dimension information, in the header of the compressed video stream. FIG. 39A shows its position in the sequence header of the compressed video stream, and FIG. 39B shows its position in the picture header. FIG. 39C shows that the block size (dimension) information can be derived from a lookup table based on the profile parameter, the level parameter, or both, encoded in the sequence header of the compressed video stream. The memory compression process that uses the block size (dimension) information compresses the reconstructed image samples decoded from the video streams shown in FIGS. 39A, 39B, and 39C.
FIG. 40 is a flowchart showing the adaptive-compression-size bit-plane encoding process for the memory compression unit in the video encoder of the present invention. In module 4000, the target compression size is determined; the target compression size depends on how much the implementation cost is to be reduced. Next, in module 4002, information on the target compression size is written into the header of the compressed video stream. In module 4004, the memory compression process obtains a block of coefficients, which are decoded from the compressed video stream and transformed. Finally, in module 4006, the bit-plane encoding process is applied to the coefficients, starting from the most significant bit plane, until the target compression size of the data block is reached. Information, that is, bits, that cannot be encoded within the target compression size is discarded.
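The sketch below illustrates coding bit planes from the most significant plane downwards until a target number of bits is reached, discarding whatever does not fit; the encode_bitplane_bits callback is a hypothetical stand-in for the bit-plane coder described earlier.

```python
def encode_to_target_size(coeffs, max_planes, target_bits, encode_bitplane_bits):
    """encode_bitplane_bits(coeffs, plane, is_top) -> list of bits for one plane
    (a stand-in for the real bit-plane coder)."""
    stream, planes_coded = [], 0
    for p in range(max_planes - 1, -1, -1):          # most significant plane first
        plane_bits = encode_bitplane_bits(coeffs, p, p == max_planes - 1)
        if len(stream) + len(plane_bits) > target_bits:
            break                                    # budget reached: lower planes are discarded
        stream.extend(plane_bits)
        planes_coded += 1
    return stream, planes_coded
```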
FIG. 41 is a flowchart showing the adaptive-compression-size bit-plane encoding process for the memory compression unit in the video decoder of the present invention. In module 4100, information on the target compression size is read from the header of the compressed video stream. In the memory compression process, module 4102 obtains a block of coefficients, which are decoded from the compressed video stream and transformed. Finally, in module 4104, the bit-plane encoding process is applied to the coefficients, starting from the most significant bit plane, until the target compression size of the data block is reached. Information, that is, bits, that cannot be encoded within the target compression size is discarded.
FIGS. 42A to 42D show candidate positions of the target compressed data size information in the header of the compressed video stream. FIG. 42A shows its position in the sequence header of the compressed video stream, and FIG. 42B shows its position in the picture header. FIG. 42C shows that the target compressed data size information can be derived from a lookup table based on the profile parameter, the level parameter, or both, encoded in the sequence header of the compressed video stream. FIG. 42D shows that the target compressed data size information can be derived from a lookup table based on the picture width and picture height parameters encoded in the sequence header of the compressed video stream. The memory compression process that uses the target compressed data size information compresses the reconstructed image samples decoded from the compressed video streams of FIGS. 42A, 42B, 42C, and 42D.
FIG. 43 is a flowchart showing the adaptive pixel bit-shift and clipping process for the memory restoration unit in the video encoder of the present invention. In module 4300, an appropriate target bit precision is determined; this target bit precision depends on the precision of the input image. Next, in module 4302, information on the target bit precision is written into the header of the compressed video stream. In the memory restoration process, module 4304 obtains a block of restored image samples, and module 4306 determines the bit precision of the restored image samples. In module 4308, a comparison is made to determine whether the bit precision of the restored image samples matches the target bit precision. If it does not, module 4310 shifts the value of each image sample to the right to obtain the target bit precision; the amount of the bit shift is based on the difference between the bit precision of the restored image samples and the target bit precision. Finally, in module 4312, each pixel value is clipped to a range of integer values between a maximum and a minimum value based on the target bit precision. One example of the maximum value is the largest positive integer representable at the target bit precision, and one example of the minimum value is 0.
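A minimal sketch of the bit-shift and clipping step follows, assuming unsigned samples and a non-negative shift; the sample values in the example are arbitrary.

```python
def shift_and_clip(samples, restored_bits, target_bits):
    """Shift restored samples down to the target precision and clip to the legal range."""
    shift = restored_bits - target_bits
    max_val = (1 << target_bits) - 1                 # e.g. 255 for 8-bit samples
    out = []
    for s in samples:
        if shift > 0:
            s >>= shift                              # reduce precision to the target
        out.append(min(max(s, 0), max_val))          # clip to [0, max_val]
    return out

# e.g. 10-bit restored samples mapped back to an 8-bit storage precision
print(shift_and_clip([1023, 512, -4], restored_bits=10, target_bits=8))  # [255, 128, 0]
```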
FIG. 44 is a flowchart showing the adaptive pixel bit-shift and clipping process for the memory restoration unit in the video decoder of the present invention. In module 4400, information on the target bit precision is read from the header of the compressed video stream. In the memory restoration process, module 4402 obtains a block of restored image samples, and module 4404 determines the bit precision of the restored image samples. In module 4406, a comparison is made to determine whether the bit precision of the restored image samples matches the target bit precision. If it does not, module 4408 shifts the value of each image sample to the right to obtain the target bit precision; the amount of the bit shift is based on the difference between the bit precision of the restored image samples and the target bit precision. Finally, in module 4410, each pixel value is clipped to a range of integer values between a maximum and a minimum value based on the target bit precision. One example of the maximum value is the largest positive integer representable at the target bit precision, and one example of the minimum value is 0.
FIGS. 45A to 45C show candidate positions of the target bit precision information in the header of the compressed video stream. FIG. 45A shows its position in the sequence header of the compressed video stream, and FIG. 45B shows its position in the picture header. FIG. 45C shows that the target bit precision information can be derived from a lookup table based on the profile parameter, the level parameter, or both, encoded in the sequence header of the compressed video stream. The memory restoration process that uses the target bit precision information restores the reconstructed image samples generated from the compressed video streams of FIGS. 45A, 45B, and 45C.
FIGS. 46A to 46D show possible positions of the memory compression scheme selection parameter in the header of the compressed video stream. As shown in FIG. 46A, the memory compression scheme selection parameter can be placed in the sequence parameter set (SPS) or in the sequence header. As shown in FIG. 46B, it can also be placed in the picture parameter set (PPS) or in the picture header. As shown in FIG. 46C, the memory compression scheme selection parameter may instead be derived from a profile parameter or a level parameter in the sequence parameter set or the sequence header. As shown in FIG. 46D, it may also be derived from the picture width (Pict_Width) and picture height (Pict_Hight) in the sequence parameter set or the sequence header.
FIG. 47 shows a flowchart of the adaptive memory compression scheme in the video decoding apparatus of the present invention. As shown in FIG. 47, in module 4700, the memory compression scheme selection parameter is read from the header of the compressed video stream. Next, in module 4702, it is determined whether the memory compression scheme selection parameter has a predetermined value. If it does, the reconstructed samples are compressed sample group by sample group in module 4706, using a compression scheme based on adaptive bit quantization. If it does not, the reconstructed samples are compressed pixel by pixel in module 4704, using a simple pixel bit right-shift scheme. Finally, in module 4708, the compressed samples are stored in the memory unit.
FIG. 48 shows a flowchart of the adaptive memory compression scheme in the video encoding apparatus of the present invention. As shown in FIG. 48, in module 4800, one memory compression scheme is selected from among a plurality of memory compression schemes. In module 4802, the memory compression scheme selection parameter is written into the header of the compressed video stream. Next, in module 4804, it is determined whether the memory compression scheme selection parameter has a predetermined value. If it does, the reconstructed samples are compressed sample group by sample group in module 4808, using a compression scheme based on adaptive bit quantization. If it does not, the reconstructed samples are compressed pixel by pixel in module 4806, using a simple pixel bit right-shift scheme. Finally, in module 4810, the compressed samples are stored in the memory unit.
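The following sketch captures only the mode decision shared by FIGS. 47 and 48; both compression branches are simplified placeholders (a per-group base-plus-residual scheme and a plain right shift), not the normative adaptive bit quantization or per-pixel scheme of the specification, and the predetermined parameter value is assumed.

```python
PREDETERMINED_VALUE = 1   # assumed value of the memory compression scheme selection parameter

def compress_for_memory(samples, selection_param, shift=2, group_size=4):
    if selection_param == PREDETERMINED_VALUE:
        # group-based scheme: store a per-group base value plus coarsely quantized residuals
        compressed = []
        for g in range(0, len(samples), group_size):
            group = samples[g:g + group_size]
            base = min(group)
            compressed.append((base, [(s - base) >> shift for s in group]))
        return ("group", compressed)
    # fallback: simple per-pixel right shift
    return ("shift", [s >> shift for s in samples])

def decompress_from_memory(data, shift=2):
    kind, payload = data
    if kind == "group":
        return [base + (r << shift) for base, residuals in payload for r in residuals]
    return [s << shift for s in payload]
```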
FIG. 49 shows a flowchart of the adaptive memory restoration scheme in the video encoding apparatus and the video decoding apparatus of the present invention. As shown in FIG. 49, in module 4900, the memory compression scheme selection parameter is determined. In module 4902, the compressed samples are retrieved from the memory unit. Next, in module 4904, it is determined whether the memory compression scheme selection parameter has a predetermined value. If it does, the compressed samples are restored sample group by sample group in module 4908, using a restoration scheme based on adaptive bit inverse quantization. If it does not, the reconstructed samples are restored pixel by pixel in module 4906, using a simple pixel bit left-shift scheme. Finally, in module 4910, inter-picture prediction is performed using the restored reconstructed samples.
(Embodiment 2)
By recording, on a storage medium, a program for realizing the configuration of the moving picture encoding method or the moving picture decoding method shown in each of the above embodiments, the processing shown in each of the above embodiments can easily be carried out in an independent computer system. The storage medium may be any medium capable of recording the program, such as a magnetic disk, an optical disc, a magneto-optical disk, an IC card, or a semiconductor memory.
Further application examples of the moving picture encoding method and the moving picture decoding method shown in the above embodiments, and a system using them, will now be described.
FIG. 50 is a diagram showing the overall configuration of a content supply system ex100 that provides a content distribution service. The area in which the communication service is provided is divided into cells of the desired size, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in the respective cells.
In this content supply system ex100, devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104, and the base stations ex106 to ex110.
However, the content supply system ex100 is not limited to the configuration shown in FIG. 50, and any combination of these elements may be connected. In addition, each device may be connected directly to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations, and the devices may be connected directly to each other via short-range wireless communication or the like.
The camera ex113 is a device capable of shooting moving pictures, such as a digital video camera, and the camera ex116 is a device capable of shooting still pictures and moving pictures, such as a digital camera. The mobile phone ex114 may be a mobile phone conforming to the GSM (Global System for Mobile Communications), CDMA (Code Division Multiple Access), W-CDMA (Wideband-Code Division Multiple Access), LTE (Long Term Evolution), or HSPA (High Speed Packet Access) standard, or a PHS (Personal Handyphone System); any of these may be used.
In the content supply system ex100, the camera ex113 and other devices are connected to a streaming server ex103 through the base station ex109 and the telephone network ex104, which enables live distribution and the like. In live distribution, content shot by a user with the camera ex113 (for example, video of a live music performance) is encoded as described in the above embodiments and transmitted to the streaming server ex103. The streaming server ex103, in turn, streams the transmitted content data to clients that have made requests. Examples of clients include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the game machine ex115, each of which is capable of decoding the encoded data. Each device that receives the distributed data decodes and reproduces the received data.
The encoding of the shot data may be performed by the camera ex113, by the streaming server ex103 that transmits the data, or shared between them. Similarly, the decoding of the distributed data may be performed by the client, by the streaming server ex103, or shared between them. In addition to the camera ex113, still image and/or moving image data shot by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111. In this case, the encoding may be performed by the camera ex116, the computer ex111, or the streaming server ex103, or shared among them.
These encoding and decoding processes are generally performed in the computer ex111 or in an LSI ex500 included in each device. The LSI ex500 may be a single chip or may consist of a plurality of chips. Software for video encoding and decoding may be embedded in some recording medium (such as a CD-ROM, a flexible disk, or a hard disk) readable by the computer ex111 or the like, and the encoding and decoding may be performed using that software. Furthermore, when the mobile phone ex114 is equipped with a camera, the moving image data acquired by that camera may be transmitted; the moving image data in this case is data encoded by the LSI ex500 included in the mobile phone ex114.
The streaming server ex103 may also be composed of a plurality of servers or a plurality of computers that process, record, and distribute data in a distributed manner.
As described above, in the content supply system ex100, clients can receive and reproduce the encoded data. In this way, information transmitted by a user can be received, decoded, and reproduced by clients in real time, so that even a user who has no special rights or equipment can realize personal broadcasting.
The present invention is not limited to the example of the content supply system ex100; as shown in FIG. 51, at least one of the moving picture encoding apparatus and the moving picture decoding apparatus of the above embodiments can also be incorporated into a digital broadcasting system ex200. Specifically, at a broadcast station ex201, multiplexed data obtained by multiplexing music data and the like onto video data is transmitted via radio waves to a communication or broadcasting satellite ex202. This video data is data encoded by the moving picture encoding method described in the above embodiments. On receiving it, the broadcasting satellite ex202 transmits radio waves for broadcasting, and a home antenna ex204 capable of receiving satellite broadcasts receives these radio waves. The received multiplexed data is decoded and reproduced by an apparatus such as a television (receiver) ex300 or a set-top box (STB) ex217.
The moving picture decoding apparatus or the moving picture encoding apparatus described in the above embodiments can also be implemented in a reader/recorder ex218 that reads and decodes multiplexed data recorded on a recording medium ex215 such as a DVD or a BD, or that encodes a video signal onto the recording medium ex215 and, in some cases, multiplexes it with a music signal and writes it. In this case, the reproduced video signal is displayed on a monitor ex219, and the video signal can be reproduced by another apparatus or system using the recording medium ex215 on which the multiplexed data is recorded. A moving picture decoding apparatus may also be implemented in a set-top box ex217 connected to a cable ex203 for cable television or to the antenna ex204 for satellite/terrestrial broadcasting, and the output may be displayed on the monitor ex219 of the television. In that case, the moving picture decoding apparatus may be incorporated into the television instead of the set-top box.
FIG. 52 is a diagram showing a television (receiver) ex300 that uses the moving picture decoding method and the moving picture encoding method described in the above embodiments. The television ex300 includes a tuner ex301 that acquires or outputs, via the antenna ex204 or the cable ex203 that receives the broadcast, multiplexed data in which audio data is multiplexed with video data; a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates multiplexed data to be transmitted to the outside; and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes video data and audio data encoded by a signal processing unit ex306.
The television ex300 further includes the signal processing unit ex306, which has an audio signal processing unit ex304 and a video signal processing unit ex305 that decode the audio data and the video data, respectively, or encode the respective information, and an output unit ex309, which has a speaker ex307 that outputs the decoded audio signal and a display unit ex308, such as a display, that shows the decoded video signal. The television ex300 further includes an interface unit ex317 having an operation input unit ex312 that receives user operations, a control unit ex310 that controls all the units in an integrated manner, and a power supply circuit unit ex311 that supplies power to each unit. In addition to the operation input unit ex312, the interface unit ex317 may include a bridge ex313 connected to external devices such as the reader/recorder ex218, a slot unit ex314 for mounting a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like. The recording medium ex216 can record information electrically by means of a non-volatile/volatile semiconductor memory element that it contains. The units of the television ex300 are connected to one another via a synchronous bus.
First, a configuration in which the television ex300 decodes and reproduces multiplexed data acquired from the outside via the antenna ex204 or the like will be described. Upon receiving a user operation from a remote controller ex220 or the like, the television ex300 demultiplexes, in the multiplexing/demultiplexing unit ex303, the multiplexed data demodulated by the modulation/demodulation unit ex302, under the control of the control unit ex310, which includes a CPU and the like. Furthermore, in the television ex300, the separated audio data is decoded by the audio signal processing unit ex304, and the separated video data is decoded by the video signal processing unit ex305 using the decoding method described in the above embodiments. The decoded audio signal and video signal are output to the outside from the output unit ex309. When they are output, these signals may be temporarily stored in buffers ex318, ex319, and the like so that the audio signal and the video signal are reproduced in synchronization. The television ex300 may also read multiplexed data not from a broadcast or the like but from recording media ex215 and ex216 such as a magnetic/optical disk or an SD card. Next, a configuration in which the television ex300 encodes an audio signal and a video signal and transmits them to the outside or writes them to a recording medium or the like will be described. Upon receiving a user operation from the remote controller ex220 or the like, the television ex300, under the control of the control unit ex310, encodes the audio signal in the audio signal processing unit ex304 and encodes the video signal in the video signal processing unit ex305 using the encoding method described in the above embodiments. The encoded audio signal and video signal are multiplexed by the multiplexing/demultiplexing unit ex303 and output to the outside. When they are multiplexed, these signals may be temporarily stored in buffers ex320, ex321, and the like so that the audio signal and the video signal are synchronized. A plurality of buffers ex318, ex319, ex320, and ex321 may be provided as illustrated, or one or more buffers may be shared. Furthermore, beyond what is illustrated, data may be stored in a buffer, for example between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, as a cushion that avoids system overflow and underflow.
また、テレビex300は、放送等や記録メディア等から音声データ、映像データを取得する以外に、マイクやカメラのAV入力を受け付ける構成を備え、それらから取得したデータに対して符号化処理を行ってもよい。なお、ここではテレビex300は上記の符号化処理、多重化、および外部出力ができる構成として説明したが、これらの処理を行うことはできず、上記受信、復号化処理、外部出力のみが可能な構成であってもよい。
In addition to acquiring audio data and video data from broadcasts, recording media, and the like, the television ex300 has a configuration for receiving AV input of a microphone and a camera, and performs encoding processing on the data acquired from them. Also good. Here, the television ex300 has been described as a configuration that can perform the above-described encoding processing, multiplexing, and external output, but these processing cannot be performed, and only the above-described reception, decoding processing, and external output are possible. It may be a configuration.
また、リーダ/レコーダex218で記録メディアから多重化データを読み出す、または書き込む場合には、上記復号化処理または符号化処理はテレビex300、リーダ/レコーダex218のいずれで行ってもよいし、テレビex300とリーダ/レコーダex218が互いに分担して行ってもよい。
When reading or writing multiplexed data from a recording medium by the reader / recorder ex218, the decoding process or the encoding process may be performed by either the television ex300 or the reader / recorder ex218. The reader / recorder ex218 may be shared with each other.
As an example, FIG. 53 shows the configuration of an information reproducing/recording unit ex400 used when data is read from or written to an optical disc. The information reproducing/recording unit ex400 includes the elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 described below. The optical head ex401 writes information by irradiating a laser spot onto the recording surface of the recording medium ex215, which is an optical disc, and reads information by detecting light reflected from the recording surface of the recording medium ex215. The modulation recording unit ex402 electrically drives a semiconductor laser built into the optical head ex401 and modulates the laser light according to the data to be recorded. The reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, separates and demodulates the signal components recorded on the recording medium ex215, and reproduces the necessary information. The buffer ex404 temporarily holds the information to be recorded on the recording medium ex215 and the information reproduced from the recording medium ex215. The disk motor ex405 rotates the recording medium ex215. The servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs laser spot tracking. The system control unit ex407 controls the information reproducing/recording unit ex400 as a whole. The reading and writing processes described above are realized by the system control unit ex407 using various kinds of information held in the buffer ex404, generating and adding new information as necessary, and recording and reproducing information through the optical head ex401 while causing the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 to operate in a coordinated manner. The system control unit ex407 is composed of, for example, a microprocessor, and executes these processes by executing a reading/writing program.
Although the optical head ex401 has been described above as irradiating a laser spot, it may be configured to perform higher-density recording using near-field light.
FIG. 54 shows a schematic diagram of the recording medium ex215, which is an optical disc. On the recording surface of the recording medium ex215, guide grooves are formed in a spiral shape, and address information indicating the absolute position on the disc is recorded in advance on the information track ex230 by means of changes in the shape of the groove. This address information includes information for identifying the position of a recording block ex231, which is the unit in which data is recorded, and a recording or reproducing apparatus can identify a recording block by reproducing the information track ex230 and reading the address information. The recording medium ex215 also includes a data recording area ex233, an inner circumference area ex232, and an outer circumference area ex234. The area used for recording user data is the data recording area ex233, and the inner circumference area ex232 and the outer circumference area ex234, which are arranged on the inner or outer circumference of the data recording area ex233, are used for specific purposes other than recording user data. The information reproducing/recording unit ex400 reads and writes encoded audio data, encoded video data, or multiplexed data obtained by multiplexing these data from and to the data recording area ex233 of such a recording medium ex215.
In the above description, an optical disc having a single layer, such as a DVD or BD, has been described as an example, but the disc is not limited to these and may be an optical disc that has a multilayer structure and can be recorded on in places other than the surface. It may also be an optical disc with a structure for multidimensional recording/reproduction, such as recording information using light of colors with various different wavelengths at the same location on the disc, or recording layers of different information from various angles.
Also, in the digital broadcasting system ex200, the car ex210 having the antenna ex205 can receive data from the satellite ex202 or the like and reproduce moving images on a display device such as the car navigation system ex211 included in the car ex210. As the configuration of the car navigation system ex211, for example, a configuration obtained by adding a GPS receiving unit to the configuration shown in FIG. 52 is conceivable, and the same applies to the computer ex111, the mobile phone ex114, and the like.
FIG. 55A is a diagram showing the mobile phone ex114 that uses the moving picture decoding method and the moving picture encoding method described in the above embodiments. The mobile phone ex114 includes an antenna ex350 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex365 capable of capturing video and still images, and a display unit ex358 such as a liquid crystal display that displays decoded data such as video captured by the camera unit ex365 and video received by the antenna ex350. The mobile phone ex114 further includes a main body having an operation key unit ex366, an audio output unit ex357 such as a speaker for outputting audio, an audio input unit ex356 such as a microphone for inputting audio, a memory unit ex367 that stores encoded or decoded data such as captured video, still images, recorded audio, received video, still images, and e-mail, and a slot unit ex364 serving as an interface to a recording medium that likewise stores data.
Next, a configuration example of the mobile phone ex114 will be described with reference to FIG. 55B. In the mobile phone ex114, a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, an LCD (Liquid Crystal Display) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367 are connected via a bus ex370 to a main control unit ex360 that comprehensively controls each unit of the main body including the display unit ex358 and the operation key unit ex366.
When the call-end key or the power key is turned on by a user operation, the power supply circuit unit ex361 supplies power from the battery pack to each unit, thereby starting up the mobile phone ex114 in an operable state.
Based on the control of the main control unit ex360 having a CPU, ROM, RAM, and the like, the mobile phone ex114 converts the audio signal picked up by the audio input unit ex356 in voice call mode into a digital audio signal in the audio signal processing unit ex354, performs spread spectrum processing on it in the modulation/demodulation unit ex352, performs digital-to-analog conversion and frequency conversion in the transmitting/receiving unit ex351, and then transmits it via the antenna ex350. In voice call mode, the mobile phone ex114 also amplifies the data received via the antenna ex350, performs frequency conversion and analog-to-digital conversion on it, performs spectrum despreading in the modulation/demodulation unit ex352, converts it into an analog audio signal in the audio signal processing unit ex354, and then outputs it from the audio output unit ex357.
Furthermore, when an e-mail is transmitted in data communication mode, the text data of the e-mail entered by operating the operation key unit ex366 or the like of the main body is sent to the main control unit ex360 via the operation input control unit ex362. The main control unit ex360 performs spread spectrum processing on the text data in the modulation/demodulation unit ex352, performs digital-to-analog conversion and frequency conversion in the transmitting/receiving unit ex351, and then transmits the data to the base station ex110 via the antenna ex350. When an e-mail is received, substantially the reverse of this processing is performed on the received data, and the result is output to the display unit ex358.
When video, still images, or video and audio are transmitted in data communication mode, the video signal processing unit ex355 compresses and encodes the video signal supplied from the camera unit ex365 by the moving picture encoding method described in each of the above embodiments, and sends the encoded video data to the multiplexing/demultiplexing unit ex353. The audio signal processing unit ex354 encodes the audio signal picked up by the audio input unit ex356 while the camera unit ex365 is capturing video, still images, or the like, and sends the encoded audio data to the multiplexing/demultiplexing unit ex353.
The multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354 by a predetermined method, and the resulting multiplexed data is subjected to spread spectrum processing in the modulation/demodulation unit (modulation/demodulation circuit unit) ex352, subjected to digital-to-analog conversion and frequency conversion in the transmitting/receiving unit ex351, and then transmitted via the antenna ex350.
When data of a moving image file linked to a web page or the like is received in data communication mode, or when an e-mail with video and/or audio attached is received, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data received via the antenna ex350 into a bitstream of video data and a bitstream of audio data in order to decode it, and supplies the encoded video data to the video signal processing unit ex355 and the encoded audio data to the audio signal processing unit ex354 via the synchronous bus ex370. The video signal processing unit ex355 decodes the video signal using a moving picture decoding method corresponding to the moving picture encoding method described in each of the above embodiments, and video and still images included in, for example, the moving image file linked to the web page are displayed on the display unit ex358 via the LCD control unit ex359. The audio signal processing unit ex354 decodes the audio signal, and audio is output from the audio output unit ex357.
As with the television ex300, three implementation forms are conceivable for a terminal such as the mobile phone ex114: in addition to a transmitting and receiving terminal having both an encoder and a decoder, a transmitting terminal having only an encoder and a receiving terminal having only a decoder. Furthermore, although the digital broadcasting system ex200 has been described as receiving and transmitting multiplexed data in which music data or the like is multiplexed with video data, the data may also have character data or the like related to the video multiplexed with it in addition to audio data, or may be the video data itself rather than multiplexed data.
As described above, the moving picture encoding method or the moving picture decoding method described in each of the above embodiments can be used in any of the devices and systems described above, and by doing so, the effects described in each of the above embodiments can be obtained.
The present invention is not limited to the above embodiments, and various variations and modifications are possible without departing from the scope of the present invention.
(Embodiment 3)
Video data can also be generated by switching, as necessary, between the moving picture encoding method or apparatus described in each of the above embodiments and a moving picture encoding method or apparatus conforming to a different standard such as MPEG-2, MPEG4-AVC, or VC-1.
Here, when a plurality of pieces of video data conforming to different standards are generated, a decoding method corresponding to each standard needs to be selected at the time of decoding. However, because it cannot be identified which standard the video data to be decoded conforms to, there is a problem that an appropriate decoding method cannot be selected.
To solve this problem, multiplexed data in which audio data and the like are multiplexed with video data is configured to include identification information indicating which standard the video data conforms to. A specific structure of multiplexed data including video data generated by the moving picture encoding method or apparatus described in each of the above embodiments is described below. The multiplexed data is a digital stream in the MPEG-2 transport stream format.
FIG. 56 is a diagram showing the structure of the multiplexed data. As shown in FIG. 56, the multiplexed data is obtained by multiplexing one or more of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream (IG). The video stream represents the main video and sub-video of a movie, the audio stream represents the main audio of the movie and sub-audio to be mixed with the main audio, and the presentation graphics stream represents the subtitles of the movie. Here, the main video refers to the normal video displayed on the screen, and the sub-video refers to video displayed in a small window within the main video. The interactive graphics stream represents an interactive screen created by arranging GUI components on the screen. The video stream is encoded by the moving picture encoding method or apparatus described in each of the above embodiments, or by a moving picture encoding method or apparatus conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1. The audio stream is encoded in a format such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.
Each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the video of the movie, 0x1100 to 0x111F to the audio streams, 0x1200 to 0x121F to the presentation graphics streams, 0x1400 to 0x141F to the interactive graphics streams, 0x1B00 to 0x1B1F to video streams used for the sub-video of the movie, and 0x1A00 to 0x1A1F to audio streams used for sub-audio to be mixed with the main audio.
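The PID ranges above lend themselves to a simple classification routine. The following is a minimal C sketch; the function name and enumeration are illustrative additions, not identifiers defined in this specification, and only restate the ranges listed in the preceding paragraph.

```c
#include <stdint.h>
#include <stdio.h>

typedef enum {
    STREAM_UNKNOWN,
    STREAM_MAIN_VIDEO,        /* 0x1011 */
    STREAM_AUDIO,             /* 0x1100-0x111F */
    STREAM_PRESENTATION_GFX,  /* 0x1200-0x121F */
    STREAM_INTERACTIVE_GFX,   /* 0x1400-0x141F */
    STREAM_SUB_VIDEO,         /* 0x1B00-0x1B1F */
    STREAM_SUB_AUDIO          /* 0x1A00-0x1A1F */
} stream_kind_t;

/* Classify a transport stream PID according to the ranges given above. */
static stream_kind_t classify_pid(uint16_t pid)
{
    if (pid == 0x1011)                  return STREAM_MAIN_VIDEO;
    if (pid >= 0x1100 && pid <= 0x111F) return STREAM_AUDIO;
    if (pid >= 0x1200 && pid <= 0x121F) return STREAM_PRESENTATION_GFX;
    if (pid >= 0x1400 && pid <= 0x141F) return STREAM_INTERACTIVE_GFX;
    if (pid >= 0x1B00 && pid <= 0x1B1F) return STREAM_SUB_VIDEO;
    if (pid >= 0x1A00 && pid <= 0x1A1F) return STREAM_SUB_AUDIO;
    return STREAM_UNKNOWN;
}

int main(void)
{
    printf("%d\n", (int)classify_pid(0x1100)); /* prints the audio category */
    return 0;
}
```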
FIG. 57 is a diagram schematically showing how the multiplexed data is multiplexed. First, a video stream ex235 composed of a plurality of video frames and an audio stream ex238 composed of a plurality of audio frames are converted into PES packet sequences ex236 and ex239, respectively, and then into TS packets ex237 and ex240. Likewise, the data of a presentation graphics stream ex241 and an interactive graphics stream ex244 are converted into PES packet sequences ex242 and ex245, respectively, and further into TS packets ex243 and ex246. The multiplexed data ex247 is formed by multiplexing these TS packets into a single stream.
FIG. 58 shows in more detail how the video stream is stored in a PES packet sequence. The first row in FIG. 58 shows the video frame sequence of the video stream, and the second row shows the PES packet sequence. As indicated by the arrows yy1, yy2, yy3, and yy4 in FIG. 58, the I pictures, B pictures, and P pictures that are the video presentation units in the video stream are divided picture by picture and stored in the payloads of PES packets. Each PES packet has a PES header, and the PES header stores a PTS (Presentation Time-Stamp), which is the display time of the picture, and a DTS (Decoding Time-Stamp), which is the decoding time of the picture.
FIG. 59 shows the format of the TS packets that are finally written into the multiplexed data. A TS packet is a 188-byte fixed-length packet consisting of a 4-byte TS header carrying information such as the PID identifying the stream and a 184-byte TS payload storing data; the PES packets described above are divided and stored in TS payloads. In the case of a BD-ROM, a 4-byte TP_Extra_Header is attached to each TS packet to form a 192-byte source packet, which is written into the multiplexed data. The TP_Extra_Header carries information such as an ATS (Arrival_Time_Stamp). The ATS indicates the time at which transfer of the TS packet to the PID filter of the decoder starts. In the multiplexed data, source packets are arranged as shown in the lower row of FIG. 59, and the number that increments from the head of the multiplexed data is called an SPN (Source Packet Number).
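As an illustration of the layout just described, the following C sketch parses one 192-byte source packet, taking the ATS as the lower 30 bits of the TP_Extra_Header and the 13-bit PID from the TS header. The field widths follow the common BD-ROM convention rather than anything stated in this specification, and the function and variable names are illustrative assumptions.

```c
#include <stdint.h>
#include <stdio.h>

#define SOURCE_PACKET_SIZE 192  /* 4-byte TP_Extra_Header + 188-byte TS packet */

/* Extract the 30-bit ATS and the 13-bit PID from one source packet. */
static int parse_source_packet(const uint8_t *sp, uint32_t *ats, uint16_t *pid)
{
    const uint8_t *ts = sp + 4;   /* TS packet starts after the TP_Extra_Header */
    if (ts[0] != 0x47)            /* TS sync byte */
        return -1;
    *ats = ((uint32_t)(sp[0] & 0x3F) << 24) |
           ((uint32_t)sp[1] << 16) |
           ((uint32_t)sp[2] << 8)  |
            (uint32_t)sp[3];
    *pid = (uint16_t)(((ts[1] & 0x1F) << 8) | ts[2]);
    return 0;
}

int main(void)
{
    uint8_t sp[SOURCE_PACKET_SIZE] = {0};
    sp[4] = 0x47;               /* sync byte */
    sp[5] = 0x10; sp[6] = 0x11; /* PID 0x1011 (main video) */
    uint32_t ats; uint16_t pid;
    if (parse_source_packet(sp, &ats, &pid) == 0)
        printf("PID=0x%04X ATS=%u\n", (unsigned)pid, (unsigned)ats);
    return 0;
}
```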
In addition to the streams of video, audio, subtitles, and the like, the TS packets included in the multiplexed data also include a PAT (Program Association Table), a PMT (Program Map Table), a PCR (Program Clock Reference), and the like. The PAT indicates what the PID of the PMT used in the multiplexed data is, and the PID of the PAT itself is registered as 0. The PMT has the PIDs of the streams of video, audio, subtitles, and the like included in the multiplexed data and the attribute information of the streams corresponding to those PIDs, and also has various descriptors relating to the multiplexed data. The descriptors include copy control information indicating whether or not copying of the multiplexed data is permitted. In order to synchronize the ATC (Arrival Time Clock), which is the time axis of ATSs, with the STC (System Time Clock), which is the time axis of PTSs and DTSs, the PCR carries information on the STC time corresponding to the ATS at which that PCR packet is transferred to the decoder.
FIG. 60 is a diagram explaining the data structure of the PMT in detail. At the head of the PMT is a PMT header describing the length of the data included in that PMT and the like. Behind it are arranged a plurality of descriptors relating to the multiplexed data; the copy control information and the like are described as descriptors. After the descriptors are arranged a plurality of pieces of stream information relating to the streams included in the multiplexed data. Each piece of stream information consists of stream descriptors describing a stream type for identifying the compression codec of the stream, the PID of the stream, and attribute information of the stream (frame rate, aspect ratio, and the like). There are as many stream descriptors as there are streams in the multiplexed data.
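The PMT layout described above can be pictured with a few plain data structures. The following C sketch models only the fields named in the text (a header length, descriptors about the multiplexed data, and per-stream entries); the field widths, type names, and the example stream type are assumptions for illustration and do not reproduce the exact wire format.

```c
#include <stdint.h>
#include <stddef.h>

/* A descriptor, e.g. carrying copy control information (illustrative). */
typedef struct {
    uint8_t tag;
    uint8_t length;          /* descriptor body length in bytes */
    const uint8_t *body;
} pmt_descriptor_t;

/* One piece of stream information: stream type, PID and attribute descriptors. */
typedef struct {
    uint8_t  stream_type;            /* identifies the compression codec */
    uint16_t elementary_pid;         /* PID of the stream */
    size_t   num_descriptors;        /* frame rate, aspect ratio, ... */
    const pmt_descriptor_t *descriptors;
} pmt_stream_info_t;

/* The PMT as described: header, descriptors, then one entry per stream. */
typedef struct {
    uint16_t section_length;         /* length of the data in this PMT */
    size_t   num_descriptors;
    const pmt_descriptor_t *descriptors;
    size_t   num_streams;
    const pmt_stream_info_t *streams;
} pmt_t;

int main(void)
{
    pmt_stream_info_t video = { 0x1B /* e.g. AVC video */, 0x1011, 0, NULL };
    pmt_t pmt = { 0, 0, NULL, 1, &video };
    (void)pmt;
    return 0;
}
```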
When the multiplexed data is recorded on a recording medium or the like, it is recorded together with a multiplexed data information file.
As shown in FIG. 61, the multiplexed data information file is management information for the multiplexed data, corresponds one-to-one with the multiplexed data, and consists of multiplexed data information, stream attribute information, and an entry map.
As shown in FIG. 61, the multiplexed data information consists of a system rate, a reproduction start time, and a reproduction end time. The system rate indicates the maximum transfer rate of the multiplexed data to the PID filter of the system target decoder described later. The intervals between the ATSs included in the multiplexed data are set so as not to exceed the system rate. The reproduction start time is the PTS of the first video frame of the multiplexed data, and the reproduction end time is set to the PTS of the last video frame of the multiplexed data plus the reproduction interval of one frame.
As shown in FIG. 62, in the stream attribute information, attribute information about each stream included in the multiplexed data is registered for each PID. The attribute information differs for video streams, audio streams, presentation graphics streams, and interactive graphics streams. The video stream attribute information includes information such as what compression codec the video stream was compressed with, the resolution of the individual picture data constituting the video stream, the aspect ratio, and the frame rate. The audio stream attribute information includes information such as what compression codec the audio stream was compressed with, how many channels the audio stream contains, which language it corresponds to, and what the sampling frequency is. This information is used, for example, to initialize the decoder before the player reproduces the data.
In this embodiment, of the above multiplexed data, the stream type included in the PMT is used. When the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. Specifically, the moving picture encoding method or apparatus described in each of the above embodiments is provided with a step or means for setting, in the stream type included in the PMT or in the video stream attribute information, unique information indicating that the video data was generated by the moving picture encoding method or apparatus described in each of the above embodiments. With this configuration, video data generated by the moving picture encoding method or apparatus described in each of the above embodiments can be distinguished from video data conforming to another standard.
FIG. 63 shows the steps of the moving picture decoding method according to this embodiment. In step exS100, the stream type included in the PMT, or the video stream attribute information included in the multiplexed data information, is acquired from the multiplexed data. Next, in step exS101, it is determined whether or not the stream type or the video stream attribute information indicates that the multiplexed data was generated by the moving picture encoding method or apparatus described in each of the above embodiments. If it is determined that the stream type or the video stream attribute information indicates data generated by the moving picture encoding method or apparatus described in each of the above embodiments, decoding is performed in step exS102 by the moving picture decoding method described in each of the above embodiments. If the stream type or the video stream attribute information indicates conformity with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, decoding is performed in step exS103 by a moving picture decoding method conforming to that conventional standard.
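The decision in steps exS100 to exS103 amounts to a simple dispatch on the identification information. A minimal C sketch follows; the enumeration values and the stub decoder functions are placeholders introduced for illustration, not identifiers defined in this specification.

```c
#include <stdio.h>

typedef enum { CODEC_THIS_INVENTION, CODEC_MPEG2, CODEC_MPEG4_AVC, CODEC_VC1 } codec_id_t;

/* Stub decoders standing in for the decoding processing of the embodiments
 * and for conventional-standard decoding. */
static void decode_with_embodiment_method(void)   { puts("decode with method of the embodiments (exS102)"); }
static void decode_with_conventional_method(void) { puts("decode with conventional standard (exS103)"); }

/* exS100-exS103: dispatch on the identification information carried in the
 * stream type or the video stream attribute information. */
static void decode_video_data(codec_id_t stream_type)
{
    if (stream_type == CODEC_THIS_INVENTION)
        decode_with_embodiment_method();
    else
        decode_with_conventional_method();
}

int main(void)
{
    decode_video_data(CODEC_MPEG4_AVC);
    decode_video_data(CODEC_THIS_INVENTION);
    return 0;
}
```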
In this way, by setting a new unique value in the stream type or the video stream attribute information, it can be determined at the time of decoding whether the data can be decoded by the moving picture decoding method or apparatus described in each of the above embodiments. Therefore, even when multiplexed data conforming to a different standard is input, an appropriate decoding method or apparatus can be selected, so decoding can be performed without causing errors. The moving picture encoding method or apparatus, or the moving picture decoding method or apparatus, described in this embodiment can also be used in any of the devices and systems described above.
(Embodiment 4)
The moving picture encoding method and apparatus and the moving picture decoding method and apparatus described in each of the above embodiments are typically realized as an LSI, which is an integrated circuit. As an example, FIG. 64 shows the configuration of an LSI ex500 implemented as a single chip. The LSI ex500 includes the elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and these elements are connected via a bus ex510. When the power supply is on, the power supply circuit unit ex505 supplies power to each unit, thereby starting it up in an operable state.
For example, when encoding is performed, the LSI ex500 receives an AV signal from the microphone ex117, the camera ex113, or the like through the AV I/O ex509 under the control of the control unit ex501, which includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like. The input AV signal is temporarily stored in an external memory ex511 such as an SDRAM. Under the control of the control unit ex501, the stored data is divided into portions as appropriate according to the processing amount and processing speed and sent to the signal processing unit ex507, where the audio signal and/or the video signal are encoded. Here, the encoding of the video signal is the encoding processing described in each of the above embodiments. The signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data depending on the case, and outputs the result to the outside from the stream I/O ex506. The output multiplexed data is transmitted toward the base station ex107 or written onto the recording medium ex215. Note that the data should be temporarily stored in the buffer ex508 so that it is synchronized when multiplexed.
Although the memory ex511 has been described above as a component external to the LSI ex500, it may be included inside the LSI ex500. The number of buffers ex508 is not limited to one, and a plurality of buffers may be provided. The LSI ex500 may be implemented as a single chip or as a plurality of chips.
In the above description, the control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, but the configuration of the control unit ex501 is not limited to this. For example, the signal processing unit ex507 may further include a CPU. Providing a CPU inside the signal processing unit ex507 as well makes it possible to further improve the processing speed. As another example, the CPU ex502 may include the signal processing unit ex507 or a part of the signal processing unit ex507 such as an audio signal processing unit. In such a case, the control unit ex501 includes the CPU ex502, which has the signal processing unit ex507 or a part of it.
Although the term LSI is used here, it may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
The method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
Furthermore, if circuit integration technology replacing LSI emerges through progress in semiconductor technology or another derived technology, the functional blocks may of course be integrated using that technology. Application of biotechnology or the like is a possibility.
(Embodiment 5)
When video data generated by the moving picture encoding method or apparatus described in each of the above embodiments is decoded, the processing amount is expected to be larger than when video data conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1 is decoded. Therefore, in the LSI ex500, a drive frequency higher than the drive frequency of the CPU ex502 used when decoding video data conforming to a conventional standard needs to be set. However, raising the drive frequency gives rise to the problem that power consumption increases.
To solve this problem, a moving picture decoding apparatus such as the television ex300 or the LSI ex500 is configured to identify which standard the video data conforms to and to switch the drive frequency according to the standard. FIG. 65 shows a configuration ex800 in this embodiment. When the video data was generated by the moving picture encoding method or apparatus described in each of the above embodiments, the drive frequency switching unit ex803 sets the drive frequency high and instructs the decoding processing unit ex801, which executes the moving picture decoding method described in each of the above embodiments, to decode the video data. On the other hand, when the video data conforms to a conventional standard, it sets the drive frequency lower than when the video data was generated by the moving picture encoding method or apparatus described in each of the above embodiments, and instructs the decoding processing unit ex802, which conforms to the conventional standard, to decode the video data.
More specifically, the drive frequency switching unit ex803 is composed of the CPU ex502 and the drive frequency control unit ex512 shown in FIG. 64. The decoding processing unit ex801 that executes the moving picture decoding method described in each of the above embodiments and the decoding processing unit ex802 conforming to the conventional standard correspond to the signal processing unit ex507 in FIG. 64. The CPU ex502 identifies which standard the video data conforms to, and based on a signal from the CPU ex502, the drive frequency control unit ex512 sets the drive frequency. Also based on a signal from the CPU ex502, the signal processing unit ex507 decodes the video data. Here, it is conceivable to use, for example, the identification information described in Embodiment 3 to identify the video data. The identification information is not limited to that described in Embodiment 3 and may be any information that makes it possible to identify which standard the video data conforms to. For example, when it is possible to identify which standard the video data conforms to based on an external signal indicating whether the video data is used for a television, for a disc, or the like, the identification may be made based on such an external signal. The selection of the drive frequency in the CPU ex502 can be made, for example, based on a lookup table that associates video data standards with drive frequencies, as shown in FIG. 67. The lookup table is stored in the buffer ex508 or in an internal memory of the LSI, and the CPU ex502 can select the drive frequency by referring to this lookup table.
FIG. 66 shows the steps for carrying out the method of this embodiment. First, in step exS200, the signal processing unit ex507 acquires the identification information from the multiplexed data. Next, in step exS201, the CPU ex502 identifies, based on the identification information, whether or not the video data was generated by the encoding method or apparatus described in each of the above embodiments. If the video data was generated by the encoding method or apparatus described in each of the above embodiments, the CPU ex502 sends a signal for setting the drive frequency high to the drive frequency control unit ex512 in step exS202, and the drive frequency control unit ex512 sets a high drive frequency. On the other hand, if the identification information indicates video data conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, the CPU ex502 sends a signal for setting the drive frequency low to the drive frequency control unit ex512 in step exS203, and the drive frequency control unit ex512 sets a drive frequency lower than when the video data was generated by the encoding method or apparatus described in each of the above embodiments.
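A lookup-table based selection such as the one described for FIG. 67 could be sketched as follows. The table contents and the frequencies are purely illustrative assumptions, since the specification does not fix concrete values, and the type and function names are placeholders.

```c
#include <stdio.h>

typedef enum { STD_THIS_INVENTION, STD_MPEG2, STD_MPEG4_AVC, STD_VC1, STD_COUNT } video_standard_t;

/* Illustrative lookup table associating each standard with a drive
 * frequency in MHz (values are assumptions for this sketch only). */
static const unsigned int drive_freq_mhz[STD_COUNT] = {
    [STD_THIS_INVENTION] = 500,   /* higher frequency for the larger decoding load */
    [STD_MPEG2]          = 350,
    [STD_MPEG4_AVC]      = 400,
    [STD_VC1]            = 350,
};

/* exS200-exS203: pick the drive frequency from the identification information. */
static unsigned int select_drive_frequency(video_standard_t std)
{
    return drive_freq_mhz[std];
}

int main(void)
{
    printf("drive frequency: %u MHz\n", select_drive_frequency(STD_MPEG4_AVC));
    return 0;
}
```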
Furthermore, the power-saving effect can be enhanced by changing the voltage applied to the LSI ex500 or to the apparatus including the LSI ex500 in conjunction with the switching of the drive frequency. For example, when the drive frequency is set low, it is conceivable to set the voltage applied to the LSI ex500 or to the apparatus including the LSI ex500 lower than when the drive frequency is set high.
The method of setting the drive frequency is not limited to the setting method described above; it suffices to set the drive frequency high when the processing amount for decoding is large and to set it low when the processing amount for decoding is small. For example, when the processing amount for decoding video data conforming to the MPEG4-AVC standard is larger than the processing amount for decoding video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, it is conceivable to set the drive frequency in the manner opposite to that described above.
Furthermore, the method of setting the drive frequency is not limited to a configuration that lowers the drive frequency. For example, when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, it is conceivable to set the voltage applied to the LSI ex500 or to the apparatus including the LSI ex500 high, and when it indicates video data conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, to set that voltage low. As another example, when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the drive of the CPU ex502 is not suspended, whereas when it indicates video data conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, the drive of the CPU ex502 may be temporarily suspended because there is a margin in the processing. Even when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the drive of the CPU ex502 may be temporarily suspended if there is a margin in the processing; in this case, it is conceivable to set the suspension time shorter than when the identification information indicates video data conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
In this way, power can be saved by switching the drive frequency according to the standard to which the video data conforms. Moreover, when the LSI ex500 or the apparatus including the LSI ex500 is driven by a battery, the battery life can be extended along with the power saving.
(Embodiment 6)
A plurality of pieces of video data conforming to different standards may be input to the devices and systems described above, such as televisions and mobile phones. In order to enable decoding even when a plurality of pieces of video data conforming to different standards are input, the signal processing unit ex507 of the LSI ex500 needs to support a plurality of standards. However, if a separate signal processing unit ex507 is used for each standard, the problem arises that the circuit scale of the LSI ex500 becomes large and the cost increases.
To solve this problem, a configuration is adopted in which a decoding processing unit for executing the moving picture decoding method described in each of the above embodiments and a decoding processing unit conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1 are partly shared. This configuration example is shown as ex900 in FIG. 68A. For example, the moving picture decoding method described in each of the above embodiments and a moving picture decoding method conforming to the MPEG4-AVC standard have partly common processing content in processes such as entropy coding, inverse quantization, deblocking filtering, and motion compensation. A conceivable configuration is to share a decoding processing unit ex902 corresponding to the MPEG4-AVC standard for the common processing content and to use a dedicated decoding processing unit ex901 for other processing content unique to the present invention that does not correspond to the MPEG4-AVC standard. In particular, since the present invention is characterized by its transform unit, it is conceivable, for example, to use the dedicated decoding processing unit ex901 for the inverse transform and to share the decoding processing unit for any or all of the other processes such as entropy coding, inverse quantization, deblocking filtering, and motion compensation. As for sharing the decoding processing unit, a configuration may also be adopted in which the decoding processing unit for executing the moving picture decoding method described in each of the above embodiments is shared for the common processing content and a dedicated decoding processing unit is used for processing content unique to the MPEG4-AVC standard.
Another example in which processing is partly shared is shown as ex1000 in FIG. 68B. In this example, a dedicated decoding processing unit ex1001 corresponding to processing content unique to the present invention, a dedicated decoding processing unit ex1002 corresponding to processing content unique to another conventional standard, and a shared decoding processing unit ex1003 corresponding to processing content common to the moving picture decoding method of the present invention and the moving picture decoding method of the other conventional standard are used. Here, the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for the processing content unique to the present invention or to the other conventional standard, and may be capable of executing other general-purpose processing. The configuration of this embodiment can also be implemented by the LSI ex500.
In this way, by sharing a decoding processing unit for the processing content common to the moving picture decoding method of the present invention and the moving picture decoding method of a conventional standard, the circuit scale of the LSI can be reduced and the cost can be lowered.
The moving picture encoding method and moving picture decoding method according to the present invention have the advantage of being able to handle a wide range of compression rates flexibly and easily, and can be applied, for example, to mobile phones having shooting and playback functions, personal computers, and recording/reproducing apparatuses.
300 Subtraction unit
302 Transform unit
304 Quantization unit
306 Inverse quantization unit
308 Inverse transform unit
310 Addition unit
312 Filtering unit
314 Memory compression unit
316 Memory unit
318 Memory decompression unit
320 Motion detection unit
322 Motion interpolation unit
324 Entropy coding unit
400 Entropy decoding unit
402 Inverse quantization unit
404 Inverse transform unit
406 Addition unit
408 Motion interpolation unit
410 First memory decompression unit
412 Memory compression unit
413 Memory unit
414 Second memory decompression unit
416 Filtering unit
Claims (26)
- A moving picture coding method using block-based memory compression and decompression, comprising:
encoding a picture using inter-picture prediction;
performing entropy coding of the encoded picture;
decoding the encoded picture using inter-picture prediction;
determining a memory compression parameter set;
writing the memory compression parameter set into a header of a compressed video stream that includes the encoded picture;
compressing the decoded picture into fixed-size data blocks using the memory compression parameter set; and
decompressing the data blocks of the decoded picture to retrieve the image samples required for inter-picture prediction when encoding a subsequent picture.
- A moving picture decoding method using block-based memory compression and decompression, comprising:
parsing a header of a compressed video stream to obtain a memory compression parameter set that configures the memory compression process;
performing entropy decoding of the compressed video stream;
decoding a picture from the compressed video stream using inter-picture prediction;
compressing the decoded picture into fixed-size data blocks using the parsed memory compression parameter set;
decompressing the data blocks of the decoded picture to generate the image samples required for inter-picture prediction when decoding a subsequent picture; and
decompressing the data blocks of the decoded picture to generate image samples for output.
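The following self-contained sketch illustrates the storage pattern underlying claims 1 and 2: a decoded picture is kept in frame memory as fixed-size data blocks, and individual blocks are decompressed on demand for inter-picture prediction or output. The block size and the right-shift stand-in for the actual compression scheme are assumptions made only for illustration; the real per-block scheme is the subject of claims 3, 4 and 11 to 13.

```python
# Minimal sketch: store a decoded picture as fixed-size compressed blocks and
# fetch individual blocks back for inter-picture prediction.

BLOCK = 4          # block width/height used for memory compression (assumed)
SHIFT = 2          # stand-in compression: drop the two least significant bits

def compress_block(samples):
    """Fixed-size output: BLOCK*BLOCK bytes per block."""
    return bytes((s >> SHIFT) & 0xFF for s in samples)

def decompress_block(data):
    return [b << SHIFT for b in data]

def store_picture(picture, width):
    """Split a decoded picture (flat list of 8-bit samples, dimensions assumed
    to be multiples of BLOCK) into BLOCK x BLOCK tiles and compress each tile."""
    memory = {}
    height = len(picture) // width
    for by in range(0, height, BLOCK):
        for bx in range(0, width, BLOCK):
            tile = [picture[(by + y) * width + bx + x]
                    for y in range(BLOCK) for x in range(BLOCK)]
            memory[(bx, by)] = compress_block(tile)
    return memory

def fetch_for_prediction(memory, bx, by):
    """Retrieve the image samples of one stored block for inter-picture prediction."""
    return decompress_block(memory[(bx, by)])
```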
- The moving picture coding method or moving picture decoding method according to claim 1 or 2, wherein compressing the decoded picture comprises:
extracting an image sample block from the decoded picture;
transforming the image sample block into a coefficient block;
bit-shifting each coefficient in the coefficient block according to a predetermined set of coefficient weight values;
subtracting a DC prediction value from the first coefficient of the coefficient block;
scanning the coefficient block according to a scan pattern; and
encoding the scanned coefficients using bit-plane coding.
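A minimal sketch of the compression steps recited in claim 3. The 4-point Hadamard transform, the weight set, the natural-order scan, and the bitplane budget are illustrative assumptions; the claim fixes only the sequence of operations.

```python
# Sketch of claim 3: transform, coefficient weighting by bit shift, DC prediction,
# scan, and bitplane coding of one 4-sample block. All constants are assumed.

WEIGHTS = [0, 1, 1, 2]   # per-coefficient right-shift amounts (assumed weight set)
SCAN = [0, 1, 2, 3]      # scan pattern (assumed: natural order)
MAG_BITS = 12            # assumed full magnitude precision of a coefficient
PLANES_KEPT = 8          # bitplanes actually written, MSB first; the rest are dropped

def hadamard4(x):
    a, b, c, d = x
    return [a + b + c + d, a - b + c - d, a + b - c - d, a - b - c + d]

def compress_block(samples, dc_pred):
    coeffs = hadamard4(samples)                                   # transform
    coeffs = [c >> w for c, w in zip(coeffs, WEIGHTS)]            # coefficient weighting
    coeffs[0] -= dc_pred                                          # DC prediction
    scanned = [coeffs[i] for i in SCAN]                           # scan pattern
    bits = [1 if c < 0 else 0 for c in scanned]                   # sign plane
    mags = [abs(c) for c in scanned]
    for p in range(MAG_BITS - 1, MAG_BITS - 1 - PLANES_KEPT, -1): # MSB-first bitplanes
        bits.extend((m >> p) & 1 for m in mags)
    return bits                                                   # fixed size: 4 + 4*PLANES_KEPT bits
```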
- The moving picture coding method or moving picture decoding method according to claim 1 or 2, wherein decompressing the data blocks comprises:
decoding coefficients using bit-plane decoding;
adding offset bits to compensate for bit planes that are missing or were not decoded;
inverse-scanning the decoded coefficients according to a scan pattern to generate a coefficient block;
adding a DC value to the first coefficient of the coefficient block;
bit-shifting each coefficient in the coefficient block according to a predetermined set of coefficient shift values;
transforming the coefficient block into an image sample block;
bit-shifting each image sample in the image sample block by a predetermined pixel shift value; and
clipping each image sample value in the image sample block between a maximum value and a minimum value.
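The matching decompression steps of claim 4, reusing hadamard4, WEIGHTS, SCAN, MAG_BITS, and PLANES_KEPT from the sketch after claim 3. Inserting the mid-point of the discarded range as the offset for the dropped bitplanes is one plausible reading of the claim, not a definitive one.

```python
# Sketch of claim 4: bitplane decoding, offset-bit compensation, inverse scan,
# DC restoration, inverse weighting, inverse transform, pixel shift, clipping.

PIXEL_SHIFT = 0                        # per-pixel output bit shift (assumed)
DROPPED = MAG_BITS - PLANES_KEPT       # bitplanes that were never written

def decompress_block(bits, dc_pred):
    signs, planes = bits[:4], bits[4:]
    mags = [0, 0, 0, 0]
    for p in range(PLANES_KEPT):                                  # bitplane decoding
        for i in range(4):
            mags[i] = (mags[i] << 1) | planes[p * 4 + i]
    # offset bits compensating the dropped (not decoded) bitplanes
    mags = [(m << DROPPED) | (1 << (DROPPED - 1)) if m else 0 for m in mags]
    scanned = [-m if s else m for s, m in zip(signs, mags)]
    coeffs = [0, 0, 0, 0]
    for pos, idx in enumerate(SCAN):                              # inverse scan
        coeffs[idx] = scanned[pos]
    coeffs[0] += dc_pred                                          # add the DC value back
    coeffs = [c << w for c, w in zip(coeffs, WEIGHTS)]            # undo coefficient weighting
    samples = [s >> 2 for s in hadamard4(coeffs)]                 # inverse transform (H*H = 4I)
    samples = [s >> PIXEL_SHIFT for s in samples]                 # pixel bit shift
    return [min(255, max(0, s)) for s in samples]                 # clipping to 8-bit range
```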
- The moving picture coding method or moving picture decoding method according to claim 1 or 2, wherein the memory compression parameter set includes a parameter indicating the image block size used for memory compression.
- The moving picture coding method or moving picture decoding method according to claim 1 or 2, wherein the memory compression parameter set includes a flag indicating whether an explicit scan pattern for the coefficients is used and, when the flag is 1, a scan pattern value.
- The moving picture coding method or moving picture decoding method according to claim 1 or 2, wherein the memory compression parameter set includes a flag indicating a chroma component packing mode that determines whether a compressed data block contains one component or three components of the decoded picture.
- The moving picture coding method or moving picture decoding method according to claim 1 or 2, wherein the memory compression parameter set includes a flag indicating whether selective weighting of the coefficients is used and, when the flag is 1, a set of coefficient weighting flag values.
- The moving picture coding method or moving picture decoding method according to claim 1 or 2, wherein the memory compression parameter set includes a parameter indicating the target compressed data size output by the memory compression process.
- The moving picture coding method or moving picture decoding method according to claim 1 or 2, wherein the memory compression parameter set includes a parameter indicating the target bit precision of each restored image output by the memory decompression process.
- A moving picture coding method using adaptive memory compression, comprising:
selecting one memory compression scheme from among a plurality of memory compression schemes;
writing a memory compression scheme selection parameter into a header of a compressed video stream based on the selected memory compression scheme; and
determining whether the memory compression scheme selection parameter has a predetermined value,
wherein, when the memory compression scheme selection parameter has the predetermined value, reconstructed samples are compressed per pixel group using adaptive bit quantization,
when the memory compression scheme selection parameter does not have the predetermined value, reconstructed samples are compressed per pixel using a simple pixel bit right shift, and
the compressed samples are stored in a memory unit.
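A sketch of the mode selection in claim 11. The claim does not spell out "adaptive bit quantization"; here it is assumed to mean storing, per pixel group, the group minimum and a quantization step together with small fixed-width offsets, while the alternative branch is the per-pixel right shift the claim names explicitly.

```python
# Sketch of claim 11: compress reconstructed samples either per pixel group
# (assumed adaptive bit quantization) or per pixel (simple right shift).

ADAPTIVE_MODE = 1              # assumed value of the selection parameter
GROUP, OFFSET_BITS, SHIFT = 4, 4, 2

def compress_samples(samples, mode_param):
    if mode_param == ADAPTIVE_MODE:
        out = []
        for i in range(0, len(samples), GROUP):            # per pixel group
            group = samples[i:i + GROUP]
            lo = min(group)
            step = ((max(group) - lo) >> OFFSET_BITS) + 1  # quantization step for this group
            out.append((lo, step, [(s - lo) // step for s in group]))  # OFFSET_BITS-bit offsets
        return out
    return [s >> SHIFT for s in samples]                   # simple pixel bit right shift
```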
- A moving picture decoding method using adaptive memory compression, comprising:
reading a memory compression scheme selection parameter from a header of a compressed video stream; and
determining whether the memory compression scheme selection parameter has a predetermined value,
wherein, when the memory compression scheme selection parameter has the predetermined value, reconstructed samples are compressed per pixel group using adaptive bit quantization,
when the memory compression scheme selection parameter does not have the predetermined value, reconstructed samples are compressed per pixel using a simple pixel bit right shift, and
the compressed samples are stored in a memory unit.
- A moving picture coding and decoding method using adaptive memory decompression, comprising:
determining a memory compression scheme selection parameter;
retrieving compressed samples from a memory unit; and
determining whether the memory compression scheme selection parameter has a predetermined value,
wherein, when the memory compression scheme selection parameter has the predetermined value, reconstructed samples are generated by decompressing the compressed samples using adaptive bit inverse quantization,
when the memory compression scheme selection parameter does not have the predetermined value, reconstructed samples are generated by decompressing the compressed samples per pixel using a simple pixel bit left shift, and
inter-picture prediction is performed using the reconstructed samples.
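The corresponding restoration of claim 13, assuming the per-group representation sketched after claim 11 and reusing ADAPTIVE_MODE and SHIFT from that sketch: adaptive bit inverse quantization when the selection parameter has the predetermined value, a simple pixel bit left shift otherwise.

```python
# Sketch of claim 13: restore reconstructed samples from the memory unit.

def decompress_samples(stored, mode_param):
    if mode_param == ADAPTIVE_MODE:                        # adaptive bit inverse quantization
        out = []
        for lo, step, offsets in stored:                   # one (lo, step, offsets) per pixel group
            out.extend(lo + o * step for o in offsets)
        return out
    return [s << SHIFT for s in stored]                    # simple pixel bit left shift
```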
- A moving picture coding apparatus using block-based memory compression and decompression, comprising:
means for encoding a picture using inter-picture prediction;
means for performing entropy coding of the encoded picture;
means for decoding the encoded picture using inter-picture prediction;
means for determining a memory compression parameter set;
means for writing the memory compression parameter set into a header of a compressed video stream that includes the encoded picture;
means for compressing the decoded picture into fixed-size data blocks using the memory compression parameter set; and
means for decompressing the data blocks of the decoded picture to retrieve the image samples required for inter-picture prediction when encoding a subsequent picture.
- A moving picture decoding apparatus using block-based memory compression and decompression, comprising:
means for parsing a header of a compressed video stream to obtain a memory compression parameter set that configures the memory compression process;
means for performing entropy decoding of the compressed video stream;
means for decoding a picture from the compressed video stream using inter-picture prediction;
means for compressing the decoded picture into fixed-size data blocks using the parsed memory compression parameter set;
means for decompressing the data blocks of the decoded picture to generate the image samples required for inter-picture prediction when decoding a subsequent picture; and
means for decompressing the data blocks of the decoded picture to generate image samples for output.
- The moving picture coding apparatus or moving picture decoding apparatus according to claim 14 or 15, wherein the means for compressing the decoded picture comprises:
means for extracting an image sample block from the decoded picture;
means for transforming the image sample block into a coefficient block;
means for bit-shifting each coefficient in the coefficient block according to a predetermined set of coefficient weight values;
means for subtracting a DC prediction value from the first coefficient of the coefficient block;
means for scanning the coefficient block according to a scan pattern; and
means for encoding the scanned coefficients using bit-plane coding.
- The moving picture coding apparatus or moving picture decoding apparatus according to claim 14 or 15, wherein the means for decompressing the data blocks comprises:
means for decoding coefficients using bit-plane decoding;
means for adding offset bits to compensate for bit planes that are missing or were not decoded;
means for inverse-scanning the decoded coefficients according to a scan pattern to generate a coefficient block;
means for adding a DC value to the first coefficient of the coefficient block;
means for bit-shifting each coefficient in the coefficient block according to a predetermined set of coefficient shift values;
means for transforming the coefficient block into an image sample block;
means for bit-shifting each image sample in the image sample block by a predetermined pixel shift value; and
means for clipping each image sample value in the image sample block between a maximum value and a minimum value.
- The moving picture coding apparatus or moving picture decoding apparatus according to claim 14 or 15, wherein the memory compression parameter set includes a parameter indicating the image block size used for memory compression.
- The moving picture coding apparatus or moving picture decoding apparatus according to claim 14 or 15, wherein the memory compression parameter set includes a flag indicating whether an explicit scan pattern for the coefficients is used and, when the flag is 1, a scan pattern value.
- The moving picture coding apparatus or moving picture decoding apparatus according to claim 14 or 15, wherein the memory compression parameter set includes a flag indicating a chroma component packing mode that determines whether a compressed data block contains one component or three components of the decoded picture.
- The moving picture coding apparatus or moving picture decoding apparatus according to claim 14 or 15, wherein the memory compression parameter set includes a flag indicating whether selective weighting of the coefficients is used and, when the flag is 1, a set of coefficient weighting flag values.
- The moving picture coding apparatus or moving picture decoding apparatus according to claim 14 or 15, wherein the memory compression parameter set includes a parameter indicating the target compressed data size output by the memory compression process.
- The moving picture coding apparatus or moving picture decoding apparatus according to claim 14 or 15, wherein the memory compression parameter set includes a parameter indicating the target bit precision of each restored image output by the memory decompression process.
- A moving picture coding apparatus using adaptive memory compression, comprising:
means for selecting one memory compression scheme from among a plurality of memory compression schemes;
means for writing a memory compression scheme selection parameter into a header of a compressed video stream based on the selected memory compression scheme; and
means for determining whether the memory compression scheme selection parameter has a predetermined value,
wherein, when the memory compression scheme selection parameter has the predetermined value, reconstructed samples are compressed per pixel group using adaptive bit quantization,
when the memory compression scheme selection parameter does not have the predetermined value, reconstructed samples are compressed per pixel using a simple pixel bit right shift, and
the compressed samples are stored in a memory unit.
- A moving picture decoding apparatus using adaptive memory compression, comprising:
means for reading a memory compression scheme selection parameter from a header of a compressed video stream; and
means for determining whether the memory compression scheme selection parameter has a predetermined value,
wherein, when the memory compression scheme selection parameter has the predetermined value, reconstructed samples are compressed per pixel group using adaptive bit quantization,
when the memory compression scheme selection parameter does not have the predetermined value, reconstructed samples are compressed per pixel using a simple pixel bit right shift, and
the compressed samples are stored in a memory unit.
- A moving picture coding and decoding apparatus using adaptive memory decompression, comprising:
means for determining a memory compression scheme selection parameter;
means for retrieving compressed samples from a memory unit; and
means for determining whether the memory compression scheme selection parameter has a predetermined value,
wherein, when the memory compression scheme selection parameter has the predetermined value, reconstructed samples are generated by decompressing the compressed samples using adaptive bit inverse quantization,
when the memory compression scheme selection parameter does not have the predetermined value, reconstructed samples are generated by decompressing the compressed samples per pixel using a simple pixel bit left shift, and
inter-picture prediction is performed using the reconstructed samples.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010121023 | 2010-05-26 | ||
JP2010-121023 | 2010-05-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011148622A1 true WO2011148622A1 (en) | 2011-12-01 |
Family
ID=45003622
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/002895 WO2011148622A1 (en) | 2010-05-26 | 2011-05-24 | Video encoding method, video decoding method, video encoding device and video decoding device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2011148622A1 (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001204030A (en) * | 1999-11-11 | 2001-07-27 | Canon Inc | Image processor, image processing method and storage medium |
JP2003348592A (en) * | 2002-05-23 | 2003-12-05 | Matsushita Electric Ind Co Ltd | Image data storage device, encoding device, decoding device, and compression and expansion system |
JP2009130931A (en) * | 2007-11-19 | 2009-06-11 | Samsung Electronics Co Ltd | Method and apparatus for efficiently encoding and/or decoding moving image using image resolution adjustment |
JP2010098352A (en) * | 2008-10-14 | 2010-04-30 | Panasonic Corp | Image information encoder |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2015146473A (en) * | 2014-01-31 | 2015-08-13 | 株式会社アクセル | Encoding device and decoding device |
JP2015146474A (en) * | 2014-01-31 | 2015-08-13 | 株式会社アクセル | Encoding device and decoding device |
WO2018173432A1 (en) * | 2017-03-21 | 2018-09-27 | シャープ株式会社 | Prediction image generation device, moving image decoding device, and moving image encoding device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6238215B2 (en) | Decoding method and decoding apparatus | |
JP6498811B2 (en) | Encoding method and encoding apparatus | |
JP6132270B2 (en) | Encoding / decoding device | |
JP6210396B2 (en) | Decoding method and decoding apparatus | |
JP6288423B2 (en) | Moving picture encoding method, moving picture encoding apparatus, moving picture decoding method, and moving picture decoding apparatus | |
JP6179813B2 (en) | Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding / decoding device | |
JP6489337B2 (en) | Arithmetic decoding method and arithmetic coding method | |
WO2012098868A1 (en) | Image-encoding method, image-decoding method, image-encoding device, image-decoding device, and image-encoding/decoding device | |
WO2011148622A1 (en) | Video encoding method, video decoding method, video encoding device and video decoding device | |
WO2012014472A1 (en) | Moving image coding method, moving image coding device, moving image decoding method, and moving image decoding device | |
WO2012077347A1 (en) | Decoding method | |
WO2012035766A1 (en) | Image decoding method, image encoding method, image decoding device and image encoding device | |
WO2012105267A1 (en) | Image coding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11786325; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 11786325; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: JP |