JP3681835B2 - Image encoding apparatus, image decoding apparatus, and encoding/decoding system

Image encoding apparatus, image decoding apparatus, and encoding/decoding system

Info

Publication number
JP3681835B2
JP3681835B2 (application JP26966996A)
Authority
JP
Japan
Prior art keywords
image data
image
format
output
format conversion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP26966996A
Other languages
Japanese (ja)
Other versions
JPH09238366A (en)
Inventor
篤道 村上
光太郎 浅井
隆浩 福原
Original Assignee
三菱電機株式会社 (Mitsubishi Electric Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP7-340871 (JP34087195)
Application filed by 三菱電機株式会社 (Mitsubishi Electric Corporation)
Application number JP26966996A
Publication of JPH09238366A
Application granted
Publication of JP3681835B2
Anticipated expiration
Legal status: Expired - Fee Related

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85: using pre-processing or post-processing specially adapted for video compression
    • H04N19/10: using adaptive coding
    • H04N19/169: using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: the unit being a colour or a chrominance component
    • H04N19/50: using predictive coding
    • H04N19/587: involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
    • H04N19/60: using transform coding
    • H04N19/61: using transform coding in combination with predictive coding
    • H04N19/30: using hierarchical techniques, e.g. scalability

Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to an image encoding apparatus and an image decoding apparatus that can be used in systems which perform high-efficiency encoding or decoding of images in order to transmit or store them efficiently.
[0002]
[Prior art]
A representative conventional high-efficiency encoding method is MPEG2, an international standard developed in ISO/IEC JTC1/SC29/WG11. For example, the April 1995 issue of the “Journal of the Television Society (Image Information Engineering and Broadcasting Technology)” features MPEG as a special topic, and on pp. 29-60 of the same issue the MPEG2 encoding method is introduced under the heading “3-2 Video Compression”. The conventional high-efficiency encoding method is described below based on this explanation.
[0003]
FIG. 25 is an explanatory diagram of the image formats referred to in the above description, showing the sample-density ratios of the luminance signal and the color difference components. MPEG2 has three formats, 4:2:0, 4:2:2, and 4:4:4, but the format is not changed dynamically; encoding or decoding is performed with the format fixed to one of them.
The 4:4:4 format is defined in MPEG2 as of November 1995, but it does not belong to any of the classes called profiles and is in practice not used. In the 4:2:0 and 4:2:2 formats, the sample density of the color difference components is lower than the sample density of the luminance component. This exploits the fact that the human ability to discriminate spatial resolution is lower for the color difference components than for luminance, in order to obtain an information compression effect.
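To make the sample-density ratios concrete, the following short sketch (added here for illustration; the subsampling factors are simply the standard definitions of these formats, not text from the patent) counts the luminance and chrominance samples in one 16 × 16 macroblock for each format.

```python
# Illustrative only: samples per 16x16 macroblock for each chroma format.
CHROMA_FACTORS = {          # (horizontal, vertical) subsampling of Cb and Cr
    "4:2:0": (2, 2),
    "4:2:2": (2, 1),
    "4:4:4": (1, 1),
}

def samples_per_macroblock(fmt, size=16):
    """Return (luma, chroma) sample counts for one size x size macroblock."""
    h, v = CHROMA_FACTORS[fmt]
    luma = size * size
    chroma = 2 * (size // h) * (size // v)   # Cb + Cr
    return luma, chroma

for fmt in CHROMA_FACTORS:
    y, c = samples_per_macroblock(fmt)
    print(f"{fmt}: Y={y}, Cb+Cr={c}, total={y + c}")
# 4:2:0 -> 256 + 128 = 384, 4:2:2 -> 256 + 256 = 512, 4:4:4 -> 256 + 512 = 768
```

The counts make the trade-off explicit: moving from 4:4:4 to 4:2:0 halves the total number of samples to be encoded.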
[0004]
FIG. 26 is a basic configuration diagram of the MPEG encoder described above. In the figure, 1 is an A/D conversion unit, 28 is a format conversion unit, 29 is a screen rearrangement unit, 16 is an intra (intra-frame) / inter (inter-frame) switching selector, 4 is a DCT (Discrete Cosine Transform) unit, 5 is a quantization unit, 6 is a variable length encoding unit, 7 is a transmission buffer, and 8 is a rate control unit. Further, 11 is an inverse quantization unit, 12 is an inverse DCT unit, 17 is an adder, 18 is a frame memory, and 19 is a motion compensation unit; together these form the additional loop that constitutes the predictive encoding means.
FIG. 27 is a basic configuration diagram of the MPEG decoder shown in the same explanation. In the figure, 9 is a reception buffer, 10 is a variable length decoding unit, 11 is an inverse quantization unit, 12 is an inverse DCT unit, 30 is a format conversion unit, and 14 is a D/A conversion unit. Reference numeral 18 denotes a frame memory, 24 denotes a motion compensation prediction unit, and 17 denotes an adder; these constitute the predictive decoding means. Also, 104 is a transform coefficient produced by the DCT, 105 is a quantization index of the transform coefficients, 107 is an encoded bit stream, 108 is a signal indicating the amount of information generated, 109 is a quantization index of transform coefficients after variable length decoding, 110 is the inversely quantized transform coefficient, 116 is the input image data, 117 is the prediction error image data, 118 is the image data returned to the pixel space domain by inverse DCT, 119 is the predicted image data, 120 is the decoded image data, 125 is motion compensation prediction data, and 126 is motion vector information.
[0005]
The operation of the encoder will be described with reference to FIG.
The input image signal is digitized by the A/D conversion unit 1. The input picture is encoded by motion compensated prediction + DCT encoding. The difference between the input image data 116 and the motion compensated predicted image data 125 generated by motion prediction from the reference screen is taken to obtain the prediction error signal 117. The prediction error signal is converted into transform coefficients 104 in the spatial frequency domain by the DCT unit 4 in units of 8 pixel × 8 line blocks, and quantized by the quantization unit 5.
When intra coding is performed without motion compensation prediction, the input image data 116 is DCT-coded as it is. This switching is performed by the selector 16. For later use as a reference screen for motion compensation prediction, the quantized information 105 is inversely quantized by the inverse quantization unit 11, subjected to inverse DCT by the inverse DCT unit 12, and added to the motion compensated prediction signal 119 by the adder 17; that is, the image is locally decoded and stored in the frame memory 18.
The quantized 8 × 8 DCT coefficients are scanned in order from the low frequency components to become one-dimensional information, and are then variable-length encoded by the variable-length encoding unit 6 together with other encoded information such as motion vectors. To keep the generated code amount constant, it is common to monitor the output buffer 7, obtain the generated code amount 108, and perform quantization control through feedback by the rate control unit 8. The output of the buffer 7 is the encoded bit stream 107.
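The encoding loop described above can be summarised in a simplified sketch. This is not the patent's implementation: motion estimation, variable-length coding and rate control are omitted, a zero motion vector and a uniform quantizer step q are assumed, and scipy is used for the 8 × 8 DCT.

```python
# A much-simplified sketch of the hybrid MPEG-style loop described above
# (8x8 DCT + quantization + local decoding into a frame memory).
import numpy as np
from scipy.fft import dctn, idctn

def encode_frame(frame, reference, q=16, intra=False):
    """Return (quantization indices, locally decoded frame) for one greyscale frame."""
    h, w = frame.shape
    decoded = np.zeros_like(frame, dtype=np.float64)
    indices = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = frame[y:y+8, x:x+8].astype(np.float64)
            pred = np.zeros((8, 8)) if intra else reference[y:y+8, x:x+8]
            err = block - pred                   # prediction error (117)
            coeff = dctn(err, norm='ortho')      # DCT coefficients (104)
            idx = np.round(coeff / q)            # quantization indices (105)
            indices.append(idx)
            # Local decoding loop: dequantize, inverse DCT, add the prediction.
            decoded[y:y+8, x:x+8] = idctn(idx * q, norm='ortho') + pred
    return indices, decoded                      # decoded frame goes to the frame memory (18)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (16, 16)).astype(np.float64)
cur = np.clip(ref + rng.normal(0, 2, ref.shape), 0, 255)
idx, local = encode_frame(cur, ref)
print(np.abs(local - cur).max())   # reconstruction error bounded by the quantizer step
```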
[0006]
The operation of the decoder will be described with reference to FIG.
The decoding process is basically the reverse of the encoder's operation. First, the encoded bit stream 107 is stored in the buffer 9. Data in the buffer 9 is read and decoded by the variable length decoding unit 10. In this process, the DCT coefficient information 109, the motion vector information 126, and the like are decoded and separated. The decoded 8 × 8 quantized DCT coefficients 109 are restored to the DCT coefficients 110 by the inverse quantization unit 11 and converted into the pixel space data 118 by the inverse DCT unit 12. For intra-coded data, the decoded image is obtained at this stage.
When motion compensation prediction is used, the image is decoded by adding the motion compensated prediction image data 119 generated by motion compensation prediction from the reference screen. The decoded image is stored in the frame memory 18 for use as a reference screen in subsequent decoding, as necessary.
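For comparison with the encoder sketch above, the corresponding per-block decoding step, under the same simplifying assumptions (uniform quantizer, prediction supplied externally), is only a few lines:

```python
import numpy as np
from scipy.fft import idctn

def decode_block(indices, prediction, q=16):
    """Inverse quantization (110) + inverse DCT (118) + addition of the prediction (120)."""
    coeff = np.asarray(indices, dtype=np.float64) * q
    return idctn(coeff, norm='ortho') + prediction

# For an intra block, prediction is simply an all-zero 8x8 array.
print(decode_block(np.zeros((8, 8)), np.full((8, 8), 100.0)))
```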
[0007]
[Problems to be solved by the invention]
The above example is representative of the conventional method. In this example, the coding of the input image is based on block-unit DCT, and the sample density ratio of the luminance and color difference components is statically fixed, for example at 4:2:0 or 4:2:2. This gives rise to the following problems. First, degradation of the image quality due to compression appears in units of blocks, because noise produced in a particular transform coefficient by quantization is spread over the whole block by the inverse DCT. Furthermore, this degradation tends to appear most conspicuously in the color difference components, because the sample density of the color difference components is generally lower than that of the luminance component. Increasing the sample density of the color difference components alleviates the conspicuous color noise, but it also increases the number of samples to be encoded and is therefore disadvantageous for compression efficiency; herein lies a dilemma.
[0008]
The present invention has been made to solve the above problems, and its object is to obtain an image encoding apparatus and an image decoding apparatus that reduce the color noise which becomes noticeable when the compression rate is raised, and that yield a higher-quality encoded image without lowering the compression efficiency.
[0009]
[Means for Solving the Problems]
  An image encoding apparatus according to the present invention performs format conversion on a digitized input image, quantizes the format-converted image in a quantization unit, encodes the quantized image data in an encoding unit, and outputs an encoded bitstream. The apparatus comprises:
  a multiple format conversion unit that, for the format conversion, converts the image into image data of a plurality of predetermined spatial resolutions of the luminance and color difference signals;
  an image determination unit that outputs selection information indicating whether the selection result is the image data converted by the multiple format conversion unit or the original image data before conversion, and passes the selected image data to the subsequent stage; and
  a feedback encoding unit including a second multiple format conversion unit that inversely quantizes the quantized image data, adds the motion prediction difference, and converts the result into image data of a plurality of predetermined spatial resolutions; a memory that selects and stores, as reference image data, either the inversely quantized image signal or the output of the second multiple format conversion unit; and a third multiple format conversion unit that converts the stored reference image data into image data of a plurality of predetermined spatial resolutions in order to obtain the motion prediction difference, wherein either the reference image data or the output of the third multiple format conversion unit is selected and subtracted, as feedback, from the original image data or from the output after format conversion;
  and the encoding apparatus is characterized in that it selects and transmits either the image data converted by the multiple format conversion unit or the original image data according to the selection information output from the image determination unit, and also transmits the selection information.
[0011]
  An image decoding apparatus according to the present invention includes a decoding unit that decodes an input encoded bitstream, and an inverse quantization unit and an inverse transform unit that inversely quantize and inversely transform the data corresponding to the transform coefficients among the decoded data, and restores digital image data from the inversely quantized and inversely transformed image data. The apparatus comprises:
  a multiple format conversion unit that takes the inversely transformed image data as input and converts it into one of a plurality of spatial resolutions of the predetermined luminance and color difference signals;
  and the image decoding apparatus extracts the selection information included in the encoded bitstream and, based on this selection information, restores the digital image data from either the output of the multiple format conversion unit or the image data before conversion by the multiple format conversion unit.
[0012]
  Further, feedback prediction means is provided which stores the inversely transformed image data as reference image data after a predetermined format conversion and adds the restored motion prediction error to the original image data or to the output after format conversion.
  The feedback prediction means includes a second multiple format conversion unit that converts the reproduced image data into one of the spatial resolutions of the predetermined luminance and color difference signals in order to obtain the reference image data, and a third multiple format conversion unit that converts the prediction error into one of the predetermined spatial resolutions in order to add the output of the feedback prediction means to the original image data or to the output after format conversion.
[0013]
Furthermore, the image state determination unit compares the state of the color difference signal in the input image data or in the quantized image data with a set criterion, and selects the output of the corresponding spatial resolution from among the outputs of the multiple format conversion unit.
[0014]
Furthermore, the image state determination unit compares the state of the luminance signal in the input image data or in the quantized image data with a set criterion, and selects the output of the corresponding spatial resolution from among the outputs of the multiple format conversion unit.
[0015]
Furthermore, the image state determination unit compares the value of the motion vector from the motion compensation prediction unit with a set criterion, and selects the output of the corresponding spatial resolution from among the outputs of the multiple format conversion unit.
[0016]
Still further, the image state determination unit compares a prediction error value, which is the difference between the image data of a predetermined spatial resolution and the motion compensated prediction signal, with a set criterion, and selects the output of the corresponding spatial resolution from among the outputs of the multiple format conversion unit.
[0017]
Still further, the image state determination unit compares the quantization step size, derived from the code amount generated for the encoded bitstream, with a set criterion, and selects the output of the corresponding spatial resolution from among the outputs of the multiple format conversion unit.
[0018]
Still further, the image state determination unit adds together a plurality of values among the state of the color difference signal or luminance signal in the input image data or the quantized image data, the value of the motion vector from the motion compensation prediction unit, the prediction error value, and the quantization step size, compares the result with the set criterion, and selects the output of the corresponding spatial resolution from among the outputs of the multiple format conversion unit.
[0019]
In addition to the basic configuration, an image state determination unit corresponding to the detection of a color difference signal, a luminance signal, or a change in motion in the image encoding device on the transmitting side is provided; a change in the state of the input encoded bitstream is detected using the same set criterion as on the transmitting side, and one of a plurality of spatial resolutions is selected to obtain the decoded image.
[0020]
  An image encoding/decoding system according to the present invention comprises an image encoding device that performs format conversion on a digitized input image, quantizes and encodes it, and outputs an encoded bitstream, the image encoding device including:
  (1) a multiple format conversion unit that, for the format conversion, converts the image into image data of a plurality of predetermined spatial resolutions of the luminance and color difference signals;
  (2) an image determination unit that outputs selection information indicating whether the selection result is the image data converted by the multiple format conversion unit or the original image data before conversion, and passes the selected image data to the subsequent stage; and
  (3) a feedback encoding unit including a second multiple format conversion unit that inversely quantizes the quantized image data, adds the motion prediction difference, and converts the result into image data of a plurality of predetermined spatial resolutions; a memory that selects and stores, as reference image data, either the inversely quantized image signal or the output of the second multiple format conversion unit; and a third multiple format conversion unit that converts the stored reference image data into image data of a plurality of predetermined spatial resolutions in order to obtain the motion prediction difference, wherein either the reference image data or the output of the third multiple format conversion unit is selected and subtracted, as feedback, from the original image data or from the output after format conversion;
  the image encoding device selecting and transmitting, according to the selection information output from the image determination unit, either the image data converted by the multiple format conversion unit or the original image data, together with the selection information;
  and an image decoding device that decodes the encoded bitstream sent from the image encoding device, inversely quantizes and inversely transforms the data corresponding to the transform coefficients among the decoded data, and restores digital image data from the inversely quantized and inversely transformed image data, the image decoding device including:
  (4) a multiple format conversion unit that takes the inversely transformed image data as input and converts it into one of a plurality of predetermined spatial resolutions of the luminance and color difference signals;
  the image decoding device extracting the selection information included in the encoded bitstream and, based on this selection information, restoring the digital image data from either the output of the multiple format conversion unit or the image data before conversion by the multiple format conversion unit.
[0023]
DETAILED DESCRIPTION OF THE INVENTION
Embodiment 1 FIG.
Specific examples of applications of the apparatus of the present invention include digital broadcasting systems carried over satellite, terrestrial, or wired communication networks, digital video disks, and the like.
Embodiments of a high-efficiency image encoder and decoder according to the present invention will be described below with reference to the drawings. FIG. 1 is a configuration diagram of a basic image coding apparatus without a predictive coding loop including motion compensation. In the figure, the new elements are the local format conversion unit (multiple format conversion unit) 2 and the (image state) determination unit 3. The other elements, the A/D conversion unit 1, DCT unit 4, quantization unit 5, variable length coding unit 6, buffer 7 and rate control unit 8, are the same as in the prior art. Also, 101 is digitized image data, 102 is locally format-converted image data, 103 is dynamically switched image data, 104 is a DCT transform coefficient, 105 is a quantization index of the transform coefficients (quantized image data), 106 is a signal indicating which format is used, 107 is an encoded bit stream, and 108 is a signal indicating the amount of information generated.
[0024]
Next, the operation will be described.
In this embodiment, a DCT encoding method is used. The input image signal is digitized by the A/D conversion unit 1 and then subjected to format conversion. Assume that the format of the image data 101 is the 4:4:4 format shown in FIG. 25, that is, the sample densities of the luminance component and the color difference components are equal. This image data is converted into another format, for example the 4:2:0 format image data shown in FIG. 25 (a), by the local format conversion (multiple format conversion) unit 2. Since the 4:2:0 format has a lower sample density of the color difference components than the 4:4:4 format, the total number of samples to be encoded is reduced and the compression efficiency is increased; on the other hand, color noise may spread over a wide area of the screen. The 4:4:4 format image data and the 4:2:0 format image data are encoded while being switched dynamically by the selector 3, for example in units of one block or of a plurality of blocks. FIG. 2 shows the configuration of a macroblock, which consists of four 8 pixel × 8 line blocks of the luminance component (Y) and the blocks of the blue and red color difference components (Cb, Cr) occupying the same position; a macroblock is one example of a unit for switching.
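A minimal sketch of this macroblock-level switching idea follows, assuming a simple variance-based decision function and 2 × 2 averaging for the 4:4:4 to 4:2:0 conversion; the function names, the threshold, and the decision rule are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def downsample_420(c):
    """4:4:4 -> 4:2:0 chroma plane by 2x2 averaging."""
    return c.reshape(c.shape[0] // 2, 2, c.shape[1] // 2, 2).mean(axis=(1, 3))

def choose_format(cb, cr, threshold=100.0):
    """Keep full chroma resolution where the color activity (variance) is high."""
    return "4:4:4" if max(cb.var(), cr.var()) > threshold else "4:2:0"

def encode_macroblock(y, cb, cr):
    fmt = choose_format(cb, cr)              # selection information (106)
    if fmt == "4:2:0":
        cb, cr = downsample_420(cb), downsample_420(cr)
    return fmt, y, cb, cr                    # fmt is multiplexed into the bitstream

rng = np.random.default_rng(1)
y = rng.integers(0, 256, (16, 16)).astype(float)
flat_c = np.full((16, 16), 128.0)                        # low color activity
busy_c = rng.integers(0, 256, (16, 16)).astype(float)    # high color activity
print(encode_macroblock(y, flat_c, flat_c)[0])           # 4:2:0
print(encode_macroblock(y, busy_c, busy_c)[0])           # 4:4:4
```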
[0025]
FIG. 3 to FIG. 6 show practical configuration examples of the local format conversion unit 2. In FIG. 3, the input image data 101, a multiplexed luminance / color difference signal, is separated into a luminance signal 132 and a color difference signal 133 by the luminance / color difference signal separator 31. The color difference signal 133 is then downsampled in the color difference signal downsampler 32 or upsampled in the color difference signal upsampler. The color difference signal 134 whose format has been converted by this processing is multiplexed together with the luminance signal 132 by the luminance / color difference signal multiplexer 34, and the multiplexed signal is sent out as the multiple format conversion unit output signal 102. In the above embodiment, the 4:4:4 format is converted into the 4:2:0 format in the local format conversion unit 2, so the downsampler operates and a downsampled signal is output.
[0026]
Since the configurations of the luminance / color difference signal separator 31 and the luminance / color difference signal multiplexer 34 are well known, their description is omitted here; the detailed operation of the color difference signal downsampler 32 is described instead.
The color difference signal 133 separated by the luminance / color difference signal separator 31 is separated down to the pixel level. When downsampling from the 4:4:4 format of FIG. 25 (c) to the 4:2:0 format of FIG. 25 (a), the Cb and Cr signals are both reduced from 16 × 16 pixels to 8 × 8 pixels. For example, when the color difference signal of each pixel is downsampled taking the adjacent left and right pixel signals into account, the average value calculation unit indicated by M in FIG. 3 takes two pixel values as inputs and outputs a new pixel value, yielding the 8 × 8 pixel data whose number of samples is halved. In the example of downsampling two pixels into one, the average value is calculated by the average value calculation unit by multiplying the first pixel by the coefficient w1 and the second pixel by the coefficient w2; if the two pixels are p1 and p2,
average value = (p1 × w1 + p2 × w2) / (w1 + w2).
Next, the average values output from the respective average value calculation units are multiplexed in the color difference signal multiplexing unit to form the output of the downsampler 32.
The filter coefficients w can be made variable, and the downsampling is not limited to 1/2; arbitrary downsampling such as 1/3 or 1/4 can be performed.
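The weighted two-pixel averaging just described can be sketched as follows; treating a single row of chroma samples and using the default weights w1 = w2 = 1 are assumptions made only for illustration.

```python
import numpy as np

def downsample_2to1(row, w1=1.0, w2=1.0):
    """Average each pair of neighbouring chroma samples: (p1*w1 + p2*w2)/(w1 + w2)."""
    p1, p2 = row[0::2], row[1::2]            # adjacent left/right pixels
    return (p1 * w1 + p2 * w2) / (w1 + w2)

row = np.array([100.0, 110.0, 120.0, 130.0])
print(downsample_2to1(row))                  # [105. 125.]
print(downsample_2to1(row, w1=3, w2=1))      # weight the left pixel more: [102.5 122.5]
```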
[0027]
The upsampling configuration in FIG. 3 is an example in which the signal is upsampled by a factor of two, that is, each original pixel is expanded into two pixels.
First, since every pixel separated in the luminance / color difference signal separator 31 is used repeatedly, it is output on two paths (because the same pixel may simply be used twice, many other configurations are possible). Next, the color difference signal separated in the color difference signal separation unit is output from the black-circle portions, and the average value is calculated in the average value calculation units. As indicated by the dotted lines, an averaged color difference signal value can be obtained by supplying pixel values from a plurality of original pixels for each enlarged new pixel. In this case the filter coefficients shown in the configuration example of the downsampler 32 are not applied, but they can of course be used. The average pixel signals output from the respective average value calculation units M are multiplexed in the luminance / color difference signal multiplexer 34 for each predetermined block, and become the output of the local format conversion unit 2.
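Correspondingly, a sketch of the 1:2 upsampling described above: each original pixel is reused, and each inserted sample is the average of two neighbouring original pixels. The exact interpolation pattern of FIG. 3 may differ; this is an assumption made only for illustration.

```python
import numpy as np

def upsample_1to2(row):
    """Each original pixel is kept; the inserted pixel is the mean of its two neighbours."""
    row = np.asarray(row, dtype=float)
    out = np.empty(2 * len(row))
    out[0::2] = row                                   # original samples reused
    out[1::2] = (row + np.roll(row, -1)) / 2.0        # averaged in-between samples
    out[-1] = row[-1]                                 # no right neighbour at the edge
    return out

print(upsample_1to2([100.0, 120.0, 140.0]))           # [100. 110. 120. 130. 140. 140.]
```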
The determination unit 3, which selects between the image data 102 whose spatial resolution has been changed by upsampling or downsampling the color difference signals in this way and the image data 101 of the original spatial resolution, is not described here in terms of configuration and operation; the following embodiments present various inputs to the determination unit 3, and its operation is described there.
In the above embodiment, both the input and the output of the local format conversion unit 2 are signals in which the luminance signal and the color difference signal are multiplexed. However, a configuration in which the two signals flow through the circuit separately is also possible; in that case the luminance / color difference signal separator 31 and the color difference signal multiplexer 34 shown in FIG. 3 are unnecessary, and the configuration of FIG. 4 can be used. Furthermore, depending on the nature of the circuit, there may be cases in which only the luminance / color difference signal separator 31 is necessary, or only the color difference signal multiplexer 34 is necessary; FIGS. 5 and 6 correspond to these cases. The configuration of the local format conversion unit 2 is exactly the same in the following embodiments of the present invention.
[0028]
Regardless of which format is selected, the image data is converted into the transform coefficient 104 in the spatial frequency domain by DCT 4 in units of blocks of 8 pixels × 8 lines, and the quantization unit 5 quantizes the transform coefficient. The quantized 8 × 8 DCT coefficient 105 is scanned in order from the low frequency component to become one-dimensional information, and is then variable-length encoded by the variable-length encoding unit 6. In each unit for switching the format, information 106 indicating which format is selected is multiplexed as a part of the encoded information. The encoded data is temporarily stored in the buffer 7 and then output as an encoded bit stream 107.
When the generated code amount is to be kept constant, the generated code amount 108 is obtained by monitoring the output buffer 7 and quantization is controlled by feedback.
[0029]
Although DCT encoding is used in the present embodiment, the present invention can of course be applied to other encoding schemes such as subband encoding. Further, in the present embodiment a configuration was described in which the (image state) determination unit switches between data that has undergone local format conversion and data that has not; a configuration in which the local format conversion itself switches its processing content is of course essentially no different.
[0030]
If the color difference signal is to be processed with high accuracy at least internally, and the processing capability or the bit rate is sufficient, the apparatus may be fixed so that the image signal 102 after the local format conversion is always an upsampled version of the input image 101 (in this case the selected image signal 103 is the same signal).
Normally at least one bit is required as the output selection bit of the image state determination unit 3, but with such a fixed output the selection bit is unnecessary, and a highly accurate color difference signal is always obtained.
The configuration in this case is shown in FIG. 7. In the image encoding device of FIG. 7, the format conversion signal 130 is output from the local format conversion unit 2 and is input to the DCT unit 4 for encoding. The same idea applies to the other embodiments below; besides switching the format signal with a selector in the encoder / decoder, the apparatus can also be configured in this fixed manner.
[0031]
Embodiment 2. FIG.
FIG. 8 is a block diagram of an image coding apparatus provided with predictive coding means including motion compensation. In the figure, the new elements are the second local format conversion unit 20, the selector 21 at the output of the determination unit 3, the third local format conversion unit 22, and the selector 23 at the output of the determination unit 3. The other elements, the subtractor 15, the intra (intra-frame) / inter (inter-frame) switching selector 16, the adder 17, the frame (image) memory 18, and the motion vector estimation and motion compensation prediction unit 19, are equivalent to the conventional ones.
Also, 116 is image data whose format has been adaptively changed, 117 is prediction error image data, 118 is image data returned to the pixel space domain by inverse DCT, 119 is predicted image data, 120 is decoded image data, 121 is decoded image data subjected to local format conversion, 122 is decoded image data whose format has been unified, 123 is image data after motion compensation read from the motion compensation prediction unit 19, 124 is locally format-converted image data, 125 is motion compensation prediction data, 126 is motion vector information, and 127 is a signal indicating which format is used locally. The other numbers are the same as those already described.
[0032]
Next, the operation will be described.
In this embodiment, motion compensated prediction and DCT coding are used. The input image signal is digitized by the A/D conversion unit 1 and then subjected to format conversion. Assume that the format of the image data 101 is the 4:4:4 format shown in FIG. 25, that is, the sample densities of the luminance component and the color difference components are equal. The image data 101 is converted into another format, for example the 4:2:0 format image data 102. Since the 4:2:0 format has a lower sample density of the color difference components than the 4:4:4 format, the total number of samples to be encoded is reduced and the compression efficiency is increased; on the other hand, color noise may spread over a wide area of the screen. Basically, the 4:4:4 format image data and the 4:2:0 format image data are encoded while being switched dynamically by the determination unit 3, for example in units of one block or of a plurality of blocks.
[0033]
In order to obtain the prediction error signal 117 by taking the difference between the input image data 116 selected by the determination unit 3 and the motion compensated prediction image data 125 generated by motion prediction from the reference screen, the format of the input image data 116 and the format of the motion compensated prediction image data 125 must be the same. Therefore, the format of the motion compensated prediction image data read from the frame memory 18 is also aligned, using the third local format conversion unit 22 and the selector 23. For example, if the format of the reference image stored in the frame memory 18 is the efficiency-oriented 4:2:0 format, the third local format conversion 22 is a conversion in the direction of increasing the sample density.
[0034]
After the formats have been matched and the prediction error signal 117 has been obtained, the image data 103 is converted into the transform coefficients 104 in the spatial frequency domain by the DCT unit 4 in units of 8 pixel × 8 line blocks and quantized. For later use as a reference screen for motion compensation prediction, the quantized information 105 is inversely quantized by the inverse quantization unit 11 and subjected to inverse DCT to obtain the image data 118, to which the predicted image data (motion compensated prediction signal) 119 is added by the adder 17; that is, the image is locally decoded and stored in the frame memory 18. At this time, in order to unify the format stored in the frame memory 18, local format conversion is performed by the second local format conversion unit 20 and the selector 21 as necessary. The quantized 8 × 8 DCT coefficients 105 are scanned in order from the low frequency components to become one-dimensional information, and are then variable-length encoded by the variable-length encoding unit 6. Information 127 indicating which format is selected in each switching unit is multiplexed as a part of the encoded information.
When the generated code amount is to be kept constant, the generated code amount 108 is obtained by monitoring the output buffer 7 and quantization is controlled by feedback from the rate control unit 8.
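A sketch of the format-alignment step described above: before the prediction error is formed, the chroma of the reference block read from the frame memory (assumed here to be stored in 4:2:0, the efficiency-oriented case mentioned in the text) is brought to the format selected for the current macroblock. The nearest-neighbour upsampler standing in for the third local format conversion unit 22 is an assumption for illustration.

```python
import numpy as np

def upsample_420_to_444(c):
    """Nearest-neighbour chroma upsampling, standing in for local format conversion 22."""
    return np.repeat(np.repeat(c, 2, axis=0), 2, axis=1)

def prediction_error(cur_chroma, ref_chroma_420, selected_format):
    # Align the reference chroma with the format chosen for this macroblock,
    # then form the prediction error (117) as a plain difference.
    ref = upsample_420_to_444(ref_chroma_420) if selected_format == "4:4:4" else ref_chroma_420
    return cur_chroma - ref

ref_420 = np.full((8, 8), 120.0)                 # reference chroma stored in 4:2:0
cur_444 = np.full((16, 16), 123.0)               # current macroblock encoded in 4:4:4
print(prediction_error(cur_444, ref_420, "4:4:4").mean())   # 3.0
```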
[0035]
Although DCT encoding is used in the present embodiment, the present invention can of course be applied to other encoding schemes such as subband encoding. Further, in the present embodiment a configuration was described in which the determination unit 3 switches between data that has undergone local format conversion and data that has not; a configuration in which the local format conversion itself switches its processing content is of course essentially no different.
[0036]
Embodiment 3 FIG.
FIG. 9 is a configuration diagram of a basic image decoding apparatus without a predictive decoding loop. In the figure, the new element is the local format conversion (multiple format conversion) unit 13. The determination unit 3a in this decoding apparatus does not itself determine the image state, but selects the output using the selection information 113 as input. The other elements, the buffer 9, variable length decoding unit 10, inverse quantization unit 11, inverse DCT unit 12, and D/A conversion unit 14, are the same as in the prior art.
Also, 109 is variable length decoded quantization index information, 110 is inverse quantized transform coefficient, 111 is image data returned to the pixel space region by inverse DCT, 112 is image data subjected to local format conversion, Reference numeral 113 denotes information indicating which format is selected, which corresponds to the selection information 106 on the encoding device side, 114 is image data with a unified format, and 115 is a reproduced image signal. Other than the above, the number is the same as that already described.
[0037]
Next, the operation will be described.
The decoding apparatus according to this embodiment is a decoding apparatus corresponding to the image encoding apparatus described in the first embodiment. First, the encoded bit stream 107 is stored in the buffer 9. Data in the buffer 9 is read, and variable length decoding is performed in the variable length decoding unit 10. In this process, DCT coefficient information 109, information 113 indicating what format is selected in each unit formed by a block or a plurality of blocks, and the like are decoded and separated. The decoded 8 × 8 quantized DCT coefficient 109 is restored to the DCT coefficient 110 by the inverse quantization unit 11 and converted to the pixel space data 111 by the inverse DCT unit 12.
[0038]
Before output as a decoded image, the determination unit 3a dynamically switches whether the output of the local format conversion unit 13, which unifies the screen format, is used, according to, for example, the 1-bit information 113 indicating either the locally format-converted side or the original image signal format side, and the decoded image 114 is obtained. Finally, a reproduced image signal is obtained through the D/A conversion unit 14.
[0039]
Embodiment 4 FIG.
FIG. 10 is a block diagram of an image decoding apparatus provided with predictive decoding means including motion compensation. In the figure, the new elements are the second local format conversion unit 20, the third local format conversion unit 22, and the selectors 21 and 23 shown in the previous embodiment. Reference numeral 24 denotes a motion compensation prediction unit. Reference numeral 128 denotes decoded image data whose format has been locally converted. Other than the above, the number is the same as that already described.
[0040]
Next, the operation will be described.
The decoding apparatus according to this embodiment is a decoding apparatus corresponding to the image encoding apparatus described in the second embodiment. First, the input encoded bit stream 107 is stored in the buffer 9. Data in the buffer 9 is read, and variable length decoding is performed by the variable length decoding unit 10. In this process, DCT coefficient information 109, motion vector information 126, information 127 indicating what format is selected in each unit formed by a block or a plurality of blocks are decoded and separated. The decoded 8 × 8 quantized DCT coefficient 109 is restored to the DCT coefficient 110 by the inverse quantization unit 11 and converted into the pixel space data 118 by the inverse DCT unit 12.
When motion compensated prediction is performed, an image is decoded by addition by the adder 17 of the pixel space data 118 and the motion compensated predicted image data 117 generated by motion compensated prediction from the reference screen. The image 120 is basically stored in the frame memory 18 for use as a reference screen in the subsequent decoding process as necessary.
[0041]
In order to add the decoded difference pixel space data 118 and the motion compensated predicted image data 117 generated by motion prediction from the reference screen, the format of the decoded difference pixel space data 118 and the format of the motion compensated predicted image data 117 must be the same. Therefore, the motion compensated prediction image data read from the frame memory 18 is subjected to local format conversion by the third local format conversion unit 22 and the selector 23 as necessary, to align the formats. Whether or not this local format conversion is necessary (which input the selector chooses) is known from the format selection information 127 separated earlier.
[0042]
Before output as a decoded image, the output of the local format conversion unit 13, which unifies the screen format, is dynamically switched by the determination unit 3a according to the information 127 indicating the selected format, and the decoded image 114 is obtained.
When the decoded image 114 is stored in the frame memory 18, local format conversion is performed by the second local format conversion unit 20 and the selector 21 as necessary in order to unify the format.
[0043]
Embodiment 5. FIG.
FIG. 11 is a configuration diagram of a basic encoding device showing an example of a specific criterion for deciding which format conversion is selected in the local format (multiple format) conversion. In the figure, the new element is the format determination unit 25 based on color difference components, which specifies the input signal of the determination unit 3. The other numbers are the same as those already described.
[0044]
Next, the operation will be described.
In the present embodiment, a mechanism for providing a criterion for determining whether or not to perform local format conversion and what format to select is described. Now, the range for selecting the format is a block or a unit of a plurality of blocks. In the present embodiment, the format determination unit 25 based on color difference components selects a format based on the color difference components of image data included in the same unit. For example, the color noise can be significantly detected in a portion with high color activity, such as a dark color portion or a portion where the color value changes drastically. Further, the periphery of the human skin color including the face and lips is a portion where the color noise can be remarkably detected. By utilizing this, it is possible to select a format with a high sample density of the color difference component at a location where the color noise becomes conspicuous.
[0045]
A specific configuration example of the format determination unit 25 is shown in the drawing. When the luminance and color difference signals are multiplexed in the image input signal 101, the luminance / color difference signal separator 31 separates them and outputs the color difference signal 136; when the input signal 101 is already separated, the color difference signal 136 is used directly. It is input to the color difference average value detector 35 and the color difference variance value calculator 36. The color difference average value detector 35 calculates the average value 137 of the color differences over the image area of each unit consisting of one block or a plurality of blocks. Using the color difference average value 137 and the color difference signal 136, the color difference variance value calculator 36 calculates the color difference variance value 138. The format determination unit 37 compares the color difference variance value 138 with predetermined thresholds to obtain the format switching information 106 indicating whether or not to perform local format conversion and which format to convert to. When local format conversion is performed, the signal 102 is selected as the signal 103, as shown in the figure.
[0046]
Here, consider for example the case where two threshold values (Th1, Th2) are prepared and the variance value Dev of the color difference component is compared with them; depending on whether the original signal 101 is 4:4:4 or, conversely, 4:2:0, the result is as follows (a code sketch of case 1 is given after the next paragraph).
1) When the original signal 101 is 4:4:4 (where Th1 < Th2):
1-1) if (Dev < Th1): the color change is regarded as flat, so {4:4:4 ⇒ 4:2:0 down-sampling}
1-2) if (Th1 ≤ Dev < Th2): the color changes, but the rate of change is not severe, so {4:4:4 ⇒ 4:2:2 down-sampling}
1-3) else: {no change}
2) When the original signal 101 is 4:2:0 (where Th1 > Th2):
2-1) if (Dev > Th1): {4:2:0 ⇒ 4:4:4 up-sampling}
2-2) if (Th2 < Dev ≤ Th1): {4:2:0 ⇒ 4:2:2 up-sampling}
2-3) else: {no change}
As described above, the variance of the pixel values (the color difference values in this example) is used as the determination criterion: a large variance means that the amplitude of the pixel values is large, whereas a small variance means that the pixel values are flat, that is, close to the average value overall.
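The two-threshold decision of case 1 above can be transcribed compactly as follows; the concrete threshold values are placeholders, and computing the variance with numpy over one block is an assumption made only for illustration.

```python
import numpy as np

def decide_format_from_444(chroma_block, th1=50.0, th2=400.0):
    """Sketch of format determination unit 25: Th1 < Th2, original signal in 4:4:4."""
    dev = np.var(chroma_block)                 # color-difference variance (138)
    if dev < th1:                              # flat color change
        return "4:2:0"                         # down-sample 4:4:4 -> 4:2:0
    elif dev < th2:                            # moderate color change
        return "4:2:2"                         # down-sample 4:4:4 -> 4:2:2
    return "4:4:4"                             # strong color activity: no change

flat = np.full((16, 16), 128.0)
busy = np.arange(256.0).reshape(16, 16)
print(decide_format_from_444(flat), decide_format_from_444(busy))   # 4:2:0 4:4:4
```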
[0047]
In this embodiment, the format switching criterion was described for the image coding apparatus of the first embodiment, but since this is a specific example of a mechanism that provides a criterion for switching the sample density ratio of the luminance and color difference components, it can of course also be applied to the image coding apparatus of the second embodiment, to which predictive coding means is added. A configuration example for this case is shown in FIG. 13. In FIG. 13, the format determination unit 25 and its selector part 3b are drawn as separate elements, but as in FIG. 11, 3b represents the selector part belonging to the format determination unit 25.
When the present invention is applied to the image coding apparatus according to the second embodiment, it is also possible to use the color difference component activity between frames in a unit for selecting a format as a reference.
In the present embodiment, an example was described in which the color difference signal portion of the input image signal 101 is used as the input to the format determination unit 25 based on color difference components; however, as with the other criterion inputs in the following embodiments, the quantization index 105 that is the output of the quantization unit 5 may be used instead.
[0048]
Embodiment 6 FIG.
FIG. 14 is a configuration diagram of an encoding apparatus with predictive encoding means that uses another specific criterion for deciding which format conversion is selected in the local format conversion. In the figure, the new element is the format determination unit 26 based on motion, which specifies the input signal of the determination unit 3. As before, the determination part and the selector part 3b are drawn in separate frames. The other numbers are the same as those already described.
[0049]
Next, the operation will be described.
In the present embodiment, another mechanism for obtaining the selection criterion for local format conversion is described. The range over which the format is selected is again a block or a unit of a plurality of blocks. In this embodiment, the format is selected based on the motion vector 126 of the motion compensation prediction in the same unit. For example, color noise can be detected conspicuously in portions where there is motion between frames and compression must be performed even though a large amount of information is generated. By exploiting this, locations where color noise would be conspicuous can be identified from the value of the motion vector, and a format with a high sample density of the color difference components can be selected there. Moreover, since the motion vector is information that is in any case given to the decoding side as part of the encoded information, there is the advantage that format selection information need not be sent to the decoding side separately. As in the fifth embodiment, whether or not format conversion is performed, and which format is converted to, are decided by magnitude comparison with threshold values.
[0050]
FIG. 15 specifically shows a configuration example of the format determination unit 26 based on the above movement. In the figure, the motion vector absolute value calculator 38 to which the motion vector 126 is input calculates the absolute value of the motion vector, that is, the sum of the absolute values of the horizontal and vertical components. The format determination unit 39 determines the intensity of the motion by comparing the value of the absolute value 139 of the obtained motion vector with, for example, a predetermined threshold, and determines whether to use local format conversion. The format switching information 106 is output.
In the above, the absolute value of the motion vector is used, but the sum of the squares of the horizontal and vertical components of the motion vector is equally effective.
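A sketch of the motion-based criterion: the absolute value of the motion vector (or, as noted above, the sum of squares of its components) is compared with a threshold to decide whether the higher chroma sample density is used in that unit. The threshold value is a placeholder.

```python
def motion_activity(mv, squared=False):
    """Sum of absolute values (or of squares) of the horizontal/vertical components."""
    mvx, mvy = mv
    return mvx * mvx + mvy * mvy if squared else abs(mvx) + abs(mvy)

def decide_format_from_motion(mv, threshold=8):
    # Strong motion tends to make color noise visible: keep full chroma resolution there.
    return "4:4:4" if motion_activity(mv) > threshold else "4:2:0"

print(decide_format_from_motion((1, 2)))    # 4:2:0
print(decide_format_from_motion((10, 6)))   # 4:4:4
```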
[0051]
Embodiment 7 FIG.
FIG. 16 is a configuration diagram of an image decoding apparatus with predictive decoding means on the receiving side, corresponding to the encoding apparatus of the sixth embodiment. The numbered elements in the figure are equivalent to those already described, except that 3b is the selector part of the format determination unit 26 based on motion.
[0052]
Next, the operation will be described.
First, the input encoded bit stream 107 is stored in the buffer 9. Data in the buffer 9 is read and variable-length decoded by the variable length decoding unit 10. In this process, the DCT coefficient information 109, the motion vector information 126, and the like are decoded and separated. The decoded 8 × 8 quantized DCT coefficients 109 are restored to the DCT coefficients 110 by the inverse quantization unit 11 and converted into the pixel space data 118 by the inverse DCT unit 12. When motion compensated prediction is performed, the image is decoded by addition with the motion compensated prediction image data 117 generated by motion compensated prediction from the reference screen, and the decoded image 120 is basically stored in the frame memory for use as a reference screen in subsequent decoding, as necessary.
[0053]
To match the formats required when the decoded difference pixel space data 118 and the motion compensated prediction image data 117 generated by motion prediction from the reference screen are added, the third local format conversion unit 22 and the selector 23 are applied to the motion compensated prediction image data read out from the frame memory 18. For the selection by the selector of this local format conversion, the format information 106 obtained by the motion-based format determination unit 26, with the previously separated motion vector information 126 as input, is used.
Actually, the motion vector information is always transmitted when motion compensation is performed. Therefore, the corresponding selection information 127 from the encoding device side is not required to be transmitted, and the transmission bits can be reduced.
Before output as a decoded image, the output of the local format conversion unit 13, which unifies the screen format, is dynamically switched by the selector part 3b of the motion-based format determination unit 26 according to the information 106 indicating the selected format, and the decoded image 114 is obtained. When the decoded image is stored in the frame memory 18, local format conversion is performed by the second local format conversion unit 20 and the selector 21 in order to unify the format.
[0054]
Embodiment 8 FIG.
FIG. 17 is a configuration diagram of a basic image encoding apparatus illustrating another specific criterion for deciding which format conversion is selected in the local format conversion. In the figure, the new element is the format determination unit 27 based on luminance components, which specifies the input signal of the determination unit 3. As before, the determination part and the selector part are drawn in separate frames. The specific configuration example is the same as that for the color difference signal. The other elements are the same as those already described.
[0055]
Next, the operation will be described.
The range over which the format is selected is, as in the other embodiments, a block or a unit of a plurality of blocks. In the present embodiment, based on the luminance component of the image data contained in the same unit, the format determination unit 27 based on luminance components selects whether to use the local format conversion output. For example, color noise can be detected conspicuously in portions of high luminance, that is, bright portions; in dark portions, the sensitivity to color falls and color noise is less noticeable. By exploiting this, a format with a high sample density of the color difference components can be selected, using a circuit equivalent to that described above, at locations where color noise would be conspicuous. If the sample density of the luminance component is kept constant and only the sample density of the color difference components is variable, the luminance component can be decoded on the decoding side regardless of the format selection information, and the format selection can then be performed there with the same algorithm; therefore there is no need to send format selection information to the decoding side. Further, as described in the fifth embodiment, whether or not format conversion is performed, and which format is converted to, are decided by magnitude comparison of the luminance component value with threshold values.
[0056]
In the present embodiment, another way of giving the format switching criterion in an encoding apparatus having the basic configuration of the first embodiment has been described. Since this is again a specific example of a mechanism that provides a criterion for switching the sample density ratio of the luminance and color difference components, it can of course also be applied to the image coding apparatus of the second embodiment, to which predictive coding means is added. A configuration example for this case is shown in FIG. 18; the separate drawing of the format determination part 27 and its selector part 3b is the same as before. In the case of FIG. 18, which has a predictive feedback loop, the inter-frame activity of the luminance component in the unit for selecting the format can also be used as the criterion for selecting whether or not to use the local format conversion output.
As the configuration of the format determination unit 27 based on luminance components, the format determination unit 27 may have a portion that judges the magnitude of the quantized luminance value 105, select the output with the selector part 3b based on the result, and output the format switching information (selection information) 106 resulting from the judgment.
In the present embodiment, the luminance value 105, which is the quantization index output from the quantization unit 5, is used as the input to the format determination unit 27 based on luminance components; however, as with the color difference components in the previous embodiment, the luminance signal portion of the input image signal 101 may be used instead.
[0057]
Embodiment 9 FIG.
FIG. 19 is a block diagram of an image decoding apparatus with predictive encoding means on the receiving side corresponding to the image encoding apparatus with predictive encoding means of the eighth embodiment. The numbers in the figure are equivalent to those already described.
Next, the operation will be described.
First, the encoded bit stream 107 stored in the buffer 9 is read and subjected to variable length decoding. In this process, DCT coefficient information 109, motion vector information 126, and the like are decoded and separated. The decoded 8 × 8 quantized DCT coefficient 109 is inversely quantized and restored to the DCT coefficient 110, and converted to pixel space data 118 by inverse DCT. In addition, when motion compensated prediction is being performed, the image is decoded by being added to the motion compensated predicted image data 117, and the decoded image 120 is basically stored in the frame memory 18 as necessary.
[0058]
As described in the previous embodiment, the third local format converter 22 and the selector 23 are applied to the motion compensated prediction image data read from the frame memory 18 in order to match the format required when the decoded difference pixel-space data 118 and the motion compensated prediction image data 117 generated by motion prediction from the reference screen are added. For the selection made by the selector of the local format conversion, the quantized value 109 of the luminance component is used. Color noise is readily detected in portions of high luminance, that is, bright areas; in dark areas it is not noticeable because sensitivity to color drops. If the format is selected by the same algorithm as on the encoding side, no format selection information is needed. Before output as a decoded image, the output of the local format conversion unit 13, which unifies the screen format, is dynamically switched by the selector portion 3b of the format determination unit 27 according to the information 106 indicating the selected format, and the decoded image 114 is obtained. When the decoded image is stored in the frame memory 18, local format conversion is performed by the second local format conversion unit 20 and the selector 21 in order to unify the format.
[0059]
Embodiment 10.
FIG. 20 is a configuration diagram of an image encoding apparatus using another specific criterion for switching which format conversion is selected in local format conversion.
In the figure, the new element is the format determination unit 40 based on the prediction error, which specifies the input signal given to the determination unit 3 in FIG. The determination part 40 and the selector part 3b are shown in separate frames, as in FIG. The other elements are the same as those already described.
[0060]
Next, the operation will be described. Operations other than the part that gives the criterion for switching whether to select the output of the local format conversion are omitted to avoid duplication. With the configuration of the present embodiment, it is not necessary to send the format selection information 106 to the decoding side.
In FIG. 20, the format conversion selection unit selects the format based on the prediction error data 117 obtained after motion compensation prediction. Color noise is readily detected where the energy of the prediction error is large. In the configuration of FIG. 20, the format determination based on the prediction error value compares it with a set threshold; when the threshold is exceeded, the energy of the prediction error is known to be large. It is therefore efficient to increase the number of color difference component samples only in that case and to reduce it otherwise. In the above description the format determination unit 40 takes the prediction error data 117 as its input and determines the format from it, but it may instead be configured to receive the quantization index 105 output by the quantization unit 5 and determine the format from that. Further, as described in the fifth embodiment, both whether format conversion is performed and which format is converted to are decided by comparing the prediction error value with the threshold.
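A short Python sketch of this criterion follows (the threshold value and names are illustrative assumptions): the energy of the motion compensated prediction error for a block is compared with a set threshold, and the chrominance sample density is raised only when the threshold is exceeded.

```python
import numpy as np

ERROR_ENERGY_THRESHOLD = 5000.0   # assumed threshold on prediction-error energy

def select_format_from_prediction_error(pred_error: np.ndarray) -> str:
    """Use dense chroma sampling only where the prediction error carries
    enough energy for color noise to become visible."""
    energy = float(np.sum(pred_error.astype(np.float64) ** 2))
    return "4:4:4" if energy > ERROR_ENERGY_THRESHOLD else "4:2:0"
```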
[0061]
Embodiment 11.
FIG. 21 is a configuration diagram of a receiving side image decoding apparatus corresponding to the encoding apparatus of the tenth embodiment. The numbers in the figure are equivalent to those already described.
It should be noted that the operations other than the portion related to the switching criterion are omitted to avoid duplication. The encoded bit stream 107 in the buffer 9 is read and variable length decoding is performed. In this process, DCT coefficient information 109, motion vector information 126, and the like are decoded and separated.
As described above, the third local format converter 22 and the selector 23 are applied to the motion compensated prediction image data read from the frame memory 18 in order to match the format required for the addition of the decoded difference pixel-space data 118 and the motion compensated prediction image data 117. For the selection made by the selector of the local format conversion, the output of the format determination unit 40 based on the prediction error value, which takes the previously separated motion vector information 126 as its input, is used.
[0062]
Embodiment 12.
FIG. 22 is a configuration diagram of an image encoding apparatus using another specific criterion for switching which format conversion is selected in local format conversion.
In the figure, the new element is the format determination unit 41 based on the quantization step size, which specifies the input signal given to the determination unit 3 in FIG. The determination part 41 and the selector part 3b are shown in separate frames, as in FIG. The other reference numbers are the same as those already described.
[0063]
Next, the operation will be described.
Even with the configuration of the present embodiment there is no need to provide format selection information to the decoding side. Description of operations other than the part related to the switching criterion is omitted. In the present embodiment, the format is selected based on the quantization step size 140 used to quantize the coding coefficients or the coded image. This exploits the fact that image degradation becomes significant when the quantization step size 140, which is the output of the rate control unit 8, is large. When the quantization step size 140 is large, a format with a high sample density of the color difference components is selected; conversely, when the quantization step size 140 is small, it is effective to keep the sample density of the color difference components low. That is, in the configuration of FIG. 22, the format determination unit 41 based on the quantization step size compares it with a set threshold and drives the selector portion 3b accordingly. Here, as in the fifth embodiment, both whether format conversion is performed and which format is converted to are decided by comparing the quantization step size with the threshold.
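The rule can be sketched as follows in Python (the threshold value and names are illustrative assumptions). Because the step size is already carried in the bit stream for rate control, the decoder can evaluate the identical rule, which is why no extra selection information is needed.

```python
STEP_SIZE_THRESHOLD = 16   # assumed threshold on the quantization step size

def select_format_from_step_size(q_step: int) -> str:
    """A large step size implies visible degradation, so choose dense chroma
    sampling; a small step size keeps the low-density base format."""
    return "4:4:4" if q_step > STEP_SIZE_THRESHOLD else "4:2:0"
```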
[0064]
Since the quantization step size 140 can be recovered on the decoding side regardless of the format selection information, the decoding side can select the format with the same algorithm. For this reason there is no need to send format selection information to the decoding side.
In the present embodiment, the basic image encoding apparatus without predictive encoding means has been described. However, since the format determination based on the quantization step size provides a criterion for switching the sample density ratio between the luminance component and the color difference component, it goes without saying that it can also be applied to an image encoding apparatus provided with predictive encoding means. A configuration example for this case is shown in FIG. 23.
[0065]
Embodiment 13.
FIG. 24 is a block diagram of a receiving-side image decoding apparatus provided with predictive decoding means, corresponding to the image encoding apparatus with predictive encoding means of the twelfth embodiment. The reference numbers in the figure are the same as those already described.
Next, the operation will be described.
In the same figure, as described above, the third local format conversion unit 22 and the selector 23 are applied to the motion compensated prediction image data read from the frame memory 18 in order to match the format required when the decoded difference pixel-space data 118 and the motion compensated prediction image data 117 generated by motion prediction are added. For the selection made by the selector of the local format conversion, the quantization step size 140 obtained from the signal separated in the course of variable-length decoding is used.
Before output as a decoded image, the output of the local format conversion unit 13, which unifies the screen format, is dynamically switched by the selector portion 3b of the format determination unit 41 according to the information 106 indicating the selected format, and the decoded image 114 is obtained. When the decoded image is stored in the frame memory 18, local format conversion is performed by the second local format conversion unit 20 and the selector 21.
[0066]
In each of the embodiments above, the case was described in which a single component, for example the color difference component, is used as the signal from which the determination unit decides whether to select the output of the multiple format conversion in the local format conversion. As the decision input, however, a plurality of components may of course be used instead of a single one, either by weighting and combining the signals or by giving the final selection output information 106 as the result of a logical operation on the selections made for each component, as sketched below.
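A possible form of such a combined decision is sketched below in Python (the weights, threshold, and feature names are assumptions for illustration): several block measurements are merged into one weighted score, or the individual decisions are combined by a logical OR, before comparison with a reference.

```python
def select_format_combined(chroma_activity: float,
                           mean_luma: float,
                           pred_error_energy: float,
                           weights=(0.5, 0.3, 0.2),
                           threshold: float = 100.0) -> str:
    """Weighted combination of several per-block measurements."""
    score = (weights[0] * chroma_activity
             + weights[1] * mean_luma
             + weights[2] * pred_error_energy)
    return "4:4:4" if score > threshold else "4:2:0"

def select_format_logical(decisions: list[str]) -> str:
    """Logical combination: dense chroma if any individual criterion asks for it."""
    return "4:4:4" if "4:4:4" in decisions else "4:2:0"
```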
The image encoding device and image decoding device described above, connected through a transmission path or a recording medium, together constitute an image encoding/decoding system.
[0067]
[Effects of the Invention]
As described above, according to the present invention, the encoding device includes a plurality of format conversion units and various image state determination units, detects a change in the state of predetermined image data, and selects a format accordingly; this has the effect of reducing color noise without greatly increasing the number of samples of the color difference components.
[0068]
Also in the decoding apparatus, since a plurality of format conversion units are provided and a format is selected based on a value obtained in the decoding process, a decoded image with reduced color noise can be obtained.
[0069]
In addition, since the encoding device switches the format conversion by comparing the color difference signal, the luminance signal, the motion vector value, the prediction error value between the image data and the motion compensated prediction signal, or the quantization step size with a set value, the sample density of the color difference components is increased where color noise is conspicuous, and in some cases no selection information needs to be sent to the decoding side.
[0070]
Also, since the decoding apparatus has a configuration corresponding to the encoding side, color noise is reduced, and in some cases a further gain from higher encoding efficiency is obtained.
[Brief description of the drawings]
FIG. 1 is a configuration block diagram of a basic image encoding device in Embodiment 1 of the present invention.
FIG. 2 is a diagram illustrating a sample format of luminance components and color difference components of an 8 × 8 block.
FIG. 3 is a diagram illustrating a configuration example of the local format conversion unit in FIG. 1.
FIG. 4 is a diagram illustrating a configuration example of the local format conversion unit in FIG. 1.
FIG. 5 is a diagram illustrating a configuration example of the local format conversion unit in FIG. 1.
FIG. 6 is a diagram illustrating a configuration example of the local format conversion unit in FIG. 1.
FIG. 7 is a block diagram showing another configuration of the basic image coding apparatus according to Embodiment 1 of the present invention.
FIG. 8 is a block diagram showing the configuration of an image coding apparatus including predictive coding means according to Embodiment 2 of the present invention.
FIG. 9 is a configuration block diagram of a basic image decoding apparatus according to Embodiment 3 of the present invention.
FIG. 10 is a configuration block diagram of an image decoding apparatus provided with predictive decoding means in Embodiment 4 of the present invention.
FIG. 11 is a configuration block diagram of a basic image encoding device according to a fifth embodiment of the present invention.
FIG. 12 is a diagram illustrating a configuration example of the format determination unit using color difference components in FIG. 11.
FIG. 13 is a block diagram showing the configuration of an image coding apparatus including predictive coding means according to Embodiment 5 of the present invention.
FIG. 14 is a block diagram showing the configuration of an image coding apparatus including predictive coding means according to Embodiment 6 of the present invention.
FIG. 15 is a diagram illustrating a configuration example of the format determination unit based on movement in FIG. 14.
FIG. 16 is a configuration block diagram of an image decoding apparatus provided with predictive decoding means in Embodiment 7 of the present invention.
FIG. 17 is a configuration block diagram of a basic image encoding device according to an eighth embodiment of the present invention.
FIG. 18 is a block diagram showing the configuration of an image coding apparatus including predictive coding means according to Embodiment 8 of the present invention.
FIG. 19 is a configuration block diagram of an image decoding apparatus provided with predictive decoding means in Embodiment 9 of the present invention.
FIG. 20 is a block diagram showing the configuration of an image coding apparatus including predictive coding means according to Embodiment 10 of the present invention.
FIG. 21 is a configuration block diagram of an image decoding apparatus provided with predictive decoding means in Embodiment 11 of the present invention.
FIG. 22 is a configuration block diagram of a basic image coding apparatus according to a twelfth embodiment of the present invention.
FIG. 23 is a block diagram showing the configuration of an image coding apparatus including predictive coding means in Embodiment 12 of the present invention.
FIG. 24 is a configuration block diagram of an image decoding apparatus provided with predictive decoding means in Embodiment 13 of the present invention.
FIG. 25 is an explanatory diagram of an image format in a video compression image coding method;
FIG. 26 is a configuration block diagram of a conventional image encoder.
FIG. 27 is a configuration block diagram of a conventional image decoder.
[Explanation of symbols]
1 A/D conversion unit, 2 local format conversion unit, 3, 3a (image state) determination unit, 4 DCT unit, 5 quantization unit, 6 variable length coding unit, 7 transmission buffer, 8 rate control unit, 9 receiving buffer, 10 variable length decoding unit, 11 inverse quantization unit, 12 inverse DCT unit, 13 local format conversion unit, 14 D/A conversion unit, 15 subtractor, 16 selector, 17 adder, 18 frame memory, 19 motion vector estimation and motion compensation prediction unit, 20 second local format conversion unit, 21 selector, 22 third local format conversion unit, 23 selector, 24 motion compensation prediction unit, 25 format determination unit based on color difference component, 26 format determination unit based on motion, 27 format determination unit based on luminance component, 30 format determination unit, 31 luminance/color difference signal separator, 32 color difference signal downsampler, 33 color difference signal upsampler, 34 luminance/color difference signal multiplexer, 35 color difference average value detector, 36 color difference variance value calculator, 37 format determination unit, 38 motion vector absolute value calculator, 39 format determiner, 40 format determination unit based on prediction error, 41 format determination unit based on quantization step size, 101 digitized image data, 102 locally format-converted image data, 103 image data whose format has been dynamically switched, 104 DCT transform coefficient, 105 DCT transform coefficient quantization index, 106 format switching information, 107 encoded bit stream, 108 information generation amount signal, 109 variable-length-decoded transform coefficient quantization index, 110 inversely quantized transform coefficient, 111 pixel space region data obtained by inverse DCT, 112 locally format-converted pixel space region data, 113 format switching information, 114 decoded image data, 115 reproduced image signal, 116 dynamically switched image data, 117 prediction error data, 118 differential image data of the pixel space region obtained by inverse DCT, 119 motion compensation prediction data, 120 decoded image data, 121 locally decoded image data, 122 decoded image data with a unified format, 123 image data read from the frame memory for motion compensation prediction, 124 locally format-converted image data, 125 prediction signal, 126 motion vector information, 127 format switching information, 128 locally format-converted decoded image data, 130 local format switching signal, 131 multiplexed luminance/color difference signal input, 132 separated luminance signal, 133 separated color difference signal, 134 format-converted color difference signal, 135 multiplexed luminance/color difference signal output, 136 separated color difference signal, 137 color difference average value, 138 color difference variance value, 139 motion vector absolute value, 140 quantization step size.

Claims (11)

  1. In an image encoding device that format-converts a digitized input image, comprises a quantization unit for quantizing the format-converted image and an encoding unit for encoding the quantized image data, and outputs an encoded bit stream,
    A multiple format conversion unit that, upon format conversion, converts the image into image data of a plurality of predetermined spatial resolutions of the luminance signal and the color difference signal;
    An image determination unit that outputs selection information indicating whether the selection result is the image data converted by the multiple format conversion unit or the original image data before conversion, and transmits the image data of the selection result to a subsequent stage;
    A feedback encoding unit comprising: a second multiple format converter that inversely quantizes the quantized image data, adds the motion prediction difference, and converts the result into image data of a plurality of predetermined spatial resolutions; and a third multiple format converter that, in order to select and store as reference image data either the inversely quantized image signal or the image signal output from the second multiple format converter and to obtain the motion prediction difference using the stored reference image data, converts the data into image data of a plurality of predetermined spatial resolutions; the feedback encoding unit selecting the reference image data or the image signal output from the third multiple format converter and performing feedback subtraction on the original image data or on the format-converted output,
    The encoding device selects either the image data converted by the multiple format conversion unit or the original image data according to the selection information output from the image determination unit, transmits the selected image data, and also transmits the selection information. An image encoding device characterized by the above.
  2.   The image determination unit compares the state of the color difference signal in the input image data or the quantized image data with a setting reference, and selects the output of the corresponding spatial resolution from the output of the multiple format conversion unit. The image encoding device according to claim 1.
  3.   The image determination unit is characterized in that the state of the luminance signal in the input image data or the quantized image data is compared with a setting reference, and the corresponding spatial resolution output is selected from the output of the multiple format conversion unit. The image encoding device according to claim 1.
  4.   The image determination unit compares the value of the motion vector from the motion compensation prediction unit with a setting criterion, and selects an output with a corresponding spatial resolution from the output of the multiple format conversion unit. 1. The image encoding device according to 1.
  5.   The image determination unit compares the prediction error value, which is the difference between the image data with a predetermined spatial resolution and the prediction signal after motion compensation, with the setting reference, and outputs the corresponding spatial resolution from the output of the multiple format conversion unit. 2. The image encoding apparatus according to claim 1, wherein the image encoding apparatus is selected.
  6.   The image determination unit compares the quantization step size, which is generated from the amount of coding based on the encoded bit stream, with a setting reference, and selects the output of the corresponding spatial resolution from the output of the multiple format conversion unit. The image encoding device according to claim 1.
  7.   The image determination unit adds any plurality of the following values: the state of the color difference signal or the luminance signal in the input image data or the quantized image data, the value of the motion vector from the motion compensation prediction unit, the prediction error value, and the quantization step size; compares the result with a setting criterion; and selects an output having the corresponding spatial resolution from the outputs of the multiple format conversion unit. The image encoding apparatus according to claim 1.
  8. In an image decoding apparatus comprising a decoding unit that decodes an input encoded bit stream, and an inverse quantization unit and an inverse transform unit that inversely quantize and inversely transform the data corresponding to transform coefficients among the decoded data, the apparatus restoring digital image data from the inversely quantized and inversely transformed image data,
    a multiple format conversion unit is provided that converts the inversely transformed image data into, and outputs it at, any one of a plurality of predetermined spatial resolutions of the luminance signal and the color difference signal, and
    the image decoding apparatus extracts selection information included in the encoded bit stream and, based on the selection information, outputs either the output of the multiple format conversion unit or the image data before conversion by the multiple format conversion unit. An image decoding apparatus characterized by restoring the digital image data in this way.
  9. The inversely transformed image data is stored as reference image data after a predetermined format conversion, and feedback prediction means is provided for adding a motion prediction error to the original image data or to the output after format conversion,
    the feedback prediction means including a second multiple format converter that converts the reproduced image data at any one of a plurality of predetermined spatial resolutions of the luminance signal and the color difference signal to obtain the reference image data, and a third multiple format converter that converts the prediction error to be added to the original image data or to the output after format conversion into any one of a plurality of predetermined spatial resolutions. The image decoding apparatus according to claim 8.
  10. An image state determination unit corresponding to the detection of a color difference signal, a luminance signal, or a motion change in the image encoding device on the transmission side is provided, the state of the input encoded bit stream is detected based on the same setting standard as on the transmission side, and a decoded image is obtained by selecting one of a plurality of spatial resolutions. The image decoding apparatus according to claim 8.
  11. An image encoding device that format-converts, quantizes, and encodes a digitized input image and outputs an encoded bit stream, comprising:
    (1) a multiple format conversion unit that, upon format conversion, converts the image into image data of a plurality of predetermined spatial resolutions of the luminance signal and the color difference signal;
    (2) an image determination unit that outputs selection information indicating whether the selection result is the image data converted by the multiple format conversion unit or the original image data before conversion, and transmits the image data of the selection result to a subsequent stage; and
    (3) a feedback encoding unit comprising a second multiple format converter that inversely quantizes the quantized image data, adds the motion prediction difference, and converts the result into image data of a plurality of predetermined spatial resolutions, and a third multiple format converter that, in order to select and store as reference image data either the inversely quantized image signal or the image signal output from the second multiple format converter and to obtain the motion prediction difference using the stored reference image data, converts the data into image data of a plurality of predetermined spatial resolutions, the feedback encoding unit selecting the reference image data or the image signal output from the third multiple format converter and performing feedback subtraction on the original image data or on the format-converted output,
    the image encoding device selecting either the image data converted by the multiple format conversion unit or the original image data in accordance with the selection information output from the image determination unit, transmitting the selected image, and transmitting the selection information; and
    an image decoding device that decodes the input encoded bit stream sent from the image encoding device, inversely quantizes and inversely transforms it, and reproduces digital image data from the inversely quantized and inversely transformed image data, comprising:
    (4) a multiple format conversion unit that, in order to determine a change of state of the input encoded bit stream from the inversely transformed image data and to obtain the digital image data, reproduces the digital image data at any one of a plurality of spatial resolutions based on a predetermined luminance signal and color difference signal; and
    (5) a determination unit that extracts the selection information included in the encoded bit stream and, based on the extracted selection information, selects and outputs either the output of the multiple format conversion unit or the image data before conversion by the multiple format conversion unit;
    an image encoding/decoding system characterized by comprising the image encoding device and the image decoding device described above.
JP26966996A 1995-12-27 1996-10-11 Image encoding apparatus, image decoding apparatus, and encoding / decoding system Expired - Fee Related JP3681835B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP34087195 1995-12-27
JP7-340871 1995-12-27
JP26966996A JP3681835B2 (en) 1995-12-27 1996-10-11 Image encoding apparatus, image decoding apparatus, and encoding / decoding system

Applications Claiming Priority (7)

Application Number Priority Date Filing Date Title
JP26966996A JP3681835B2 (en) 1995-12-27 1996-10-11 Image encoding apparatus, image decoding apparatus, and encoding / decoding system
CA 2191271 CA2191271C (en) 1995-12-27 1996-11-26 Video coding and decoding system and method
AU74034/96A AU693726B2 (en) 1995-12-27 1996-11-27 Video coding and decoding system and method
DE1996624669 DE69624669T2 (en) 1995-12-27 1996-11-29 Video encoder and decoder system and methods
EP19960119162 EP0782342B1 (en) 1995-12-27 1996-11-29 Video coding and decoding system and methods
US08/766,179 US6018366A (en) 1995-12-27 1996-12-12 Video coding and decoding system and method
KR1019960072357A KR100263627B1 (en) 1995-12-27 1996-12-26 Video coding and decoding system and method

Publications (2)

Publication Number Publication Date
JPH09238366A JPH09238366A (en) 1997-09-09
JP3681835B2 true JP3681835B2 (en) 2005-08-10

Family

ID=26548871

Family Applications (1)

Application Number Title Priority Date Filing Date
JP26966996A Expired - Fee Related JP3681835B2 (en) 1995-12-27 1996-10-11 Image encoding apparatus, image decoding apparatus, and encoding / decoding system

Country Status (7)

Country Link
US (1) US6018366A (en)
EP (1) EP0782342B1 (en)
JP (1) JP3681835B2 (en)
KR (1) KR100263627B1 (en)
AU (1) AU693726B2 (en)
CA (1) CA2191271C (en)
DE (1) DE69624669T2 (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2768006B1 (en) * 1997-09-04 1999-10-15 Alsthom Cge Alcatel Method for compressing a signal with multidimensional samples
JPH11122495A (en) * 1997-10-09 1999-04-30 Olympus Optical Co Ltd Picture encoding device
US6519288B1 (en) * 1998-03-06 2003-02-11 Mitsubishi Electric Research Laboratories, Inc. Three-layer scaleable decoder and method of decoding
US6188730B1 (en) * 1998-03-23 2001-02-13 Internatonal Business Machines Corporation Highly programmable chrominance filter for 4:2:2 to 4:2:0 conversion during MPEG2 video encoding
KR100418874B1 (en) * 1998-10-10 2004-04-17 엘지전자 주식회사 Method of Buffering Video in the Video Compression System
US6229852B1 (en) * 1998-10-26 2001-05-08 Sony Corporation Reduced-memory video decoder for compressed high-definition video data
JP3623679B2 (en) * 1999-01-06 2005-02-23 日本電気株式会社 Video encoding device
TW515192B (en) * 2000-06-06 2002-12-21 Noa Kk Off Compression method of motion picture image data and system there for
US7085424B2 (en) * 2000-06-06 2006-08-01 Kobushiki Kaisha Office Noa Method and system for compressing motion image information
US7039241B1 (en) * 2000-08-11 2006-05-02 Ati Technologies, Inc. Method and apparatus for compression and decompression of color data
JP4834917B2 (en) * 2001-05-14 2011-12-14 株式会社ニコン Image encoding apparatus and image server apparatus
US7649947B2 (en) * 2001-06-05 2010-01-19 Qualcomm Incorporated Selective chrominance decimation for digital images
US6803969B2 (en) * 2001-06-07 2004-10-12 Oki Electric Industry Co., Ltd. Video signal processing apparatus for digital video decoder
US20030220280A1 (en) * 2002-02-07 2003-11-27 Bunge Mary Bartlett Schwann cell bridge implants and phosphodiesterase inhibitors to stimulate CNS nerve regeneration
JP4015934B2 (en) * 2002-04-18 2007-11-28 株式会社東芝 Video coding method and apparatus
US6930776B2 (en) * 2002-07-25 2005-08-16 Exfo Electro-Optical Engineering Inc. High optical rejection optical spectrum analyzer/monochromator
BRPI0600823B1 (en) * 2006-03-14 2018-02-14 Whirlpool S.A. Programming electric household programming system and assembly programmable household programming method
JP2008193627A (en) * 2007-01-12 2008-08-21 Mitsubishi Electric Corp Image encoding device, image decoding device, image encoding method, and image decoding method
US8139632B2 (en) * 2007-03-23 2012-03-20 Advanced Micro Devices, Inc. Video decoder with adaptive outputs
WO2009034486A2 (en) * 2007-09-10 2009-03-19 Nxp B.V. Method and apparatus for line-based motion estimation in video image data
US20090202165A1 (en) * 2008-02-13 2009-08-13 Kabushiki Kaisha Toshiba Image decoding method and image decoding apparatus
EP2177299B1 (en) * 2008-10-17 2013-07-31 PRIMA INDUSTRIE S.p.A. laser machine
CA2650102C (en) * 2009-01-09 2013-01-22 Michael D. Zulak Earth drilling reamer with replaceable blades
JP5624576B2 (en) * 2012-03-14 2014-11-12 株式会社東芝 Image compression controller and image compression apparatus
JP6074181B2 (en) * 2012-07-09 2017-02-01 キヤノン株式会社 Image processing apparatus and method
US9258517B2 (en) * 2012-12-31 2016-02-09 Magnum Semiconductor, Inc. Methods and apparatuses for adaptively filtering video signals

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4903317A (en) * 1986-06-24 1990-02-20 Kabushiki Kaisha Toshiba Image processing apparatus
DE3735349C2 (en) * 1986-10-18 1992-04-09 Kabushiki Kaisha Toshiba, Kawasaki, Kanagawa, Jp
JP2826321B2 (en) * 1988-07-23 1998-11-18 日本電気株式会社 Orthogonal transform coding device
US5253078A (en) * 1990-03-14 1993-10-12 C-Cube Microsystems, Inc. System for compression and decompression of video data using discrete cosine transform and coding techniques
US5428693A (en) * 1991-04-12 1995-06-27 Mitsubishi Denki Kabushiki Kaisha Motion compensation predicting coding method and apparatus
JP2507204B2 (en) * 1991-08-30 1996-06-12 松下電器産業株式会社 Video signal encoder
JPH0595540A (en) * 1991-09-30 1993-04-16 Sony Corp Dynamic picture encoder
DE69225859T2 (en) * 1991-10-02 1998-12-03 Matsushita Electric Ind Co Ltd Orthogonal transform encoder
US5220410A (en) * 1991-10-02 1993-06-15 Tandy Corporation Method and apparaus for decoding encoded video data
US5262854A (en) * 1992-02-21 1993-11-16 Rca Thomson Licensing Corporation Lower resolution HDTV receivers
US5363213A (en) * 1992-06-08 1994-11-08 Xerox Corporation Unquantized resolution conversion of bitmap images using error diffusion
JPH06205439A (en) * 1992-12-29 1994-07-22 Sony Corp Video signal recording device
JPH06233277A (en) * 1993-02-05 1994-08-19 Sharp Corp Picture encoder
KR940020832A (en) * 1993-02-25 1994-09-16 김주용 Adaptive quantization method of high-definition television and system coder using the same
TW301098B (en) * 1993-03-31 1997-03-21 Sony Co Ltd
JP3277418B2 (en) * 1993-09-09 2002-04-22 ソニー株式会社 Apparatus and method for detecting motion vector
JP2576771B2 (en) * 1993-09-28 1997-01-29 日本電気株式会社 Motion compensation prediction device
US5453787A (en) * 1993-12-10 1995-09-26 International Business Machines Corporation Variable spatial frequency chrominance encoding in software motion video compression

Also Published As

Publication number Publication date
DE69624669D1 (en) 2002-12-12
EP0782342B1 (en) 2002-11-06
US6018366A (en) 2000-01-25
AU7403496A (en) 1997-08-07
EP0782342A3 (en) 1998-02-11
DE69624669T2 (en) 2003-07-17
EP0782342A2 (en) 1997-07-02
AU693726B2 (en) 1998-07-02
KR100263627B1 (en) 2000-08-01
CA2191271C (en) 2000-07-25
JPH09238366A (en) 1997-09-09
KR970057951A (en) 1997-07-31
CA2191271A1 (en) 1997-06-28

Similar Documents

Publication Publication Date Title
EP1096801B1 (en) Device for predicting and decoding images
KR100244827B1 (en) Apparatus for adaptive compression of digital video data and method for selecting the compression mode
US7146056B2 (en) Efficient spatial scalable compression schemes
CA2478691C (en) Method for coding motion in a video sequence
KR100606588B1 (en) Picture processing device and picture processing method
JP4673758B2 (en) Image data decoding method and computer-readable medium having recorded program therefor
EP0538667B1 (en) Adaptive motion compensation using a plurality of motion compensators
KR0166716B1 (en) Encoding and decoding method and apparatus by using block dpcm
JP3163830B2 (en) Image signal transmission method and apparatus
KR100714696B1 (en) Method and apparatus for coding video using weighted prediction based on multi-layer
US5453799A (en) Unified motion estimation architecture
US6005623A (en) Image conversion apparatus for transforming compressed image data of different resolutions wherein side information is scaled
KR100484333B1 (en) Memory Management for Image Signal Processors
US5473377A (en) Method for quantizing intra-block DC transform coefficients using the human visual characteristics
US7526030B2 (en) Digital signal conversion method and digital signal conversion device
US5796434A (en) System and method for performing motion estimation in the DCT domain with improved efficiency
JP3085024B2 (en) Image recompressor and image recording device
KR100231186B1 (en) Method and device for decoding image data
EP0608618B1 (en) Method and system for encoding and/or decoding of the colour signal components of a picture signal
US6721359B1 (en) Method and apparatus for motion compensated video coding
EP1120972B1 (en) Video decoding apparatus for decoding shape and texture signals using inter/intra modes
JP3268306B2 (en) Image coding method
US5946043A (en) Video coding using adaptive coding of block parameters for coded/uncoded blocks
US6173013B1 (en) Method and apparatus for encoding enhancement and base layer image signals using a predicted image signal
EP1195993B1 (en) Transcoding of video signal

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20041007

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20041214

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20050210

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20050517

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20050519

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20080527

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090527

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100527

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110527

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120527

Year of fee payment: 7

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130527

Year of fee payment: 8

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140527

Year of fee payment: 9

LAPS Cancellation because of no payment of annual fees