WO2006001490A1 - Moving image encoding apparatus and moving image encoding method - Google Patents

Moving image encoding apparatus and moving image encoding method Download PDF

Info

Publication number
WO2006001490A1
WO2006001490A1 PCT/JP2005/012008 JP2005012008W
Authority
WO
WIPO (PCT)
Prior art keywords
frame
encoding
region
encoded
data
Prior art date
Application number
PCT/JP2005/012008
Other languages
French (fr)
Inventor
Hiroki Kishi
Hiroshi Kajiwara
Original Assignee
Canon Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Kabushiki Kaisha filed Critical Canon Kabushiki Kaisha
Priority to US11/571,187 priority Critical patent/US20080089413A1/en
Publication of WO2006001490A1 publication Critical patent/WO2006001490A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/127 Prioritisation of hardware or computational resources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H04N19/64 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
    • H04N19/645 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission by grouping of coefficients into blocks after the transform
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/63 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
    • H04N19/64 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
    • H04N19/647 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission using significance based coding, e.g. Embedded Zerotrees of Wavelets [EZW] or Set Partitioning in Hierarchical Trees [SPIHT]

Definitions

  • the present invention relates to a moving image encoding apparatus and method and, more particularly, to a moving image encoding apparatus and method, which encode a moving image using motion prediction.
  • PCs, mainly used as transmitting/receiving side devices, have seen great gains in CPU performance, graphics performance, and the like, while various devices with different processing performances, such as PDAs, portable phones, TVs, hard disk recorders, and the like, have a network connection function. For this reason, a function called scalability, in which single data can cope with a changing communication line capacity and the processing performance of a receiving side device, has received a lot of attention.
  • a JPEG2000 coding scheme is well known. This scheme is internationally standardized, and its details are described in ISO/IEC15444-1 (Information technology - JPEG2000 image coding system - Part 1: Core coding system).
  • JPEG2000 is characterized by using the discrete wavelet transform (DWT) to divide input image data into a plurality of frequency bands.
  • the coefficients of the divided data are quantized, and the quantized values undergo arithmetic encoding for respective bitplanes.
  • a technique called ROI (Region Of Interest), which relatively improves the image quality of a region of interest in an image and is not available in conventional encoding techniques, is realized.
  • Fig. 23 shows an encoding unit based on the JPEG2000 coding scheme.
  • a tile segmentation unit 9001 segments an input image into a plurality of regions (tiles). This function is an option.
  • a DWT unit 9002 divides the respective tiles into frequency bands using the discrete wavelet transform.
  • a quantizer 9003 quantizes respective coefficients.
  • An ROI designation unit 9007 can set a region, such as an important region and a region of interest, to be coded with a higher quality than the other regions. At this time, the quantizer 9003 performs a shift-up process.
  • An entropy encoder 9004 performs entropy encoding by the EBCOT (Embedded Block Coding with Optimized Truncation) scheme. The lower bits of the encoded data are discarded by a bit truncating unit 9005 as needed for rate control.
  • a code forming unit 9006 appends header information to the encoded data, selects various scalability functions, and outputs the encoded data.
  • Fig. 24 shows a decoding unit based on the JPEG2000 coding scheme.
  • a code analysis unit 9020 analyzes a header to obtain information required to form a hierarchy.
  • a bit truncating unit 9021 discards the lower bits of input encoded data in correspondence with an internal buffer size and decoding processing performance.
  • An entropy decoder 9022 decodes the encoded data based on the EBCOT coding scheme to obtain quantized wavelet transformation coefficients.
  • An inverse quantizer 9023 inversely quantizes the quantized wavelet transformation coefficients.
  • An inverse DWT unit 9024 performs the inverse discrete wavelet transform to reclaim image data from the wavelet transformation coefficients.
  • a tile composition unit 9025 composites a plurality of tiles to reconstruct image data.
  • a Motion JPEG2000 scheme that encodes a moving image by applying the JPEG2000 coding scheme to respective frames of the moving image has been recommended (for example, see ISO/IEC15444-3 (Information technology - JPEG2000 image coding system - Part 3: Motion JPEG2000)).
  • encoding processes are independently done for respective frames. Since encoding using time correlation is not performed, redundancy remains between adjacent frames. For this reason, it is difficult to effectively reduce the code size compared to a moving image coding scheme using time correlation.
  • an MPEG coding scheme performs motion compensation to improve coding efficiency (see, e.g., "Latest MPEG Text", p. 76, etc., ASCII Publishing Division, 1994).
  • Fig. 25 shows the arrangement of that encoding unit.
  • a block segmentation unit 9031 divides data into blocks of 8 x 8 pixels, and a difference unit 9032 obtains the differences between the data of the respective blocks and predicted data obtained by motion compensation.
  • a DCT unit 9033 performs discrete cosine transformation, and a quantizer 9034 performs quantization.
  • the quantization result is encoded by an entropy encoder 9035.
  • a code forming unit 9036 appends header information to the encoded data, and outputs the encoded data.
  • an inverse quantizer 9037 performs inverse quantization in parallel with the process of the entropy encoder 9035, an inverse DCT unit 9038 applies inverse transformation of the discrete cosine transformation, and an adder 9039 adds predicted data and stores the sum data in a frame memory 9040.
  • a motion compensation unit 9041 calculates motion vectors with reference to an input image and reference frames stored in the frame memory 9040, thus generating predicted data. For the purpose of improving the efficiency of the JPEG2000 coding, a compression scheme obtained by adding motion compensation to JPEG2000 is available.
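  • To make the motion-compensation step above concrete, here is a minimal sketch of an exhaustive (full-search) block-matching search; the function name, the SAD criterion, and the 7-pixel search radius are illustrative assumptions, not details taken from the MPEG description above.

```python
import numpy as np

def motion_search(cur_block, ref_frame, top, left, radius=7):
    """Exhaustive block matching: find the displacement (dy, dx), within
    +/-radius of the block's own position, that minimizes the sum of
    absolute differences (SAD) against the reference frame."""
    h, w = cur_block.shape
    cur = cur_block.astype(np.int64)
    best_mv, best_sad = (0, 0), None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] \
                    or x + w > ref_frame.shape[1]:
                continue                  # candidate leaves the frame
            cand = ref_frame[y:y + h, x:x + w].astype(np.int64)
            sad = np.abs(cand - cur).sum()
            if best_sad is None or sad < best_sad:
                best_mv, best_sad = (dy, dx), sad
    return best_mv, best_sad

# The predicted data is the reference block at best_mv; the difference
# unit then encodes cur_block minus that prediction.
```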
  • when reference data for prediction (to be referred to as "reference data" hereinafter) is partially discarded by, e.g., truncation of the lower bitplanes, predictive errors accumulate, thus considerably deteriorating the inter-frame image quality.
  • Fig. 26 shows a concept of reference data between inter-frame images.
  • a moving image encoding apparatus for encoding a moving image using inter-frame motion prediction, comprising: a segmentation unit that segments each frame into a plurality of segmented regions; a determination unit that determines a region of interest from a frame to be encoded; an inter-frame prediction unit that retrieves a pixel set, from the region of interest of a previous or succeeding frame, having high correlation to each segmented region of a frame to be encoded, calculates a difference between the data of each segmented region and data of the retrieved pixel set, and outputs difference data; and an encoding unit that encodes the difference data.
  • a moving image encoding apparatus for encoding a moving image using inter-frame motion prediction, comprising: a segmentation unit that segments each frame into a plurality of segmented regions; a determination unit that determines a region of interest from a frame to be encoded; a transformation unit that performs data transformation for each segmented region to generate transformation coefficients; an inter-frame prediction unit that retrieves transformation coefficients, from transformation coefficients corresponding to the region of interest of a previous or succeeding frame, having high correlation to transformation coefficients of each segmented region of a frame to be encoded, calculates a difference between the transformation coefficients of each segmented region and the retrieved transformation coefficients, and outputs difference data; and an encoding unit that encodes the difference data.
  • a moving image encoding method for encoding a moving image using inter-frame motion prediction, comprising: segmenting each frame into a plurality of segmented regions; determining a region of interest from a frame to be encoded; retrieving a pixel set, from the region of interest of a previous or succeeding frame, having high correlation to each segmented region of a frame to be encoded, calculating a difference between the data of each segmented region and data of the retrieved pixel set, and outputting difference data; and encoding the difference data.
  • a moving image encoding method for encoding a moving image using inter-frame motion prediction, comprising: segmenting each frame into a plurality of segmented regions; determining a region of interest from a frame to be encoded; performing data transformation for each segmented region to generate transformation coefficients; retrieving transformation coefficients, from transformation coefficients corresponding to the region of interest of a previous or succeeding frame, having high correlation to transformation coefficients of each segmented region of a frame to be encoded, calculating a difference between the transformation coefficients of each segmented region and the retrieved transformation coefficients, and outputting difference data; and encoding the difference data.
  • Fig. 1 is a view showing the concept of a moving image to be encoded in an embodiment of the present invention
  • Fig. 2 is a block diagram showing the arrangement of a moving image processing apparatus according to the embodiment of the present invention
  • Fig. 3 is a block diagram showing the arrangement of an encoding unit according to a first embodiment of the present invention
  • Fig. 4 is a flowchart showing the encoding process according to the first embodiment of the present invention
  • Fig. 5 is an explanatory view of tile segmentation
  • Fig. 6 is a view showing an example of ROI tiles;
  • Fig. 7 is an explanatory view of linear discrete wavelet transform;
  • Fig. 8A is a view for decomposing data into four subbands,
  • Fig. 8B is a view for further decomposing an LL subband in Fig. 8A into four subbands, and
  • Fig. 8C is a view for further decomposing an LL subband in Fig. 8B into four subbands;
  • Fig. 9 is an explanatory view of quantization steps;
  • Fig. 10 is an explanatory view of code block segmentation;
  • Fig. 11 is an explanatory view of bitplane segmentation;
  • Fig. 12 is an explanatory view of coding passes;
  • Fig. 13 is an explanatory view of layer generation;
  • Fig. 14 is an explanatory view of layer generation;
  • Fig. 15 is an explanatory view of the format of encoded tile data;
  • Fig. 16 is an explanatory view of the format of encoded frame data;
  • Fig. 17 is a view showing the concept of reference data for MC prediction according to the first embodiment of the present invention;
  • Fig. 18 is a view showing the concept of reference data for MC prediction according to a second embodiment of the present invention;
  • Fig. 19 is a block diagram showing the arrangement of an encoding unit according to a third embodiment of the present invention;
  • Fig. 20 is a flowchart showing the encoding process according to the third embodiment of the present invention;
  • Fig. 21A shows an ROI and non-ROI in respective subbands, and Figs. 21B and 21C show changes in quantized coefficient values by shift-up;
  • Fig. 22 is a view showing the concept of reference data for MC prediction in the third embodiment of the present invention
  • Fig. 23 is a block diagram showing an encoding unit based on the JPEG2000 coding scheme
  • Fig. 24 is a block diagram showing a decoding unit based on the JPEG2000 coding scheme
  • Fig. 25 is a block diagram showing an encoding unit based on the MPEG coding scheme
  • Fig. 26 is a view showing the concept of conventional reference data for MC prediction.
  • moving image data to be processed in the present invention is formed of image data and audio data, and the image data is formed of frames indicating information at consecutive moments.
  • Fig. 2 is a block diagram showing the arrangement of a moving image processing apparatus according to the first embodiment.
  • reference numeral 200 denotes a CPU; 201, a memory; 202, a terminal; 203, a storage unit; 204, an image sensing unit; 205, a display unit; and 206, an encoding unit.
  • the present invention can be applied to an image which is expressed by the number of bits other than 8 bits (e.g., 4 bits, 10 bits, or 12 bits per pixel). Further, the present invention can be applied to not only a monochrome image but also a color image (RGB/Lab/YCrCb). Also, the present invention can be applied to multi-valued information which represents the states and the like of each pixel that forms an image. An example of the multi-valued information is a multi-valued index value which represents the color of each pixel. In these applications, each kind of multi-valued information can be considered as monochrome frame data to be described later.
  • Pixel data which form each frame data of an image to be encoded are input from the image sensing unit 204 to a frame data input unit 301 in a raster scan order, and are then output to a tile segmentation unit 302.
  • the tile segmentation unit 302 segments one image input from the frame data input unit 301 into N tiles, as shown in Fig. 5 (step S401), and assigns tile numbers 0, 1, 2,..., N-1 to the N tiles in a raster scan order in the first embodiment so as to identify respective tiles.
  • Data that represents each tile will be referred to as "tile data" hereinafter.
  • An ROI tile determination unit 317 determines a tile (ROI tile) or tiles of, e.g., an important area and an area of interest, to be encoded with higher image quality than other tiles (step S402).
  • Fig. 6 shows an example of the determined ROI tiles. Note that the ROI tile determination unit 317 determines, as an ROI tile or tiles, a region which includes a preferred region designated by the user via an input device (not shown).
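  • For illustration, a minimal sketch of steps S401 and S402, assuming the frame size is a multiple of the tile size and the user-designated region is an axis-aligned rectangle; the helper names are hypothetical.

```python
import numpy as np

def segment_tiles(frame, tile_h, tile_w):
    """Step S401: segment a frame into N tiles numbered 0..N-1 in
    raster-scan order (assumes the frame size is a multiple of the
    tile size, as in the 8 x 6 example of Fig. 5)."""
    H, W = frame.shape
    return [frame[y:y + tile_h, x:x + tile_w]
            for y in range(0, H, tile_h)
            for x in range(0, W, tile_w)]

def roi_tile_flags(H, W, tile_h, tile_w, roi_rect):
    """Step S402: flag every tile that intersects the user-designated
    rectangle roi_rect = (top, left, bottom, right), exclusive bounds."""
    t, l, b, r = roi_rect
    cols = W // tile_w
    flags = []
    for i in range((H // tile_h) * cols):           # tiles in raster order
        ty, tx = (i // cols) * tile_h, (i % cols) * tile_w
        flags.append(not (ty + tile_h <= t or b <= ty
                          or tx + tile_w <= l or r <= tx))
    return flags

frame = np.zeros((96, 128), dtype=np.uint8)         # hypothetical frame
tiles = segment_tiles(frame, 16, 16)                # 48 tiles, as in Fig. 5
flags = roi_tile_flags(96, 128, 16, 16, (20, 30, 60, 70))
```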
  • a frame attribute checking unit 316 checks if the frame to be encoded is an I-frame (Intra frame) or a P-frame (Predictive frame) (step S404). If the frame to be encoded is an I-frame, tile data are output to the discrete wavelet transformer 303 without being processed by a subtractor 314. On the other hand, if the frame to be encoded is a P-frame, frame data is copied to a motion compensation (MC) prediction unit 310.
  • the discrete wavelet transformer 303 computes the discrete wavelet transform using data of a plurality of pixels (reference pixels) (to be referred to as "reference pixel data" hereinafter) in one tile data x(n) in frame data of one frame image, which is input from the tile segmentation unit 302 (step S405).
  • Y(2n) and Y(2n+1) are discrete wavelet transformation coefficient sequences; Y(2n) indicates a low-frequency subband, and Y(2n+1) indicates a high-frequency subband.
  • floor{X} in transformation formulas (1) indicates the maximum integer which does not exceed X.
  • Transformation formulas (1) correspond to one-dimensional data.
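  • The one-dimensional transform of formulas (1), and its inverse (given as formulas (4) in the description), can be sketched as follows; the symmetric handling of boundary samples is an assumption, since the text does not specify edge extension.

```python
import numpy as np

def dwt53_forward_1d(x):
    """Formulas (1): Y(2n+1) = X(2n+1) - floor((X(2n)+X(2n+2))/2),
    Y(2n) = X(2n) + floor((Y(2n-1)+Y(2n+1)+2)/4). Returns (low, high)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    even_next = np.append(even[1:], even[-1])      # assumed boundary rule
    high = odd - (even + even_next) // 2           # // is floor division
    high_prev = np.insert(high[:-1], 0, high[0])   # assumed boundary rule
    low = even + (high_prev + high + 2) // 4
    return low, high

def dwt53_inverse_1d(low, high):
    """Formulas (4): exact integer reconstruction of the input."""
    high_prev = np.insert(high[:-1], 0, high[0])
    even = low - (high_prev + high + 2) // 4
    even_next = np.append(even[1:], even[-1])
    odd = high + (even + even_next) // 2
    out = np.empty(even.size + odd.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([10, 12, 14, 13, 11, 9, 8, 8])
low, high = dwt53_forward_1d(x)
assert np.array_equal(dwt53_inverse_1d(low, high), x)  # lossless round trip
```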
  • when two-dimensional transformation is attained by applying this transformation in turn in the horizontal and vertical directions, data can be broken up into four subbands LL, HL, LH, and HH, as shown in Fig. 8A.
  • L indicates a low-frequency subband
  • H indicates a high-frequency subband
  • the first letter of the combinations of L and H expresses the type of a subband in the horizontal direction
  • the second letter of the combinations of L and H expresses the type of the subband in the vertical direction.
  • the LL subband is similarly broken up into four subbands (Fig. 8B), and an LL subband of these subbands is further broken up into four subbands (Fig. 8C).
  • the LL subband is a subband of level 0. Since there is only one LL subband, no suffix is appended.
  • a decoded image obtained by decoding subbands from level 0 to level n will be referred to as a decoded image of level n hereinafter.
  • the decoded image has higher resolution with increasing level.
  • the transformation coefficients of the 10 subbands are temporarily stored in a buffer 304, and are output to a coefficient quantizer 305 in the order of LL, HL1, LH1, HH1, HL2, LH2, HH2, HL3, LH3, and HH3, i.e., in turn from a subband of lower level to that of higher level.
  • the coefficient quantizer 305 quantizes the transformation coefficients of the subbands output from the buffer 304 by quantization steps which are determined for respective frequency components (step S406), and outputs quantized values (quantized coefficient values) to an entropy encoder 306 and an inverse coefficient quantizer 312.
  • let X be a coefficient value, and q be the quantization step value corresponding to the frequency component to which this coefficient belongs; the quantized coefficient value is then given by Q(X) = floor{(X/q)+0.5} ...(2).
  • Fig. 9 shows the correspondence between frequency components and quantization steps in this embodiment. As shown in Fig. 9, a larger quantization step is given to a subband of higher level in this embodiment.
  • the quantization steps for respective subbands are stored in advance in a memory such as a RAM, ROM, or the like (not shown).
  • these quantized coefficient values are output to the entropy encoder 306 and the inverse coefficient quantizer 312.
  • the obtained decoded pixel is recorded in a frame memory 311 without being processed by an adder 315 (step S409).
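  • A minimal sketch of quantization formula (2) and inverse quantization formula (3); the per-subband step table is a made-up example in the spirit of Fig. 9, whose actual values are not reproduced in this text.

```python
import numpy as np

def quantize(Y, q):
    """Formula (2): Q(X) = floor(X/q + 0.5), i.e. rounding to nearest."""
    return np.floor(np.asarray(Y, dtype=np.float64) / q + 0.5).astype(np.int64)

def dequantize(Q, q):
    """Formula (3): Y = q * Q. Lossy -- the result only approximates
    the original coefficients unless q == 1."""
    return q * np.asarray(Q, dtype=np.int64)

# Hypothetical per-subband steps, larger for higher levels in the
# spirit of Fig. 9 (the actual values are not given in this text).
steps = {"LL": 1, "HL1": 2, "LH1": 2, "HH1": 4}
coeffs = np.array([-7, -3, 0, 5, 12])
print(dequantize(quantize(coeffs, steps["HH1"]), steps["HH1"]))  # [-8 -4 0 4 12]
```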
  • the entropy encoder 306 entropy-encodes the input quantized coefficient values (step S410).
  • each subband, as a set of input quantized coefficient values, is segmented into rectangles (to be referred to as "code blocks" hereinafter), as shown in Fig. 10.
  • the code block is set to have a size of 2^m x 2^n (m and n are integers equal to or larger than 2) or the like.
  • the code block is broken up into bitplanes, as shown in Fig. 11. Bits on the respective bitplanes are categorized into three groups on the basis of predetermined categorizing rules to generate three different coding passes as sets of bits of identical types, as shown in Fig. 12.
  • the three different coding passes include a significance propagation pass as a coding pass of insignificant coefficients around which significant coefficients exist, a magnitude refinement pass as a coding pass of significant coefficients, and a cleanup pass as a coding pass of remaining coefficient information.
  • the input quantized coefficient values undergo binary arithmetic encoding as entropy encoding using the obtained coding passes as units, thereby generating entropy encoded values.
  • entropy encoding of one code block is done in the order from upper to lower bitplanes, and a given bitplane of that code block is encoded in turn from the upper one of the three different passes shown in Fig. 12.
  • Fig. 12 shows the classification of the coding passes of the fourth bitplane shown in Fig. 11.
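  • A sketch of the bitplane decomposition of Fig. 11 that the coding passes operate on; the sign/magnitude split and the plane count derived from the largest magnitude are implementation assumptions, and the EBCOT pass classification and arithmetic coder themselves are beyond this sketch.

```python
import numpy as np

def bitplanes_msb_first(code_block):
    """Split a code block of quantized coefficients into a sign array and
    magnitude bitplanes ordered from most to least significant (Fig. 11).
    Entropy encoding then proceeds from the upper planes downward."""
    mag = np.abs(code_block).astype(np.int64)
    n_planes = max(int(mag.max()).bit_length(), 1)
    planes = [((mag >> p) & 1) for p in range(n_planes - 1, -1, -1)]
    return np.sign(code_block), planes

block = np.array([[3, -5], [0, 9]])
signs, planes = bitplanes_msb_first(block)
for p in planes:
    print(p)   # 4 planes, since the maximum magnitude 9 is 1001 in binary
```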
  • the entropy-encoded coding passes are output to an encoded tile data generator 307.
  • the encoded tile data generator 307 forms one or a plurality of layers based on the plurality of input coding passes, and generates encoded tile data using these layers as data units (step S411). The format of layers will be described below.
  • the encoded tile data generator 307 forms layers after it collects the entropy-encoded coding passes from the plurality of code blocks in the plurality of subbands, as shown in Fig. 13.
  • Fig. 13 shows a case wherein five layers are to be generated.
  • coding passes are always selected in turn from the uppermost one in that code block, as shown in Fig. 14.
  • the encoded tile data generator 307 arranges the generated layers in turn from an upper one, and appends a tile header to the head of these layers, thus generating encoded tile data, as shown in Fig. 15.
  • This header carries information used to identify a tile, the code length of the encoded tile data, various parameters used in compression, and the like.
  • the encoded tile data generated in this way is output to an encoded frame data generator 308. Whether or not tile data to be encoded still remain is determined in step S412 by comparing the value of counter i and the number of tiles.
  • the encoded frame data generator 308 arranges the encoded tile data shown in Fig. 15 in a predetermined order (e.g., ascending order of tile number), as shown in Fig. 16, and appends a header to the head of these encoded tile data, thus generating encoded frame data (step S426).
  • This header carries information such as the vertical x horizontal sizes of the input image and each tile, various parameters used in compression, and the like.
  • the encoded frame data generated in this way is output from an encoded frame data output unit 309 to the storage unit 203 shown in Fig. 2.
  • the processes in steps S407 to S409 are done prior to those in steps S410 and S411. However, these processes may be done in the reverse order or in parallel.
  • the processing to be executed when the frame to be encoded is a P-frame will be explained below.
  • the tile segmentation unit 302 copies the frame data to the MC prediction unit 310, which performs MC prediction between the frame (previous frame) recorded in the frame memory 311 and the frame to be encoded (step S414).
  • the reference data for MC prediction is limited to the ROI tile or tiles of the previous frame, as shown in Fig. 17. This is to avoid the image quality drop of non-ROI tiles due to accumulation of discarded data in the encoded tile data generator.
  • a subtractor 314 calculates the difference between the previous frame and the frame to be encoded on the basis of the predicted result (step S415).
  • the subtraction result (difference data) obtained by the subtractor 314 undergoes discrete wavelet transform (step S416), quantization (step S417), inverse quantization (step S418), inverse discrete wavelet transform (step S419), entropy encoding (step S422), encoded tile data generation (step S423), tile number check (step S424), and encoded frame data generation (step S426), in the same manner as in the processes for the I-frame.
  • processes for calculating the sum of the difference data and previous frame by the adder 315 to reclaim the frame to be encoded (step S420), and recording the obtained decoded frame in the frame memory 311 (step S421) are added.
  • step S414 MC prediction is made using the decoded frame recorded in this process.
  • the processes in steps S414 to S423 are repeated via the process for incrementing counter i one by one in step S425, until it is determined in step S424 that no tile data to be encoded remains.
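  • A sketch of the restriction of Fig. 17: candidate reference blocks are accepted only if they lie entirely inside the ROI region of the previous decoded frame. The per-pixel boolean mask and the full-search strategy are assumptions carried over from the earlier sketch.

```python
import numpy as np

def roi_restricted_search(cur_block, ref_frame, roi_mask, top, left, radius=7):
    """Full-search block matching in which reference data is limited to
    the ROI region of the reference frame (step S414). roi_mask is a
    per-pixel boolean map that is True inside ROI tiles."""
    h, w = cur_block.shape
    cur = cur_block.astype(np.int64)
    best_mv, best_sad = None, None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + h > ref_frame.shape[0] \
                    or x + w > ref_frame.shape[1]:
                continue
            if not roi_mask[y:y + h, x:x + w].all():
                continue            # candidate touches a non-ROI tile: excluded
            cand = ref_frame[y:y + h, x:x + w].astype(np.int64)
            sad = np.abs(cand - cur).sum()
            if best_sad is None or sad < best_sad:
                best_mv, best_sad = (dy, dx), sad
    return best_mv, best_sad        # best_mv is None if no ROI candidate exists
```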
  • a data unit used in prediction may adopt, inter alia, a tile, a block obtained by further segmenting a tile, and the like.
  • an ROI tile or tiles of the previous frame is used as reference data for MC prediction in the above explanation; however, an ROI tile or tiles of any frame may be used as long as it can be used for MC prediction.
  • the processes in steps S418 to S421 are executed prior to those in steps S422 and S423. However, these processes may be done in the reverse order or in parallel.
  • the first embodiment has explained the method of avoiding image quality drop of P-frames due to accumulation of discarded data in the encoded tile data generator by limiting the reference data for prediction to the ROI tile or tiles.
  • the user sets a given object as an ROI, and a tile or tiles including that object is determined as an ROI tile or tiles. For this reason, neighboring frames have similar pixel distributions and characteristics of ROI tiles.
  • MC prediction is done between only ROI tiles.
  • the second embodiment is substantially the same as the first embodiment, except for the process in step S415 in the encoding processing shown in Fig. 4. Therefore, only a difference will be explained below.
  • Fig. 18 shows the process of the MC prediction unit 310, which is executed in step S415 in the second embodiment. As shown in Fig. 18, MC prediction is executed between only ROI tiles, and that of non-ROI tiles is skipped.
  • the image quality drop of P-frames can be avoided while wasteful operations are skipped.
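  • A sketch of the second embodiment's control flow, reusing the hypothetical helpers from the previous sketches: the search runs only for ROI tiles of the frame to be encoded and is skipped for non-ROI tiles.

```python
import numpy as np

def predict_p_frame_tiles(tiles, origins, roi_flags, prev_decoded, roi_mask):
    """Second embodiment (Fig. 18): run MC prediction only between ROI
    tiles; for non-ROI tiles the search is skipped and the tile data is
    passed on unchanged. Uses the hypothetical roi_restricted_search
    from the previous sketch."""
    residuals = []
    for tile, (top, left), is_roi in zip(tiles, origins, roi_flags):
        if is_roi:
            mv, _ = roi_restricted_search(tile, prev_decoded, roi_mask,
                                          top, left)
            if mv is not None:
                dy, dx = mv
                h, w = tile.shape
                pred = prev_decoded[top + dy:top + dy + h,
                                    left + dx:left + dx + w]
                residuals.append(tile.astype(np.int64) - pred)
                continue
        residuals.append(tile.astype(np.int64))   # MC skipped: no prediction
    return residuals
```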
  • Fig. 19 is a block diagram of the encoding unit 206 according to the third embodiment. Assume that the moving image processing apparatus has the same arrangement as that shown in Fig. 2. In the arrangement shown in Fig. 19, the ROI tile determination unit 317 is replaced by an ROI determination unit 417 compared to the block diagram of the encoding unit 206 in the first embodiment.
  • the former ROI tile determination unit 317 determines a tile or tiles including a region extracted by an object extraction unit (not shown) as an ROI tile or tiles, while the latter ROI determination unit 417 determines an extracted region as an ROI region by pixels.
  • Fig. 21A shows an ROI and non-ROI in respective subbands
  • Figs. 21B and 21C are conceptual views showing changes in quantized coefficient values due to shift-up.
  • the inverse ROI unit 419 converts coefficients from Fig. 21C back to Fig. 21B.
  • Fig. 20 is a flowchart showing the encoding process of the third embodiment.
  • the same reference numbers denote the same processes as in the flowchart of Fig. 4, and a description thereof will be omitted.
  • a bit shift-up process is done so that bits which form a quantized coefficient value Q' inside the ROI never exist at the same digit positions as bits which form a quantized coefficient value Q'' outside the ROI. With the above process, only the quantized coefficient values associated with the ROI are shifted to higher bits by B bits.
  • the inverse ROI unit 419 executes a process for shifting down the ROI whose bits are shifted up by the ROI unit 418 (step S507).
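  • A sketch of the shift-up and its inverse as used by the ROI unit 418 and the inverse ROI unit 419; operating on sign and magnitude separately, and the rule for choosing B, are assumptions consistent with the description above.

```python
import numpy as np

def roi_shift_up(Q, roi_mask, B):
    """Shift only the ROI quantized coefficient values up by B bits.
    For the digit positions of ROI and non-ROI bits never to overlap,
    B must satisfy 2**B > max(|non-ROI values|)."""
    Q = np.asarray(Q, dtype=np.int64)
    shifted = np.sign(Q) * (np.abs(Q) << B)
    return np.where(roi_mask, shifted, Q)

def roi_shift_down(Qs, roi_mask, B):
    """Inverse ROI process: undo the shift-up before MC prediction."""
    Qs = np.asarray(Qs, dtype=np.int64)
    return np.where(roi_mask, np.sign(Qs) * (np.abs(Qs) >> B), Qs)

Q = np.array([3, -2, 7, 1])
mask = np.array([True, False, True, False])
B = 3                                   # 2**3 = 8 > max non-ROI magnitude 2
assert np.array_equal(roi_shift_down(roi_shift_up(Q, mask, B), mask, B), Q)
```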
  • the discrete wavelet transformer 303 performs discrete wavelet transform (step S514).
  • MC prediction unit 310 performs MC prediction on the discrete wavelet transformation coefficient space (step S515).
  • the MC prediction unit 310 limits reference data for prediction to only DWT coefficients associated with ROI coefficients, as shown in Fig. 22.
  • the subtractor 314 calculates the difference (difference data) between the previous frame and the frame to be encoded on the basis of the predicted result (step S516).
  • the coefficient quantizer 305 quantizes this difference data (step S417).
  • the ROI unit 418 changes the quantized coefficient values of the difference data depending on whether or not the value is of the ROI, using formulas (5) above (step S517).
  • the inverse ROI unit 419 executes a process for shifting down the ROI whose bits are shifted up by the ROI unit 418 (step S518).
  • MC prediction is executed using only coefficients associated with the ROI, thus avoiding the image quality drop of P-frames.
  • the above embodiments have been explained using the discrete wavelet transform.
  • the scope of the present invention includes embodiments that adopt discrete cosine transformation.
  • the present invention may be applied to either a part of a system constituted by a plurality of devices (e.g., a host computer, interface device, reader, printer, and the like), or a part of an apparatus comprising a single device (e.g., a copying machine, digital camera, or the like).
  • the invention can be implemented by supplying a software program, which implements the functions of the foregoing embodiments, directly or indirectly to a system or apparatus, reading the supplied program code with a computer of the system or apparatus, and then executing the program code.
  • the program code installed in the computer also implements the present invention.
  • the claims of the present invention also cover a computer program for the purpose of implementing the functions of the present invention.
  • the program may be executed in any form, such as an object code, a program executed by an interpreter, or script data supplied to an operating system.
  • Examples of storage media that can be used for supplying the program are a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a CD-ROM, a CD-R, a CD-RW, a magnetic tape, a non-volatile memory card, a ROM, and a DVD (a DVD-ROM and a DVD-R).
  • a client computer can be connected to a website on the Internet using a browser of the client computer, and the computer program of the present invention or an automatically-installable compressed file of the program can be downloaded to a recording medium such as a hard disk.
  • the program of the present invention can be supplied by dividing the program code constituting the program into a plurality of files and downloading the files from different websites.
  • it is also possible to encrypt the program of the present invention, store the encrypted program on a storage medium such as a CD-ROM, distribute the storage medium to users, allow users who meet certain requirements to download decryption key information from a website on the World Wide Web (WWW) via the Internet, and allow these users to decrypt the encrypted program by using the key information, whereby the program is installed in the user computer.
  • an operating system or the like running on the computer may perform all or a part of the actual processing so that the functions of the foregoing embodiments can be implemented by this processing.

Abstract

An encoding unit that encodes a moving image using inter-frame motion prediction segments each frame into a plurality of segmented regions (302), and determines a region of interest from a frame to be encoded (317). The encoding unit (310) retrieves a pixel set, from the region of interest of the previous or succeeding frame, having high correlation to each segmented region of the frame to be encoded, calculates the difference between the data of each segmented region and data of the retrieved pixel set, and outputs difference data (314). Then, the encoding unit encodes the difference data (303, 306).

Description

DESCRIPTION MOVING IMAGE ENCODING APPARATUS AND MOVING IMAGE ENCODING METHOD
TECHNICAL FIELD The present invention relates to a moving image encoding apparatus and method and, more particularly, to a moving image encoding apparatus and method, which encode a moving image using motion prediction.
BACKGROUND ART In recent years, the contents which flow via networks have been developing toward larger capacity and greater diversity, i.e., from text information to still image information and on to moving image information. Encoding techniques that compress the information size have been developed, and the developed encoding techniques have spread through international standardization. On the other hand, networks themselves are also developing toward larger capacity and greater diversity, and one content passes through various environments from the transmitting side to the receiving side. Also, the processing performance of the transmitting/receiving side devices is diversified. PCs, mainly used as transmitting/receiving side devices, have seen great gains in CPU performance, graphics performance, and the like, while various devices with different processing performances, such as PDAs, portable phones, TVs, hard disk recorders, and the like, have a network connection function. For this reason, a function called scalability, in which single data can cope with a changing communication line capacity and the processing performance of a receiving side device, has received a lot of attention. As a still image encoding method having this scalability function, the JPEG2000 coding scheme is well known. This scheme is internationally standardized, and its details are described in ISO/IEC15444-1 (Information technology - JPEG2000 image coding system - Part 1: Core coding system). JPEG2000 is characterized by using the discrete wavelet transform (DWT) to divide input image data into a plurality of frequency bands. The coefficients of the divided data are quantized, and the quantized values undergo arithmetic encoding for respective bitplanes. By encoding or decoding a required number of bitplanes, detailed hierarchy control is realized. In the JPEG2000 coding scheme, a technique called ROI (Region Of Interest), which relatively improves the image quality of a region of interest in an image and is not available in conventional encoding techniques, is realized. Fig. 23 shows an encoding unit based on the JPEG2000 coding scheme. A tile segmentation unit 9001 segments an input image into a plurality of regions (tiles). This function is optional. A DWT unit 9002 divides the respective tiles into frequency bands using the discrete wavelet transform. A quantizer 9003 quantizes the respective coefficients. An ROI designation unit 9007 can set a region, such as an important region or a region of interest, to be coded with a higher quality than the other regions. At this time, the quantizer 9003 performs a shift-up process. An entropy encoder 9004 performs entropy encoding by the EBCOT (Embedded Block Coding with Optimized Truncation) scheme. The lower bits of the encoded data are discarded by a bit truncating unit 9005 as needed for rate control. A code forming unit 9006 appends header information to the encoded data, selects various scalability functions, and outputs the encoded data. Fig. 24 shows a decoding unit based on the JPEG2000 coding scheme. A code analysis unit 9020 analyzes a header to obtain the information required to form a hierarchy. A bit truncating unit 9021 discards the lower bits of the input encoded data in correspondence with an internal buffer size and the decoding processing performance. An entropy decoder 9022 decodes the encoded data based on the EBCOT coding scheme to obtain quantized wavelet transformation coefficients. 
An inverse quantizer 9023 inversely quantizes the quantized wavelet transformation coefficients. An inverse DWT unit 9024 performs the inverse discrete wavelet transform to reclaim image data from the wavelet transformation coefficients. A tile composition unit 9025 composites a plurality of tiles to reconstruct the image data. Also, a Motion JPEG2000 scheme that encodes a moving image by applying the JPEG2000 coding scheme to the respective frames of the moving image has been recommended (for example, see ISO/IEC15444-3 (Information technology - JPEG2000 image coding system - Part 3: Motion JPEG2000)). In this scheme, the encoding processes are done independently for the respective frames. Since encoding using time correlation is not performed, redundancy remains between adjacent frames. For this reason, it is difficult to reduce the code size as effectively as a moving image coding scheme that uses time correlation. On the other hand, an MPEG coding scheme performs motion compensation to improve coding efficiency (see, e.g., "Latest MPEG Text", p. 76, etc., ASCII Publishing Division, 1994). Fig. 25 shows the arrangement of that encoding unit. A block segmentation unit 9031 divides data into blocks of 8 x 8 pixels, and a difference unit 9032 obtains the differences between the data of the respective blocks and predicted data obtained by motion compensation. A DCT unit 9033 performs the discrete cosine transformation, and a quantizer 9034 performs quantization. The quantization result is encoded by an entropy encoder 9035. A code forming unit 9036 appends header information to the encoded data, and outputs the encoded data. On the other hand, an inverse quantizer 9037 performs inverse quantization in parallel with the process of the entropy encoder 9035, an inverse DCT unit 9038 applies the inverse transformation of the discrete cosine transformation, and an adder 9039 adds predicted data and stores the sum data in a frame memory 9040. A motion compensation unit 9041 calculates motion vectors with reference to an input image and reference frames stored in the frame memory 9040, thus generating predicted data. For the purpose of improving the efficiency of JPEG2000 coding, a compression scheme obtained by adding motion compensation to JPEG2000 is available. However, in such a moving image compression scheme, when reference data for prediction (to be referred to as "reference data" hereinafter) is partially discarded by, e.g., truncation of the lower bitplanes, predictive errors accumulate, thus considerably deteriorating the inter-frame image quality. Fig. 26 shows the concept of reference data between inter-frame images. DISCLOSURE OF INVENTION The present invention has been made in consideration of the above situation, and has as its object to suppress inter-frame image quality deterioration upon encoding a moving image using motion prediction. 
According to the present invention, the foregoing object is attained by providing a moving image encoding apparatus for encoding a moving image using inter-frame motion prediction, comprising: a segmentation unit that segments each frame into a plurality of segmented regions; a determination unit that determines a region of interest from a frame to be encoded; an inter-frame prediction unit that retrieves a pixel set, from the region of interest of a previous or succeeding frame, having high correlation to each segmented region of a frame to be encoded, calculates a difference between the data of each segmented region and data of the retrieved pixel set, and outputs difference data; and an encoding unit that encodes the difference data. According to the present invention, the foregoing object is also attained by providing a moving image encoding apparatus for encoding a moving image using inter-frame motion prediction, comprising: a segmentation unit that segments each frame into a plurality of segmented regions; a determination unit that determines a region of interest from a frame to be encoded; a transformation unit that performs data transformation for each segmented region to generate transformation coefficients; an inter-frame prediction unit that retrieves transformation coefficients, from transformation coefficients corresponding to the region of interest of a previous or succeeding frame, having high correlation to transformation coefficients of each segmented region of a frame to be encoded, calculates a difference between the transformation coefficients of each segmented region and the retrieved transformation coefficients, and outputs difference data; and an encoding unit that encodes the difference data. Further, the foregoing object is also attained by providing a moving image encoding method for encoding a moving image using inter-frame motion prediction, comprising: segmenting each frame into a plurality of segmented regions; determining a region of interest from a frame to be encoded; retrieving a pixel set, from the region of interest of a previous or succeeding frame, having high correlation to each segmented region of a frame to be encoded, calculating a difference between the data of each segmented region and data of the retrieved pixel set, and outputting difference data; and encoding the difference data. Furthermore, the foregoing object is also attained by providing a moving image encoding method for encoding a moving image using inter-frame motion prediction, comprising: segmenting each frame into a plurality of segmented regions; determining a region of interest from a frame to be encoded; performing data transformation for each segmented region to generate transformation coefficients; retrieving transformation coefficients, from transformation coefficients corresponding to the region of interest of a previous or succeeding frame, having high correlation to transformation coefficients of each segmented region of a frame to be encoded, calculating a difference between the transformation coefficients of each segmented region and the retrieved transformation coefficients, and outputting difference data; and encoding the difference data. Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof.
BRIEF DESCRIPTION OF DRAWINGS The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention. Fig. 1 is a view showing the concept of a moving image to be encoded in an embodiment of the present invention; Fig. 2 is a block diagram showing the arrangement of a moving image processing apparatus according to the embodiment of the present invention; Fig. 3 is a block diagram showing the arrangement of an encoding unit according to a first embodiment of the present invention; Fig. 4 is a flowchart showing the encoding process according to the first embodiment of the present invention; Fig. 5 is an explanatory view of tile segmentation; Fig. 6 is a view showing an example of ROI tiles; Fig. 7 is an explanatory view of linear discrete wavelet transform; Fig. 8A is a view for decomposing data into four subbands, Fig. 8B is a view for further decomposing an LL subband in Fig. 8A into four subbands, and Fig. 8C is a view for further decomposing an LL subband in Fig. 8B into four subbands; Fig. 9 is an explanatory view of quantization steps; Fig. 10 is an explanatory view of code block segmentation; Fig. 11 is an explanatory view of bitplane segmentation; Fig. 12 is an explanatory view of coding passes; Fig. 13 is an explanatory view of layer generation; Fig. 14 is an explanatory view of layer generation; Fig. 15 is an explanatory view of the format of encoded tile data; Fig. 16 is an explanatory view of the format of encoded frame data; Fig. 17 is a view showing the concept of reference data for MC prediction according to the first embodiment of the present invention; Fig. 18 is a view showing the concept of reference data for MC prediction according to a second embodiment of the present invention; Fig. 19 is a block diagram showing the arrangement of an encoding unit according to a third embodiment of the present invention; Fig. 20 is a flowchart showing the encoding process according to the third embodiment of the present invention; Fig. 21A shows an ROI and non-ROI in respective subbands, and Figs. 21B and 21C show changes in quantized coefficient values by shift up; Fig. 22 is a view showing the concept of reference data for MC prediction in the third embodiment of the present invention; Fig. 23 is a block diagram showing an encoding unit based on the JPEG2000 coding scheme; Fig. 24 is a block diagram showing a decoding unit based on the JPEG2000 coding scheme; Fig. 25 is a block diagram showing an encoding unit based on the MPEG coding scheme; and Fig. 26 is a view showing the concept of conventional reference data for MC prediction.
BEST MODE FOR CARRYING OUT THE INVENTION Preferred embodiments of the present invention will be described in detail in accordance with the accompanying drawings. (First Embodiment) As shown in Fig. 1, moving image data to be processed in the present invention is formed of image data and audio data, and the image data is formed of frames indicating information at consecutive moments. Fig. 2 is a block diagram showing the arrangement of a moving image processing apparatus according to the first embodiment. Referring to Fig. 2, reference numeral 200 denotes a CPU; 201, a memory; 202, a terminal; 203, a storage unit; 204, an image sensing unit; 205, a display unit; and 206, an encoding unit. <Processing of Encoding Unit 206> The frame data encoding process of the encoding unit 206 will be described below with reference to the block diagram showing the arrangement of the encoding unit 206 shown in Fig. 3 according to the first embodiment, and the flowchart of Fig. 4 showing the encoding process according to the first embodiment. Note that details such as the header generation method and the like are as described in the ISO/IEC recommendation, and a description thereof will be omitted. In the following description, assume that frame data to be encoded is 8-bit monochrome frame data. However, the present invention is not limited to such a specific frame data format. For example, the present invention can be applied to an image which is expressed by a number of bits other than 8 bits (e.g., 4 bits, 10 bits, or 12 bits per pixel). Further, the present invention can be applied to not only a monochrome image but also a color image (RGB/Lab/YCrCb). Also, the present invention can be applied to multi-valued information which represents the states and the like of each pixel that forms an image. An example of the multi-valued information is a multi-valued index value which represents the color of each pixel. In these applications, each kind of multi-valued information can be considered as monochrome frame data to be described later. Pixel data which form each frame data of an image to be encoded are input from the image sensing unit 204 to a frame data input unit 301 in a raster scan order, and are then output to a tile segmentation unit 302. The tile segmentation unit 302 segments one image input from the frame data input unit 301 into N tiles, as shown in Fig. 5 (step S401), and assigns tile numbers 0, 1, 2,..., N-1 to the N tiles in a raster scan order in the first embodiment so as to identify the respective tiles. Data that represents each tile will be referred to as "tile data" hereinafter. Fig. 5 shows an example in which an image is broken up into 48 tiles (= 8 (horizontal) x 6 (vertical)), but the number of segmented tiles can be changed as needed. These generated tile data are sent in turn to a discrete wavelet transformer 303. In the processes of the discrete wavelet transformer 303 and subsequent units, encoding is done for each tile data. An ROI tile determination unit 317 determines a tile (ROI tile) or tiles of, e.g., an important area or an area of interest, to be encoded with higher image quality than the other tiles (step S402). Fig. 6 shows an example of the determined ROI tiles. Note that the ROI tile determination unit 317 determines, as an ROI tile or tiles, a region which includes a preferred region designated by the user via an input device (not shown). In step S403, a counter used to recognize the tile to be processed is set to i = 0. 
A frame attribute checking unit 316 checks if the frame to be encoded is an I-frame (Intra frame) or a P-frame (Predictive frame) (step S404). If the frame to be encoded is an I-frame, tile data are output to the discrete wavelet transformer 303 without being processed by a subtractor 314. On the other hand, if the frame to be encoded is a P-frame, frame data is copied to a motion compensation (MC) prediction unit 310. [When frame to be encoded is I-frame] When the frame to be encoded is an I-frame, the discrete wavelet transformer 303 computes the discrete wavelet transform using data of a plurality of pixels (reference pixels) (to be referred to as "reference pixel data" hereinafter) in one tile data x(n) in frame data of one frame image, which is input from the tile segmentation unit 302 (step S405). Note that frame data after undergoing the discrete wavelet transform (discrete wavelet transformation coefficients) is given by: Y(2n) = X(2n) + floor{(Y(2n-1)+Y(2n+1)+2)/4} Y(2n+1) = X(2n+1) - floor{(X(2n)+X(2n+2))/2} ...(1) where Y(2n) and Y(2n+1) are discrete wavelet transformation coefficient sequences; Y(2n) indicates a low-frequency subband, and Y(2n+1) indicates a high-frequency subband. Also, floor{X} in transformation formulas (1) indicates the maximum integer which does not exceed X. Fig. 7 illustrates this discrete wavelet transform process. Transformation formulas (1) correspond to one-dimensional data. When two-dimensional transformation is attained by applying this transformation in turn in the horizontal and vertical directions, data can be broken up into four subbands LL, HL, LH, and HH, as shown in Fig. 8A. Note that L indicates a low-frequency subband, H indicates a high-frequency subband, the first letter of each combination of L and H expresses the type of the subband in the horizontal direction, and the second letter expresses the type of the subband in the vertical direction. Then, the LL subband is similarly broken up into four subbands (Fig. 8B), and an LL subband of these subbands is further broken up into four subbands (Fig. 8C). In this way, a total of 10 subbands are formed. The 10 subbands are respectively named HH1, HL1,..., as shown in Fig. 8C. A suffix in each subband name indicates the level of the subband. That is, the subbands of level 1 are HL1, HH1, and LH1, those of level 2 are HL2, HH2, and LH2, and those of level 3 are HL3, HH3, and LH3. Note that the LL subband is a subband of level 0. Since there is only one LL subband, no suffix is appended. A decoded image obtained by decoding subbands from level 0 to level n will be referred to as a decoded image of level n hereinafter. The decoded image has higher resolution with increasing level. The transformation coefficients of the 10 subbands are temporarily stored in a buffer 304, and are output to a coefficient quantizer 305 in the order of LL, HL1, LH1, HH1, HL2, LH2, HH2, HL3, LH3, and HH3, i.e., in turn from a subband of lower level to that of higher level. The coefficient quantizer 305 quantizes the transformation coefficients of the subbands output from the buffer 304 by quantization steps which are determined for respective frequency components (step S406), and outputs the quantized values (quantized coefficient values) to an entropy encoder 306 and an inverse coefficient quantizer 312. Let X be a coefficient value, and q be the quantization step value corresponding to the frequency component to which this coefficient belongs. 
Then, the quantized coefficient value Q(X) is given by: Q(X) = floor{(X/q)+0.5} ...(2) Fig. 9 shows the correspondence between frequency components and quantization steps in this embodiment. As shown in Fig. 9, a larger quantization step is given to a subband of higher level in this embodiment. Note that the quantization steps for the respective subbands are stored in advance in a memory such as a RAM, ROM, or the like (not shown). After all transformation coefficients in one subband are quantized, these quantized coefficient values are output to the entropy encoder 306 and the inverse coefficient quantizer 312. The inverse coefficient quantizer 312 inversely quantizes, using the quantization steps shown in Fig. 9, the quantized coefficient values (step S407) based on: Y = q*Q ...(3) where q is the quantization step, Q is the quantized coefficient value, and Y is the inverse quantized value. An inverse discrete wavelet transformer 313 computes the inverse discrete wavelet transforms of the inverse quantized values (step S408) using: X(2n) = Y(2n) - floor{(Y(2n-1)+Y(2n+1)+2)/4} X(2n+1) = Y(2n+1) + floor{(X(2n)+X(2n+2))/2} ...(4) The obtained decoded pixel is recorded in a frame memory 311 without being processed by an adder 315 (step S409). On the other hand, the entropy encoder 306 entropy-encodes the input quantized coefficient values (step S410). In this process, each subband, as a set of input quantized coefficient values, is segmented into rectangles (to be referred to as "code blocks" hereinafter), as shown in Fig. 10. Note that the code block is set to have a size of 2^m x 2^n (m and n are integers equal to or larger than 2) or the like. Furthermore, the code block is broken up into bitplanes, as shown in Fig. 11. Bits on the respective bitplanes are categorized into three groups on the basis of predetermined categorizing rules to generate three different coding passes as sets of bits of identical types, as shown in Fig. 12. The three different coding passes include a significance propagation pass as a coding pass of insignificant coefficients around which significant coefficients exist, a magnitude refinement pass as a coding pass of significant coefficients, and a cleanup pass as a coding pass of the remaining coefficient information. The input quantized coefficient values undergo binary arithmetic encoding as entropy encoding using the obtained coding passes as units, thereby generating entropy-encoded values. Note that entropy encoding of one code block is done in the order from upper to lower bitplanes, and a given bitplane of that code block is encoded in turn from the upper one of the three different passes shown in Fig. 12. Note that Fig. 12 shows the classification of the coding passes of the fourth bitplane shown in Fig. 11. The entropy-encoded coding passes are output to an encoded tile data generator 307. The encoded tile data generator 307 forms one or a plurality of layers based on the plurality of input coding passes, and generates encoded tile data using these layers as data units (step S411). The format of layers will be described below. The encoded tile data generator 307 forms layers after it collects the entropy-encoded coding passes from the plurality of code blocks in the plurality of subbands, as shown in Fig. 13. Fig. 13 shows a case wherein five layers are to be generated. Upon acquiring coding passes from an arbitrary code block, coding passes are always selected in turn from the uppermost one in that code block, as shown in Fig. 14. 
The entropy-encoded coding passes are output to an encoded tile data generator 307. The encoded tile data generator 307 forms one or a plurality of layers from the plurality of input coding passes, and generates encoded tile data using these layers as data units (step S411). The format of the layers will be described below. The encoded tile data generator 307 forms layers after it collects the entropy-encoded coding passes from the plurality of code blocks in the plurality of subbands, as shown in Fig. 13. Fig. 13 shows a case wherein five layers are to be generated. Upon acquiring coding passes from an arbitrary code block, the coding passes are always selected in turn from the uppermost one in that code block, as shown in Fig. 14. After that, the encoded tile data generator 307 arranges the generated layers in turn from the upper one, and appends a tile header to the head of these layers, thus generating encoded tile data, as shown in Fig. 15. This header carries information used to identify the tile, the code length of the encoded tile data, various parameters used in compression, and the like. The encoded tile data generated in this way is output to an encoded frame data generator 308.

Whether or not tile data to be encoded still remain is determined in step S412 by comparing the value of counter i with the number of tiles. If tile data to be encoded still remain (i.e., i < N-1), counter i is incremented by 1 in step S413, and the flow returns to step S405 to repeat the processes up to step S412 for the next tile. If no tile data to be encoded remain (i.e., i = N-1), the flow advances to step S426. The encoded frame data generator 308 arranges the encoded tile data shown in Fig. 15 in a predetermined order (e.g., ascending order of tile number), as shown in Fig. 16, and appends a header to the head of these encoded tile data, thus generating encoded frame data (step S426). This header carries information such as the vertical x horizontal sizes of the input image and of each tile, various parameters used in compression, and the like. The encoded frame data generated in this way is output from an encoded frame data output unit 309 to the storage unit 203 shown in Fig. 2. In the above description, the processes in steps S407 to S409 are done prior to those in steps S410 and S411. However, these processes may be done in the reverse order or in parallel.

[When frame to be encoded is P-frame]

The processing to be executed when the frame to be encoded is a P-frame will be explained below. In this case, as described above, the tile segmentation unit 302 copies the frame data to the MC prediction unit 310, which performs MC prediction between the frame (previous frame) recorded in the frame memory 311 and the frame to be encoded (step S414). Note that the reference data for MC prediction is limited to the ROI tile or tiles of the previous frame, as shown in Fig. 17 (see the sketch after this passage). This is to avoid the image quality drop of non-ROI tiles due to accumulation of data discarded in the encoded tile data generator. A subtractor 314 calculates the difference between the previous frame and the frame to be encoded on the basis of the prediction result (step S415). The subtraction result (difference data) obtained by the subtractor 314 undergoes discrete wavelet transform (step S416), quantization (step S417), inverse quantization (step S418), inverse discrete wavelet transform (step S419), entropy encoding (step S422), encoded tile data generation (step S423), the tile number check (step S424), and encoded frame data generation (step S426), in the same manner as in the processes for the I-frame. Unlike in the I-frame processes, a process for calculating the sum of the difference data and the previous frame by the adder 315 to reconstruct the frame to be encoded (step S420), and a process for recording the obtained decoded frame in the frame memory 311 (step S421), are added. In step S414 above, MC prediction is made using the decoded frame recorded by this process. The processes in steps S414 to S423 are repeated, via the process for incrementing counter i by one in step S425, until it is determined in step S424 that no tile data to be encoded remain.
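The following sketch illustrates the ROI-limited prediction of steps S414 and S415. It is a deliberately simplified model under assumed names (mc_predict_roi, whole-tile SAD matching); an actual MC prediction unit searches sub-tile blocks and produces motion vectors, but the restriction the loop expresses — only ROI tiles of the previous decoded frame may serve as reference data (Fig. 17) — is the point of the first embodiment.

    import numpy as np

    def mc_predict_roi(tile, prev_tiles, roi_indices):
        # Steps S414-S415, simplified: search only the ROI tiles of the
        # previous decoded frame (Fig. 17) for the best predictor, then
        # return the difference data that the subtractor 314 would output.
        best_idx, best_sad = None, None
        for idx in roi_indices:       # non-ROI tiles are never used as reference
            sad = int(np.abs(tile.astype(np.int64)
                             - prev_tiles[idx].astype(np.int64)).sum())
            if best_sad is None or sad < best_sad:
                best_idx, best_sad = idx, sad
        diff = tile.astype(np.int64) - prev_tiles[best_idx].astype(np.int64)
        return best_idx, diff

Restricting the search to the previous frame's ROI tiles is what keeps data degraded by discarding in non-ROI tiles from propagating into P-frames.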
Note that the data unit used in prediction may be, inter alia, a tile, a block obtained by further segmenting a tile, or the like. Further, although an ROI tile or tiles of the previous frame is used as the reference data for MC prediction in the above explanation, an ROI tile or tiles of any frame may be used as long as it can be used for MC prediction. In the description of Fig. 4, the processes in steps S418 to S421 are executed prior to those in steps S422 and S423. However, these processes may be done in the reverse order or in parallel.

As described above, according to the first embodiment, since only the ROI tile or tiles of the previous frame is set as the reference data for MC prediction, the image quality drop of P-frames due to accumulation of data discarded in the encoded tile data generator can be avoided.

(Second Embodiment)

The first embodiment has explained the method of avoiding the image quality drop of P-frames, due to accumulation of data discarded in the encoded tile data generator, by limiting the reference data for prediction to the ROI tile or tiles. In general, the user sets a given object as an ROI, and a tile or tiles including that object is determined as an ROI tile or tiles. Neighboring frames therefore have similar pixel distributions and characteristics in their ROI tiles, so prediction between neighboring ROI tiles can realize high encoding efficiency. However, prediction between ROI and non-ROI tiles often cannot realize high encoding efficiency, and when it cannot, the MC prediction process is wasted. Hence, in the second embodiment, MC prediction is done only between ROI tiles.

Note that the second embodiment is substantially the same as the first embodiment, except for the process in step S415 in the encoding processing shown in Fig. 4. Therefore, only the difference will be explained below. Fig. 18 shows the process of the MC prediction unit 310, which is executed in step S415 in the second embodiment. As shown in Fig. 18, MC prediction is executed only between ROI tiles, and the prediction of non-ROI tiles is skipped (see the sketch below). As described above, according to the second embodiment, since MC prediction is executed only between ROI tiles, the image quality drop of P-frames can be avoided while wasteful operations are skipped.
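Continuing the sketch above and reusing its mc_predict_roi (the names remain ours, not the apparatus's), the skip of Fig. 18 could be expressed as:

    def mc_predict_second_embodiment(tiles, prev_tiles, roi_indices):
        # Step S415 per Fig. 18, simplified: MC prediction runs only when
        # the tile to be encoded is itself an ROI tile; non-ROI tiles are
        # skipped and would be encoded without inter-frame prediction.
        results = {}
        for idx, tile in enumerate(tiles):
            if idx not in roi_indices:
                continue              # non-ROI tile: MC prediction skipped
            results[idx] = mc_predict_roi(tile, prev_tiles, roi_indices)
        return results

The operations saved are exactly the wasted searches described above: no time is spent trying to predict a non-ROI tile from ROI reference data.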
(Third Embodiment)

In the third embodiment, an ROI region is set on the discrete wavelet transformation coefficient space instead of being set in units of tiles. By limiting the reference data for prediction to ROI coefficients, the image quality drop of P-frames is avoided.

Fig. 19 is a block diagram of the encoding unit 206 according to the third embodiment. Assume that the moving image processing apparatus has the same arrangement as that shown in Fig. 2. Compared to the block diagram of the encoding unit 206 in the first embodiment, the ROI tile determination unit 317 is replaced by an ROI determination unit 417 in the arrangement shown in Fig. 19. The difference lies in that the ROI tile determination unit 317 determines a region in units of tiles, whereas the ROI determination unit 417 determines a region in units of pixels. For example, the former determines a tile or tiles including a region extracted by an object extraction unit (not shown) as an ROI tile or tiles, while the latter determines the extracted region itself as the ROI region in units of pixels. Further differences are that the position of the subtractor 314 is changed, since the data which is to undergo prediction changes from pixels to discrete wavelet transformation coefficients; that an ROI unit 418 and an inverse ROI unit 419 are added; and that the need for the inverse discrete wavelet transformer 313 is obviated.

Fig. 21A shows the ROI and non-ROI in the respective subbands, and Figs. 21B and 21C are conceptual views showing the changes in quantized coefficient values due to shift-up. Three quantized coefficient values exist for each of the three subbands in Fig. 21B, and the hatched quantized coefficient values are those configuring the ROI. The values are changed to those shown in Fig. 21C after the shift-up process. The inverse ROI unit 419 converts the coefficients from the state of Fig. 21C back to that of Fig. 21B.

Fig. 20 is a flowchart showing the encoding process of the third embodiment. The same reference numbers denote the same processes as in the flowchart of Fig. 4, and a description thereof will be omitted.

[When frame to be encoded is I-frame]

In the third embodiment, when the frame to be encoded is an I-frame, after the transformation coefficients computed by the discrete wavelet transformer 303 are quantized (step S406), the ROI unit 418 changes each quantized coefficient value (step S506), depending on whether or not the value belongs to the ROI, on the basis of:

Q" = Q * 2^B (Q: the absolute value of a quantized coefficient value obtained from a pixel in the ROI)
Q' = Q (Q: the absolute value of any other quantized coefficient value) ... (5)

where B is given for each subband. In the subband of interest, B is chosen so that each Q" is larger than every Q': the bit shift-up process is done so that the bits which form a shifted-up quantized coefficient value Q" never exist at the same digit positions as the bits which form a quantized coefficient value Q'. With the above process, only the quantized coefficient values associated with the ROI are shifted to higher bits, by B bits. The inverse ROI unit 419 executes a process for shifting back down the ROI values whose bits were shifted up by the ROI unit 418 (step S507); a sketch of this shift-up/shift-down pair is given at the end of this embodiment's description.

[When frame to be encoded is P-frame]

When the frame to be encoded is a P-frame, in the third embodiment, the discrete wavelet transformer 303 performs the discrete wavelet transform (step S514). After that, the MC prediction unit 310 performs MC prediction on the discrete wavelet transformation coefficient space (step S515). Note that the MC prediction unit 310 limits the reference data for prediction to only the DWT coefficients associated with the ROI, as shown in Fig. 22. The subtractor 314 calculates the difference (difference data) between the previous frame and the frame to be encoded on the basis of the prediction result (step S516). The coefficient quantizer 305 quantizes this difference data (step S417). After that, the ROI unit 418 changes the quantized coefficient values of the difference data, depending on whether or not each value belongs to the ROI, using formulas (5) above (step S517). The inverse ROI unit 419 executes a process for shifting back down the ROI values whose bits were shifted up by the ROI unit 418 (step S518).

As described above, according to the third embodiment, MC prediction is executed using only coefficients associated with the ROI, thus avoiding the image quality drop of P-frames.
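To make the shift-up of formulas (5) and its inversion concrete, here is a minimal sketch. It assumes integer quantized coefficients, a boolean ROI mask, and a single subband whose B is derived so that every shifted ROI magnitude exceeds every non-ROI magnitude (the change from Fig. 21B to Fig. 21C); the embodiment itself assigns B per subband, and the function names are ours.

    import numpy as np

    def roi_shift_up(q, roi_mask):
        # Formulas (5): ROI magnitudes become Q" = Q * 2^B, others stay Q' = Q.
        # B is chosen so ROI bits never share digit positions with non-ROI bits.
        mags = np.abs(q)
        non_roi_max = int(mags[~roi_mask].max()) if (~roi_mask).any() else 0
        B = non_roi_max.bit_length()   # clears every bitplane used by non-ROI values
        shifted = np.where(roi_mask, mags << B, mags)
        return np.sign(q) * shifted, B

    def roi_shift_down(s, roi_mask, B):
        # Inverse ROI unit 419: undo the shift before inverse quantization.
        mags = np.abs(s)
        return np.sign(s) * np.where(roi_mask, mags >> B, mags)

    q = np.array([3, -1, 9, 2])
    roi = np.array([False, False, True, True])
    s, B = roi_shift_up(q, roi)        # B = 2; ROI magnitudes 9, 2 -> 36, 8
    assert np.array_equal(roi_shift_down(s, roi, B), q)

Because the shifted ROI values occupy strictly higher bitplanes, any truncation of lower bitplanes affects non-ROI data first, which is what protects the region of interest.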
<Other Embodiments>

In the first to third embodiments, the inventions have been explained using the discrete wavelet transform. The scope of the present invention also includes embodiments that adopt the discrete cosine transformation.

The present invention may be applied to either a part of a system constituted by a plurality of devices (e.g., a host computer, interface device, reader, printer, and the like), or a part of an apparatus comprising a single device (e.g., a copying machine, digital camera, or the like). Furthermore, the invention can be implemented by supplying a software program, which implements the functions of the foregoing embodiments, directly or indirectly to a system or apparatus, reading the supplied program code with a computer of the system or apparatus, and then executing the program code. In this case, so long as the system or apparatus has the functions of the program, the mode of implementation need not rely upon a program. Accordingly, since the functions of the present invention are implemented by computer, the program code installed in the computer also implements the present invention. In other words, the claims of the present invention also cover a computer program for the purpose of implementing the functions of the present invention. In this case, so long as the system or apparatus has the functions of the program, the program may be executed in any form, such as an object code, a program executed by an interpreter, or script data supplied to an operating system. Examples of storage media that can be used for supplying the program are a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a CD-ROM, a CD-R, a CD-RW, a magnetic tape, a non-volatile memory card, a ROM, and a DVD (DVD-ROM and DVD-R). As for the method of supplying the program, a client computer can be connected to a website on the Internet using a browser of the client computer, and the computer program of the present invention or an automatically installable compressed file of the program can be downloaded to a recording medium such as a hard disk. Further, the program of the present invention can be supplied by dividing the program code constituting the program into a plurality of files and downloading the files from different websites. In other words, a WWW (World Wide Web) server that downloads, to multiple users, the program files that implement the functions of the present invention by computer is also covered by the claims of the present invention. It is also possible to encrypt and store the program of the present invention on a storage medium such as a CD-ROM, distribute the storage medium to users, allow users who meet certain requirements to download decryption key information from a website via the Internet, and allow these users to decrypt the encrypted program by using the key information, whereby the program is installed in the user's computer. Besides the cases where the aforementioned functions according to the embodiments are implemented by executing the read program by computer, an operating system or the like running on the computer may perform all or a part of the actual processing so that the functions of the foregoing embodiments can be implemented by this processing. Furthermore, after the program read from the storage medium is written to a function expansion board inserted into the computer or to a memory provided in a function expansion unit connected to the computer, a CPU or the like mounted on the function expansion board or function expansion unit may perform all or a part of the actual processing so that the functions of the foregoing embodiments can be implemented by this processing.
As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims.
CLAIM OF PRIORITY

This application claims priority from Japanese Patent Application No. 2004-190305 filed on June 28, 2004, which is hereby incorporated herein by reference.

Claims

CLAIMS 1. A moving image encoding apparatus for encoding a moving image using inter-frame motion prediction, comprising: a segmentation unit that segments each frame into a plurality of segmented regions; a determination unit that determines a region of interest from a frame to be encoded; an inter-frame prediction unit that retrieves a pixel set, from the region of interest of a previous or succeeding frame, having high correlation to each segmented region of a frame to be encoded, calculates a difference between the data of each segmented region and data of the retrieved pixel set, and outputs difference data; and an encoding unit that encodes the difference data. 2. The apparatus according to claim 1, wherein said encoding unit preferentially discards data from a region other than the region of interest so as to adjust a code size. 3. The apparatus according to claim 1 or 2 further comprising a checking unit that checks if the frame to be encoded is a frame which is to undergo intra-frame encoding or a frame which is to undergo inter-frame encoding, wherein, when said checking unit determines that the frame to be encoded is the frame which is to undergo intra-frame encoding, a process by said inter-frame prediction unit is skipped, and said encoding unit encodes data of each segmented region of the frame to be encoded. 4. The apparatus according to claim 1 or 2, wherein said inter-frame prediction unit executes a process for only the region of interest determined by said determination unit of the segmented regions of the frame to be encoded. 5. The apparatus according to claim 1 or 2, wherein said encoding unit performs discrete wavelet transform. 6. The apparatus according to claim 5, wherein said encoding unit performs encoding by a JPEG2000 encoding scheme. 7. The apparatus according to claim 1 or 2, wherein said encoding unit performs discrete cosine transformation. 8. A moving image encoding apparatus for encoding a moving image using inter-frame motion prediction, comprising: a segmentation unit that segments each frame into a plurality of segmented regions; a determination unit that determines a region of interest from a frame to be encoded; a transformation unit that performs data transformation for each segmented region to generate transformation coefficients; an inter-frame prediction unit that retrieves transformation coefficients, from transformation coefficients corresponding to the region of interest of a previous or succeeding frame, having high correlation to transformation coefficients of each segmented region of a frame to be encoded, calculates a difference between the transformation coefficients of each segmented region and the retrieved transformation coefficients, and outputs difference data; and an encoding unit that encodes the difference data. 9. The apparatus according to claim 8, wherein said encoding unit preferentially discards data from a region other than the region of interest so as to adjust a code size. 10. The apparatus according to claim 8 or 9 further comprising a checking unit that checks if the frame to be encoded is a frame which is to undergo intra-frame encoding or a frame which is to undergo inter-frame encoding, wherein, when said checking unit determines that the frame to be encoded is the frame which is to undergo intra-frame encoding, a process by said inter-frame prediction unit is skipped, and said encoding unit encodes transformation coefficients of each segmented region of the frame to be encoded. 11. 
The apparatus according to claim 8 or 9, wherein said inter-frame prediction unit executes a process for only transformation coefficients of the region of interest determined by said determination unit of the segmented regions of the frame to be encoded. 12. The apparatus according to claim 8 or 9, wherein said transformation unit performs discrete wavelet transform. 13. The apparatus according to claim 8 or 9, wherein said transformation unit performs discrete cosine transformation. 14. A moving image encoding method for encoding a moving image using inter-frame motion prediction, comprising: segmenting each frame into a plurality of segmented regions; determining a region of interest from a frame to be encoded; retrieving a pixel set, from the region of interest of a previous or succeeding frame, having high correlation to each segmented region of a frame to be encoded, calculating a difference between the data of each segmented region and data of the retrieved pixel set, and outputting difference data; and encoding the difference data. 15. A moving image encoding method for encoding a moving image using inter-frame motion prediction, comprising: segmenting each frame into a plurality of segmented regions; determining a region of interest from a frame to be encoded; performing data transformation for each segmented region to generate transformation coefficients; retrieving transformation coefficients, from transformation coefficients corresponding to the region of interest of a previous or succeeding frame, having high correlation to transformation coefficients of each segmented region of a frame to be encoded, calculating a difference between the transformation coefficients of each segmented region and the retrieved transformation coefficients, and outputting difference data; and encoding the difference data. 16. A program executable by an information processing apparatus, characterized by having a program for implementing a moving image encoding method of claim 14 or 15. 17. A storage medium readable by an information processing apparatus, characterized by storing a program of claim 16.
PCT/JP2005/012008 2004-06-28 2005-06-23 Moving image encoding apparatus and moving image encoding method WO2006001490A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/571,187 US20080089413A1 (en) 2004-06-28 2005-06-23 Moving Image Encoding Apparatus And Moving Image Encoding Method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004-190305 2004-06-28
JP2004190305A JP2006014086A (en) 2004-06-28 2004-06-28 Moving image encoding apparatus and moving image encoding method

Publications (1)

Publication Number Publication Date
WO2006001490A1

Family

ID=34971519

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2005/012008 WO2006001490A1 (en) 2004-06-28 2005-06-23 Moving image encoding apparatus and moving image encoding method

Country Status (2)

Country Link
JP (1) JP2006014086A (en)
WO (1) WO2006001490A1 (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
HUE045215T2 (en) * 2013-03-21 2019-12-30 Sony Corp Device and method for decoding image
CN105075271A (en) 2013-04-08 2015-11-18 索尼公司 Region of interest scalability with SHVC
JP6331882B2 (en) 2014-08-28 2018-05-30 ソニー株式会社 Transmitting apparatus, transmitting method, receiving apparatus, and receiving method
JP6652153B2 (en) * 2018-04-26 2020-02-19 ソニー株式会社 Transmitting device, transmitting method, receiving device and receiving method
JP2020074571A (en) * 2020-01-16 2020-05-14 ソニー株式会社 Transmitting device, transmitting method, receiving device, and receiving method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0650298A1 (en) * 1993-03-25 1995-04-26 Sony Corporation Method for coding or decoding time-varying image, and apparatuses for coding/decoding
US20020001346A1 (en) * 1993-03-25 2002-01-03 Motoki Kato Moving picture coding method, moving picture decoding method, and apparatus therefor
EP0912063A2 (en) * 1997-10-24 1999-04-28 Matsushita Electric Industrial Co., Ltd. A method for computational graceful degradation in an audiovisual compression system
EP1061749A1 (en) * 1998-01-27 2000-12-20 Sharp Kabushiki Kaisha Moving picture coder and moving picture decoder
US6498816B1 (en) * 1999-09-03 2002-12-24 Equator Technologies, Inc. Circuit and method for formatting each of a series of encoded video images into respective regions
JP2004166124A (en) * 2002-11-15 2004-06-10 Ricoh Co Ltd Image processor, program, storage medium, and image processing method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WONG, CHI-WAH; KWOK, YU-KWONG: "On a Region of Interest Based Approach to Robust Wireless Video Transmission", PROCEEDINGS OF 7TH INTERNATIONAL SYMPOSIUM ON PARALLEL ARCHITECTURES, ALGORITHMS AND NETWORKS, 10 May 2004 (2004-05-10) - 12 May 2004 (2004-05-12), pages 385 - 390, XP002343807 *
PATENT ABSTRACTS OF JAPAN vol. 2003, no. 12 5 December 2003 (2003-12-05) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8948822B2 (en) 2008-04-23 2015-02-03 Qualcomm Incorporated Coordinating power management functions in a multi-media device
WO2009158428A1 (en) * 2008-06-25 2009-12-30 Qualcomm Incorporated Fragmented reference in temporal compression for video coding
RU2485712C2 (en) * 2008-06-25 2013-06-20 Квэлкомм Инкорпорейтед Fragmented link in time compression for video coding
US8908763B2 (en) 2008-06-25 2014-12-09 Qualcomm Incorporated Fragmented reference in temporal compression for video coding
US8948270B2 (en) 2008-08-19 2015-02-03 Qualcomm Incorporated Power and computational load management techniques in video processing
US8964828B2 (en) 2008-08-19 2015-02-24 Qualcomm Incorporated Power and computational load management techniques in video processing
US9462326B2 (en) 2008-08-19 2016-10-04 Qualcomm Incorporated Power and computational load management techniques in video processing
US9565467B2 (en) 2008-08-19 2017-02-07 Qualcomm Incorporated Power and computational load management techniques in video processing
CN103997785A (en) * 2014-05-22 2014-08-20 无锡爱维特信息技术有限公司 Data retrieval method for target object motion prediction based on base station information window
CN103997785B (en) * 2014-05-22 2017-03-01 无锡爱维特信息技术有限公司 Target object motion prediction data retrieval method based on base station information window
CN114363548A (en) * 2022-01-10 2022-04-15 浙江齐安信息科技有限公司 Method and system for recording screen video of electronic equipment
CN114363548B (en) * 2022-01-10 2024-01-30 浙江齐安信息科技有限公司 Screen video recording method and system for electronic equipment

Also Published As

Publication number Publication date
JP2006014086A (en) 2006-01-12

Similar Documents

Publication Publication Date Title
US20080089413A1 (en) Moving Image Encoding Apparatus And Moving Image Encoding Method
WO2006001490A1 (en) Moving image encoding apparatus and moving image encoding method
Marcellin et al. An overview of JPEG-2000
JP4480119B2 (en) Image processing apparatus and image processing method
JP4702928B2 (en) Moving picture encoding apparatus and decoding apparatus, control method therefor, computer program, and computer-readable storage medium
TWI436286B (en) Method and apparatus for decoding image
JP3970521B2 (en) Embedded quadtree wavelet in image compression
JP5258664B2 (en) Image coding apparatus, method and program, and image decoding apparatus, method and program
US7440624B2 (en) Image compression apparatus, image decompression apparatus, image compression method, image decompression method, program, and recording medium
WO2004015998A1 (en) System and method for rate-distortion optimized data partitioning for video coding using backward adaptation
US20060269151A1 (en) Encoding method and encoding apparatus
US7406202B2 (en) Image processing apparatus, image compression apparatus, image processing method, image compression method, program, and recording medium
JP2006523991A (en) System and method for performing data division with rate distortion optimized for video coding using parametric rate distortion model
JP2004242290A (en) Image processing apparatus and image processing method, image edit processing system, image processing program, and storage medium
JP2005519543A (en) Method and system for layer video coding
US7409095B2 (en) Image processing apparatus and method for scalable encoded image data
JP2007005844A (en) Coding processor, coding processing method, program and information recording medium
JP2004254300A (en) Image processing apparatus, program and storage medium
US20040057514A1 (en) Image processing apparatus and method thereof
US20060056714A1 (en) Image process device, image processing program, and recording medium
US8081093B2 (en) Code transforming apparatus and code transforming method
JP2006295561A (en) Coding processing apparatus, coding processing method, program, and recording medium
JP4054430B2 (en) Image processing apparatus and method, and storage medium
Robert et al. Impact of content mastering on the throughput of a bit stream video watermarking system
JP2004214740A (en) Moving picture encoder

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 11571187

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase
WWP Wipo information: published in national office

Ref document number: 11571187

Country of ref document: US