WO2010125606A1 - Method and system for image data compression - Google Patents

Method and system for image data compression

Info

Publication number
WO2010125606A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
blocks
flag
compression
parameters
Prior art date
Application number
PCT/JP2009/001943
Other languages
English (en)
Inventor
Greg Ellis
Noboru Takeda
Original Assignee
Aspa-Japan Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Aspa-Japan Co., Ltd. filed Critical Aspa-Japan Co., Ltd.
Priority to PCT/JP2009/001943 priority Critical patent/WO2010125606A1/fr
Publication of WO2010125606A1 publication Critical patent/WO2010125606A1/fr


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/105Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/11Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the present invention relates generally to video processing and, more particularly, to a method and/or apparatus for transcoding video files and their usage.
  • When image data, including video streaming data, are distributed by means of the Internet, these data are generally compressed first; it is difficult or impossible to distribute such data uncompressed because of the huge data volume intrinsic to them. Video streaming needs to transmit tens of still images per second, resulting in a total amount of information far greater than a single text file or still image.
  • High-quality imaging is also vital in the field of video recording.
  • In television broadcasting, for example, high-resolution digital technology that uses twice the number of scanning lines of conventional broadcasting is becoming prevalent.
  • H.264 is a standard for video compression, and is equivalent to MPEG-4 Part 10, also known as MPEG-4 AVC (Advanced Video Coding).
  • H.264/MPEG-4 AVC is a block-oriented motion-compensation-based codec standard developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG), and it was the product of a partnership effort known as the Joint Video Team (JVT).
  • the ITU-T H.264 standard and the ISO/IEC MPEG-4 Part 10 standard (formally, ISO/IEC 14496-10) are jointly maintained so that they have identical technical content.
  • the final drafting work on the first version of the standard was completed in May 2003.
  • the intent of the H.264 project was to create a standard capable of providing good video quality at substantially lower bit rates than previous standards (e.g. half or less the bit rate of MPEG-2, H.263, or MPEG-4 Part 2), without increasing the complexity of design so much that it would be impractical or excessively expensive to implement.
  • An additional goal was to provide enough flexibility to allow the standard to be applied to a wide variety of applications on a wide variety of networks and systems, including low and high bit rates, low and high resolution video, broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems.
  • The Fidelity Range Extensions project also includes several other features, such as adaptive switching between 4x4 and 8x8 integer transforms, encoder-specified perceptual-based quantization weighting matrices, efficient inter-picture lossless coding, and support for additional color spaces.
  • the design work on the Fidelity Range Extensions was completed in July 2004, and the drafting work on them was completed in September 2004.
  • the H.264 name follows the ITU-T naming convention, where the standard is a member of the H.26x line of VCEG video coding standards; the MPEG-4 AVC name relates to the naming convention in ISO/IEC MPEG, where the standard is part 10 of ISO/IEC 14496, which is the suite of standards known as MPEG-4.
  • the standard was developed jointly in a partnership of VCEG and MPEG, after earlier development work in the ITU-T as a VCEG project called H.26L.
  • the invention addresses the challenge in the marketplace for a compression system designed to meet a variety of industry needs with many profiles and levels, allowing for varying compression, quality and CPU usage levels, where the lowest level is for portable devices, designed with low CPU usage in mind, while the high levels are designed with very high quality and compression efficiency in mind.
  • the patent reviews the compression problems that arise when support is desired for studio archiving requirements with 4:4:4 color space, a separate black and white (BW) video mode, as well as profiles for low CPU usage, and for increasing the quality of video depending on the bit rate desired for the end output of said video file.
  • BW black and white
  • the purpose of the technology detailed herein is to provide a video data compression method for webcasting which enables the user to broadcast superior images with less distortion.
  • the present invention includes the following:
  • a method for compressing image data, comprising: splitting input image data into plural packets; providing each of the obtained packets with a flag, the flag being capable of instructing said packet to be a unit of compression and decompression; and applying image compression to each of the aforementioned packets provided with the flag, the image compression comprising a transformation from image data into frequency components.
  • a compression system for image data, comprising: a splitter which splits input image data into plural packets; a packet providing means which provides each of the obtained packets with a flag, the flag being capable of instructing said packet to be a unit of compression and decompression; and a data compressor which applies image compression to each of the aforementioned packets provided with the flag, the image compression comprising a transformation from image data into frequency components.
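As an illustrative sketch of the splitting and flag-provision steps described above — the names `Packet` and `split_into_packets` and the fixed packet size are our own, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    flag: bool      # marks this packet as its own unit of compression/decompression
    payload: bytes  # the slice of input image data carried by this packet

def split_into_packets(data: bytes, packet_size: int) -> list:
    """Splitter + flag-providing step: divide input image data into
    plural packets and give each one a flag."""
    return [Packet(flag=True, payload=data[i:i + packet_size])
            for i in range(0, len(data), packet_size)]

packets = split_into_packets(b"\x00" * 10, packet_size=4)
# 10 bytes at 4 bytes per packet -> 3 packets (payload sizes 4, 4, 2)
```

The flag is what later lets each packet be compressed and decompressed independently, rather than as part of one whole file.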
  • the system of (8) further comprising a data transmitter which sends the compressed data to the Internet or an intranet.
  • the system of (8) or (9), wherein the transformation from image data into frequency components is a discrete cosine transform (DCT).
  • DCT discrete cosine transform
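As an illustration of the transformation of image data into frequency components, here is a minimal, unoptimized 2-D DCT-II sketch in pure Python; the function names are ours, and a real encoder would use a fast, fixed-point transform:

```python
import math

def dct_1d(xs):
    """Orthonormal 1-D DCT-II of a sequence of samples."""
    n = len(xs)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(xs))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def dct_2d(block):
    """2-D DCT-II: apply the 1-D transform to rows, then to columns."""
    rows = [dct_1d(row) for row in block]
    cols = [dct_1d(col) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]

flat = [[128.0] * 8 for _ in range(8)]   # a constant (flat) 8x8 image block
coeffs = dct_2d(flat)
# All the energy of a constant block lands in the DC term:
# coeffs[0][0] == 1024.0, every other coefficient is ~0
```

This concentration of energy into few coefficients is what makes frequency-domain quantization effective for compression.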
  • a streaming distribution apparatus comprising the system of any of (8) to (10).
  • (13) A method for calculating a flag parameter in digital image decoding comprises: receiving a macroblock comprising MxN blocks; performing a first operation on M blocks along a first edge in a first direction to obtain corresponding M first parameters, then storing those first parameters into corresponding locations in a first buffer array; and performing a second operation on N blocks along a second edge in a second direction to obtain corresponding N second parameters, and storing the second parameters into corresponding locations in a second buffer array.
  • a system for calculating a flag parameter in a digital image decoding comprising: a buffer configured with a plurality of arrays for storing flag parameters of a plurality of blocks within a macroblock into corresponding locations; and a computation unit coupled to the buffer and configured to perform a plurality of operations to obtain offset values corresponding to the plurality of blocks based on the flag parameters stored in the buffer.
  • a method for updating flag parameters in CABAC processing comprises: providing two buffer arrays having locations corresponding to a plurality of blocks respectively; storing first parameters corresponding to a first plurality of blocks along the left-most edge of the macroblock in the first buffer array; storing second parameters corresponding to a second plurality of blocks along the top-most edge of the macroblock in the second buffer array; and updating the first and second buffer arrays with flag parameters corresponding to the plurality of blocks obtained by an operation performed on each of the plurality of blocks within the macroblock in a specific order; wherein the specific order is arranged so that the neighboring left and upper blocks next to a given block are processed prior to the given block.
  • Fig. 1 is a schematic drawing of the present invention.
  • Fig. 2 is a schematic drawing of the present invention.
  • Fig. 3 is a diagram showing the placement of the CABAC and CAVLC components.
  • Fig. 4 is a diagram of one embodiment of the present invention.
  • Fig. 5 is a diagram showing the Header provision step.
  • Fig. 6 shows a luminance macroblock.
  • Fig. 7 shows a prediction block.
  • Fig. 8 is a schematic drawing of the direction of prediction in each mode.
  • Fig. 9 shows the prediction block P created by each of the predictions.
  • Fig. 10 is a schematic drawing of luma prediction modes.
  • Fig. 11 is a schematic drawing of the present invention.
  • Fig. 12 is a diagram showing the present process.
  • Fig. 13 is a sample diagram of the data division steps.
  • Fig. 14 is a schematic drawing of a hardware device of the present invention.
  • Fig. 15 is a diagram showing how the encoder searches for a segment of a subsequent frame.
  • Fig. 16 is a diagram of one embodiment of the present invention.
  • Fig. 17 is a diagram that represents the Learning-based Approach for video cut detection.
  • Fig. 18 is a diagram showing how the encoder can look both forwards and backwards for redundant picture information.
  • Fig. 19 is a diagram showing the relevant placement of parameters.
  • Fig. 20 represents the low-pass filtering of the predictor to improve prediction.
  • Fig. 21 is a diagram showing the placement of the quantization matrices components.
  • Fig. 22 is a diagram representing the lossless mode utilized in the present system.
  • Fig. 23 is a diagram of the present process.
  • the structural compression of this technology for webcasting compresses each packet rather than the whole file.
  • Compressing an image calls for a compression algorithm.
  • the reason the structural compression causes less image distortion is that it uses a Fourier-transformation algorithm.
  • a Fourier transformation decomposes a function into a continuous spectrum of frequency components, and the continuous spectrum can be assembled back into the original function as a numeric integration from minus infinity to infinity.
  • Figure 1 illustrates the phenomena above.
  • 10 is the image before compression
  • 21 and 22 are the compressed images
  • 31 is compression of whole file
  • 32 is compression of each packet.
  • the structural compression works effectively for restoring (decompressing) the data sent by webcasting.
  • In the video streaming process, the video data webcast from the server are decompressed and then played by a media player or shown in a web browser along with HTML files related to the website.
  • parallel processing in which multiple videos need to be processed becomes highly complicated.
  • Suppose the parallel processing of a total of 4 tasks, Task A to Task D, progresses sequentially from Step 1.
  • If Task A is video data processing and it takes a long time to process Steps 1, 5, and 9, the other tasks are kept on hold.
  • a waiting time is usually assigned with a specific time-out duration.
  • If Tasks A, B and C are all video data processing, the waiting time becomes very long and the ongoing computer processing may be aborted due to the time-out. This causes a PC to freeze up during video data processing.
  • this technology allows a PC to process multiple movie windows on a single webpage simultaneously.
  • the structural compression and decompression achieve "lighter" webcasting compared to file compression and decompression. This "lightness" enables quick webpage downloading and viewing even if the top page of the website has movie segments attached.
  • Creating a top page with a video file has been avoided because of the "weight" of a file compressed with conventional file compression technology; a file compressed with the new technology does not burden such a page.
  • a flag parameter in a digital image decoding is calculated.
  • a first operation is performed on M blocks along a first edge to obtain M first parameters
  • a second operation is performed on N blocks along a second edge to obtain N second parameters.
  • the first and second parameters are stored into corresponding locations in a first and a second buffer array.
  • a flag parameter corresponding to a given block is calculated according to corresponding values stored in the first and second buffer arrays. Calculation for all of the MxN blocks is performed in an order such that the neighboring left and upper blocks next to a given block are processed prior to the given block.
  • This process employs a method for calculating a flag parameter, and more particularly to a method and a system for calculating a flag parameter in a decoding process of a digital image.
  • the idea of video compression is to omit certain data of an image that are imperceptible to human eyes, i.e. the so-called visual redundancy.
  • the data to be omitted are generally similar to other data in space or time dimension, and thus can be removed according to a compression algorithm.
  • an image frame is divided into a plurality of rectangular areas called macroblocks (MB).
  • the macroblocks are then encoded.
  • intra-frame prediction and inter-frame prediction techniques are used to remove the similarities between images so as to obtain the residual differences.
  • the residual differences are spatially transformed and quantized to remove the visual redundancy.
  • CABAC context adaptive binary arithmetic coding
  • the decoding operation of the decoder is based on a context ID which includes a base portion and an offset portion.
  • the base portion can be obtained from a lookup table, but the offset portion is calculated from the coded_block_flag. Therefore, a key point in calculating the coded_block_flag parameter is to obtain the corresponding offset portion.
  • the offset value of each block is calculated according to coded_block_flag parameters of the neighboring left and top blocks of the current block.
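The base/offset relationship can be sketched as follows. The packing of the two neighbor flags into the offset (left + 2 x top, the ctxIdxInc convention of H.264 CABAC) is the common convention and is assumed here; the function names are ours:

```python
def coded_block_flag_offset(left_flag: int, top_flag: int) -> int:
    """Offset portion of the context ID, derived from the coded_block_flag
    of the neighboring left (A) and top (B) blocks of the current block."""
    return left_flag + 2 * top_flag

def context_id(base: int, left_flag: int, top_flag: int) -> int:
    """Context ID = base portion (from a lookup table) + offset portion."""
    return base + coded_block_flag_offset(left_flag, top_flag)

# Four possible offsets, one per combination of neighbor flags:
# (left=0, top=0) -> 0, (1, 0) -> 1, (0, 1) -> 2, (1, 1) -> 3
```

Since only the two neighbor flags matter, caching them per block (as the buffer arrays below do) is enough to compute every offset.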
  • the present process utilizes a universal algorithm for calculating the offset value of all blocks.
  • the present process provides a simplified method and device for calculating a flag parameter.
  • the present process provides a method for calculating a flag parameter in digital image decoding, wherein the method comprises: receiving a macroblock comprising MxN blocks; performing a first operation on M blocks along a first edge in a first direction to obtain corresponding M first parameters, then storing those first parameters into corresponding locations in a first buffer array; and performing a second operation on N blocks along a second edge in a second direction to obtain corresponding N second parameters, and storing the second parameters into corresponding locations in a second buffer array.
  • the present process then calculates a flag parameter corresponding to a given block according to corresponding values stored in the first and second buffer arrays according to a third operation; stores the flag parameter into the location corresponding to the neighboring right block next to the given block in the first buffer array and the location corresponding to the neighboring lower block next to the given block in the second buffer array; and repeats these steps for each of the MxN blocks in the order from blocks along the left-most edge, to blocks along the top-most edge, blocks along the second left edge, blocks along the second top edge, and so on.
  • the present process also provides a system for calculating a flag parameter in a digital image decoding, wherein the system comprises: a buffer configured with a plurality of arrays for storing flag parameters of a plurality of blocks within a macroblock into corresponding locations; and a computation unit coupled to the buffer and configured to perform a plurality of operations to obtain offset values corresponding to the plurality of blocks based on the flag parameters stored in the buffer.
  • the present process also provides a method for updating flag parameters in CABAC processing, wherein the method comprises: providing two buffer arrays having locations corresponding to a plurality of blocks respectively; storing first parameters corresponding to a first plurality of blocks along the left-most edge of the macroblock in the first buffer array; storing second parameters corresponding to a second plurality of blocks along the top-most edge of the macroblock in the second buffer array; and updating the first and second buffer arrays with flag parameters corresponding to the plurality of blocks obtained by an operation performed on each of the plurality of blocks within the macroblock in a specific order; wherein the specific order is arranged so that the neighboring left and upper blocks next to a given block are processed prior to the given block.
  • coded_block_flag parameters of two neighboring left and top blocks have to be determined first.
  • the offset value of block 3 is determined according to coded_block_flag parameters of block 1 and block 2;
  • the offset value of block 6 is determined according to coded_block_flag parameters of block 3 and block 4;
  • the offset value of block 7 is determined according to coded_block_flag parameters of block 5 and block 6; and so on.
  • obtaining the coded_block_flag parameter for blocks 3, 6, 7, 9, 11, 12, 13, 14 and 15 does not require complex processing, and it is easier to obtain the necessary information.
  • the present process makes use of this observation to simplify processing of coded_block_flag parameters.
  • when the block processing unit receives a digital image, the digital image may comprise a plurality of macroblocks, and each macroblock further comprises MxN blocks.
  • the buffer includes two buffer arrays corresponding to the blocks for storing coded_block_flag parameters of the neighboring left and upper blocks next to a given block, respectively.
  • each of the buffer arrays includes 16 units, and each unit may store 1 bit of data. That is, a total of 32 bits can be stored.
  • the pair of coded_block_flag parameters corresponding to the given block together forms the offset value of the given block. Then the offset value will be combined with corresponding base value of the given block so as to determine the coded_block_flag parameter.
  • the computation unit is coupled to the block processing unit and buffer, and is configured to implement various operations.
  • the computation unit may perform a first operation, a second operation and a third operation on the current macroblock being processed to obtain the related coded_block_flag parameters.
  • the macroblock comprises MxN blocks such as 4x4 blocks.
  • the first operation is performed on blocks along a first edge in a first direction
  • the second operation is performed on blocks along a second edge in a second direction
  • the third operation is performed on every block of the current macroblock in a specific order.
  • the resulting parameters are written and/or updated to the corresponding location in the buffer array.
  • coded_block_flag parameters of the neighboring left and upper blocks need not be obtained by the conventional method and may take advantage of previously determined parameters. But for blocks lying on the left-most and top-most edges of the macroblock, reference must be made to the macroblocks at left or at top (i.e. macroblock A and/or B) of the current macroblock. As a result, once the coded_block_flag parameters of blocks on the left-most and top-most edges are determined, calculation of the remaining blocks may be simplified using these determined parameters without repeating the same processing for every block. Efficient computation is thus achieved, along with a reduction in time.
  • in the first operation, M blocks on the first edge in the first direction are processed to obtain respective first parameters.
  • the first parameters represent coded_block_flag parameters of the neighboring left blocks next to the M blocks respectively. These M blocks are blocks 0, 2, 8 and 10 on the left-most edge. For each of the blocks 0, 2, 8 and 10, the coded_block_flag parameter of the neighboring left block (lying within macroblock A) is calculated. The resulting first parameters are then written to corresponding locations in the buffer respectively.
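An illustrative sketch of the first and second operations. The array names nA and nB follow the buffer notation used later in this description; the edge block indices (0, 2, 8, 10 and 0, 1, 4, 5) come from the text, while the example flag values and the helper function are hypothetical:

```python
# Two 16-entry buffer arrays: nA[i] holds the coded_block_flag of the LEFT
# neighbor of 4x4 block i, nB[i] that of the UPPER neighbor of block i.
# Block indices follow the 4x4 scan layout:
#   0  1  4  5
#   2  3  6  7
#   8  9 12 13
#  10 11 14 15
LEFT_EDGE = (0, 2, 8, 10)  # blocks whose left neighbor lies in macroblock A
TOP_EDGE = (0, 1, 4, 5)    # blocks whose upper neighbor lies in macroblock B

def fill_edges(nA, nB, flags_from_A, flags_from_B):
    """First operation: store left-neighbor flags for the left-most edge.
    Second operation: store upper-neighbor flags for the top-most edge."""
    for blk, flag in zip(LEFT_EDGE, flags_from_A):
        nA[blk] = flag
    for blk, flag in zip(TOP_EDGE, flags_from_B):
        nB[blk] = flag

nA, nB = [0] * 16, [0] * 16
fill_edges(nA, nB, flags_from_A=[1, 0, 1, 0], flags_from_B=[1, 1, 0, 0])
```

After these two operations, every edge block can read its cross-macroblock neighbor flag directly from the buffer instead of re-deriving it.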
  • the first operation can be the universal algorithm adopted.
  • the second operation is performed on N blocks on the second edge in the second direction to obtain respective second parameters.
  • the second parameters represent coded_block_parameters of the neighboring upper blocks (lying within macroblock B) next to the N blocks.
  • the second edge is taken as the top most edge having blocks 0, 1, 4 and 5.
  • the resulting second parameters are then written to corresponding locations in the buffer. Determination of the second operation for a given block is described as follows.
  • the coded_block_flag parameter of the neighboring upper block next to blocks 0, 1, 4, 5 is set to 0.
  • macroblock B is an inter-MB
  • the current macroblock is an intra-MB and is transmitted by data partition, and a constrained intra prediction flag (constrained_intra_pred_flag) corresponding to the given block is set to "1"; the coded_block_flag parameter of the neighboring upper block next to blocks 0, 1, 4, 5 is set to 0.
  • constrained_intra_pred_flag constrained intra prediction flag
  • macroblock B consists of four blocks B0, B1, B2 and B3, each having 8x8 pixels.
  • the coded_block_flag parameter of block B2 is used and written when bit 2 of the coded block pattern (CBP) of macroblock B is set to 1; otherwise, the coded_block_flag parameters are set to 0.
  • CBP coded block pattern
  • the coded_block_flag parameter of block B3 is used and written when bit 3 of the CBP of macroblock B is set to 1; otherwise, the coded_block_flag parameters are set to 0.
  • macroblock B consists of four blocks, each having 4x4 pixels.
  • the coded_block_flag parameters of blocks B10 and B11 are used respectively when bit 2 of the CBP of macroblock B is set to 1; otherwise, the coded_block_flag parameters are set to 0.
  • the coded_block_flag parameters of blocks B14 and B15 are used and written to nB[4] and nB[5] respectively when bit 3 of the CBP of macroblock B is set to 1; otherwise, the coded_block_flag parameters are set to 0.
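The CBP test described above can be sketched as follows; the function name is ours, and the bit positions are the ones the text assigns to the regions of macroblock B:

```python
def neighbor_flag_from_cbp(cbp: int, bit: int, stored_flag: int) -> int:
    """Use a stored coded_block_flag from macroblock B only when the
    corresponding CBP bit of B is set; otherwise the flag is taken as 0."""
    return stored_flag if (cbp >> bit) & 1 else 0

# Per the description above: bit 2 of B's CBP gates blocks such as B2
# (or B10/B11), bit 3 gates blocks such as B3 (or B14/B15).
```

This makes the zero-coefficient case cheap: a cleared CBP bit short-circuits the lookup entirely.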
  • the second operation is much simpler than the conventional first operation because macroblock-adaptive frame/field (MBAFF) encoding is more flexible about the format of the neighboring left macroblock.
  • MBAFF macroblock-adaptive frame/field
  • the first operation adopts the conventional method.
  • the first operation can be simplified in the same way as the second operation so as to achieve fast computation.
  • coded_block_flag parameters that would be referenced by blocks on the first and second edges are obtained and stored in corresponding position within the buffer.
  • the third operation can proceed on each of the blocks by use of the stored parameters. The order of the third operation is arranged so that a given block will not be processed unless the neighboring left and upper blocks next to the given block have been processed and the corresponding coded_block_flag parameters have been written or updated in the buffer arrays. Then the offset value is calculated for the given block.
  • the CABAC decoder may decode the context ID and thus generate the corresponding coded_block_flag parameter of the given block. Furthermore, the coded_block_flag parameter of the given block is written or updated into corresponding locations of the buffer arrays for the use of other blocks that need reference to the value, which are blocks at the right of and lower to the given block. In such way, computation can be largely reduced and thus performance is improved without repeating complex processing for every block.
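A minimal sketch of the third operation and its write-back, assuming blocks are visited in the standard 4x4 scan order 0..15 (which guarantees the left and upper neighbors are processed first) and the common offset packing left + 2 x upper. Here `decode` is a stand-in for the CABAC decoding of the context ID:

```python
GRID = [[0, 1, 4, 5],
        [2, 3, 6, 7],
        [8, 9, 12, 13],
        [10, 11, 14, 15]]  # positions of the 4x4 blocks within the macroblock

def neighbors():
    """Map each block to its right and lower neighbor inside the macroblock."""
    right, below = {}, {}
    for r in range(4):
        for c in range(4):
            blk = GRID[r][c]
            if c < 3:
                right[blk] = GRID[r][c + 1]
            if r < 3:
                below[blk] = GRID[r + 1][c]
    return right, below

def third_operation(nA, nB, decode):
    """Visit blocks 0..15; decode(blk, offset) yields the block's
    coded_block_flag, which is written back for later neighbors."""
    right, below = neighbors()
    flags = {}
    for blk in range(16):
        offset = nA[blk] + 2 * nB[blk]   # left flag + 2 * upper flag
        flags[blk] = decode(blk, offset)
        if blk in right:
            nA[right[blk]] = flags[blk]  # left-neighbor flag of the block to the right
        if blk in below:
            nB[below[blk]] = flags[blk]  # upper-neighbor flag of the block beneath
    return flags

nA, nB = [0] * 16, [0] * 16
flags = third_operation(nA, nB, decode=lambda blk, off: 1)  # dummy decoder
```

Each flag is computed once and reused through the buffers, which is exactly the saving over re-deriving both neighbors for every block.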
  • coded_block_flag parameters of neighboring left or upper blocks next to blocks 0, 1, 2, 4, 5, 8, 10 are determined and stored in the buffer. Starting from block 0, a pair of coded_block_flag parameters corresponding to the upper-left block 0 are obtained; the computation unit then performs the third operation to obtain the offset value and further processes the offset value into a coded_block_flag parameter of block 0.
  • the procedure is executed as follows:
  • After the coded_block_flag of block 0 is decoded, it is stored in the corresponding locations for block 1 and for block 2, because block 0 is the neighboring left block next to block 1 and the neighboring upper block next to block 2. Both blocks 1 and 2 will need to reference the coded_block_flag parameter of block 0, so the value is stored for later use.
  • the CABAC decoder decodes the context ID of block 1 into the coded_block_flag parameter of block 1 and stores the value.
  • block 1 is the neighboring left block to block 4 and the neighboring upper block to block 3.
  • coded_block_flag parameter of block 2 can be obtained according to the value stored during the first operation of block 0.
  • the offset value of block 2 is obtained, and the context ID of block 2 = base value + offset value of block 2.
  • the CABAC decoder decodes context ID of Block 2 into coded_block_flag parameter of block 2 and stores the value.
  • block 2 is the neighboring left block to block 3 and the neighboring upper block to block 8.
  • coded_block_flag parameters of blocks on the left most and top most edges in a current macroblock are calculated through the first and second operations
  • coded_block_flag parameters of all the blocks in the current macroblock can be obtained based on coded_block_flag parameters of neighboring left and upper blocks. Since the second and third operations are simplified, the efficiency of calculating the coded_block_flag parameters is improved in this process.
  • a packet may be digital data corresponding to a macroblock, that is, a rectangular area obtained by dividing the image frame.
  • Embodiment (1) Usage of said video compression in the CAVLC/CABAC mode is described.
  • CABAC Context-adaptive binary arithmetic coding
  • CAVLC Context-adaptive variable-length coding (CAVLC) is a form of entropy coding used in the present process.
  • FIG. 3 is a diagram showing the placement of the CABAC and CAVLC components.
  • entropy coding via CABAC or CAVLC is done as a last step, after motion estimation (ME) and quantization have taken place.
  • the already-computed data comprise the transformed and quantized coefficients, macroblock types, motion vectors, plus additional side information.
  • in this process the CAVLC/CABAC stream is decoded and re-encoded to the other format, while signalling in the slice headers that the entropy coder type has been switched. No data are lost in this process, since detail removal (which comes from quantization) need not be performed again.
  • Embodiment (2) Usage of said video compression in the Multi-References mode is described.
  • Multi-references are associated with data header files and help limit the data blocks and macroblock transactions from scene change to scene change in a given file encode.
  • This step in the process helps by using one reference frame as a macroblock data flag file, thus decreasing the overall file size of the final encoded output.
  • the header files will turn out relevant path files, as shown in the example in Figure 4.
  • the Header provision step on those divided blocks includes the data points shown in Figure 5.
  • the compression step for a block whose header provision is completed can be outlined by the recovery point of the information structure and the subsequent data fields.
  • Embodiment (3) Usage of said video compression in the Intra Mode macroblock types (16x16, 8x8, and 4x4) with prediction modes is described.
  • When a block or macroblock is encoded in intra mode, a prediction block is formed based on previously encoded and reconstructed (but un-filtered) blocks.
  • This prediction block P is subtracted from the current block prior to encoding.
  • P may be formed for each 4x4 subblock or for a 16x16 macroblock.
  • Figure 6 shows a luminance macroblock in a QCIF frame and a 4x4 luma block that is required to be predicted.
  • the samples above and to the left have previously been encoded and reconstructed and are therefore available in the encoder and decoder to form a prediction reference.
  • the prediction block P is calculated based on the samples labeled A-M in Figure 7, as follows. Note that in some cases, not all of the samples A-M are available within the current slice: in order to preserve independent decoding of slices, only samples within the current slice are available for prediction.
  • DC prediction (mode 0) is modified depending on which samples A-M are available; the other modes (1-8) may only be used if all of the required prediction samples are available (except that, if E, F, G and H are not available, their value is copied from sample D).
  • the arrows in Figure 8 indicate the direction of prediction in each mode.
  • the predicted samples are formed from a weighted average of the prediction samples A-M.
  • the encoder may select the prediction mode for each block that minimizes the residual between P and the block to be encoded.
  • the 9 prediction modes (0-8) are calculated for the 4x4 block shown in Figure 6.
  • Figure 9 shows the prediction block P created by each of the predictions.
  • the Sum of Absolute Errors (SAE) for each prediction indicates the magnitude of the prediction error.
  • Mode 7 (vertical-right) is selected because this mode gives the smallest SAE; a visual comparison shows that the P block appears quite similar to the original 4x4 block.
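  • The mode selection described above can be sketched in Python (an illustrative sketch, not the patented implementation; only modes 0-2 are shown, and all function names are our own):

```python
def predict_4x4(mode, top, left):
    """Form a 4x4 prediction block from reconstructed neighbours.

    top  -- samples A-D (the row above the block)
    left -- samples I-L (the column left of the block)
    Only modes 0 (vertical), 1 (horizontal) and 2 (DC) are sketched here;
    the remaining directional modes 3-8 follow the same pattern.
    """
    if mode == 0:                     # vertical: copy the row above downwards
        return [list(top) for _ in range(4)]
    if mode == 1:                     # horizontal: copy the left column rightwards
        return [[left[r]] * 4 for r in range(4)]
    if mode == 2:                     # DC: mean of the available neighbours
        dc = (sum(top) + sum(left) + 4) // 8
        return [[dc] * 4 for _ in range(4)]
    raise ValueError("mode not implemented in this sketch")

def sae(block, pred):
    """Sum of Absolute Errors between the source block and a prediction."""
    return sum(abs(b - p) for rb, rp in zip(block, pred) for b, p in zip(rb, rp))

def best_mode(block, top, left):
    """Pick the sketched mode with the smallest SAE, as the encoder would."""
    costs = {m: sae(block, predict_4x4(m, top, left)) for m in (0, 1, 2)}
    return min(costs, key=costs.get), costs
```

For a block whose columns match the row above it, the vertical mode reproduces it exactly and wins with an SAE of 0.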
  • 16x16 Luma prediction modes As an alternative to the 4x4 luma modes described above, the entire 16x16 luma component of a macroblock may be predicted. Four modes are available, shown in diagram form in Figure 10:
  • Mode 0 vertical: extrapolation from upper samples (H).
  • Mode 1 horizontal: extrapolation from left samples (V).
  • Mode 2 DC: mean of upper and left-hand samples (H+V).
  • Mode 3 Plane: a linear "plane" function is fitted to the upper and left-hand samples H and V. This works well in areas of smoothly-varying luminance.
  • Each 8x8 chroma component of a macroblock is predicted from chroma samples above and/or to the left that have previously been encoded and reconstructed.
  • the 4 prediction modes are very similar to the 16x16 luma prediction modes described in section 3 and illustrated in Figure 10, except that the order of mode numbers is different: DC (mode 0), horizontal (mode 1), vertical (mode 2) and plane (mode 3). The same prediction mode is always applied to both chroma blocks. If any of the 8x8 blocks in the luma component are coded in Intra mode, both chroma blocks are Intra coded.
  • The intra prediction mode for each 4x4 block must be signalled to the decoder, and this could potentially require a large number of bits.
  • intra modes for neighbouring 4x4 blocks are highly correlated. For example, if previously-encoded 4x4 blocks A and B in Figure 11 were predicted using mode 2, it is likely that the best mode for block C (current block) is also mode 2.
  • the encoder and decoder calculate the most_probable_mode. If A and B are both coded in 4x4 intra mode and are both within the current slice, most_probable_mode is the minimum of the prediction modes of A and B; otherwise most_probable_mode is set to 2 (DC prediction).
  • the encoder sends a flag for each 4x4 block, use_most_probable_mode. If the flag is "1", the parameter most_probable_mode is used. If the flag is "0", another parameter remaining_mode_selector is sent to indicate a change of mode. If remaining_mode_selector is smaller than the current most_probable_mode then the prediction mode is set to remaining_mode_selector; otherwise the prediction mode is set to remaining_mode_selector+1. In this way, only 8 values of remaining_mode_selector are required (0 to 7) to signal the current intra mode (0 to 8).
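  • The signalling scheme above can be sketched as follows (illustrative Python; the function names are ours, not part of the specification):

```python
def most_probable_mode(mode_a, mode_b):
    """mode_a / mode_b are the intra modes of blocks A (left) and B (above),
    or None when a neighbour is unavailable or not coded in 4x4 intra mode."""
    if mode_a is None or mode_b is None:
        return 2                          # fall back to DC prediction
    return min(mode_a, mode_b)

def encode_mode(current_mode, mpm):
    """Return (use_most_probable_mode flag, remaining_mode_selector or None)."""
    if current_mode == mpm:
        return 1, None
    # Skip the MPM value, so only 8 selector values (0-7) cover modes 0-8.
    return 0, current_mode if current_mode < mpm else current_mode - 1

def decode_mode(flag, selector, mpm):
    """Invert encode_mode at the decoder."""
    if flag == 1:
        return mpm
    return selector if selector < mpm else selector + 1
```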
  • Figure 12 is a diagram showing the present process.
  • a "hybrid" video encoder model in which a prediction (formed from previously-transmitted frames) is subtracted from the current frame to form a residual (e.g. motion-compensated difference frame) which is then transformed (using a block transform such as the DCT), quantized and coded for transmission.
  • The data division step includes the directed encoding of the video source, which can be divided into subsets for output to the header provisions below.
  • Embodiment (3) Continued: Data Division Steps
  • Figure 13 is a sample diagram of the data division steps.
  • the below is a sample code of the data division steps.
    /*! Truncated class because of truncation */
    class Invisible { };
    /*! Truncated class, inheritance relation is hidden */
    class Truncated : public Invisible { };
    /* Class not documented with doxygen comments */
    class Undocumented { };
    /*! Class that is inherited using public inheritance */
    class PublicBase : public Truncated { };
    /*! A template class */
    template<class T> class Templ { };
    /*! Class that is inherited using protected inheritance */
    class ProtectedBase { };
    /*! Class that is inherited using private inheritance */
    class PrivateBase { };
  • Figure 5 is an example of the header provisioning steps in the diagram.
  • Embodiment (4) Weighted prediction support module
  • In the weighted prediction support module we utilize the following header structure in compression, and it can be programmed to express any other call functions after the compression as needed. This is used in the above sampling examples.
  • Embodiment (5) Usage of compression in a mobile hardware device
  • A cell phone or mobile device can act as a receiver or producer of video, via the built-in hardware camera or built-in modem.
  • The video compression outlined in this application proposes to add, in module form, video compression to a mobile device.
  • the video compression module is represented in this diagram as "A".
  • the code interacts with the call functions of the hardware device, in the video processing module.
  • the Module "A" will take its input and instructions from the data stream assigned by the embedded instructions.
  • the Module "A" compression will act according to those instructions.
  • The data stream that is input to "A" is representative of a video stream, consisting of the raw video (incoming) to be encoded.
  • This encode sequence in this instance can have a dynamic structure.
  • DCT: Discrete Cosine Transform
  • The DCT thus transforms the method of expressing an image from "pixel blocks" to "frequency components," preserving all information contained in the original input.
  • For a block to be coded, prediction images are searched for the most similar block, and the motion between these blocks is represented by a motion vector and prediction error information. Prediction images are produced by using motion vectors, and then each prediction error is computed.
  • intra-prediction based on similarities among adjacent pixels is used to compress an image. More specifically, from an original input image, the pixel value in an 8x8 block to be coded is predicted and produced by using pre-coded adjacent pixel values.
  • A prediction mode is set for the luminance and chroma components, and an appropriate mode is selected by optimization.
  • The present process intra-compresses and records this intra-prediction mode information together with the DCT of the residual image. Accurate intra-prediction can reduce the amount of data in the residual image, and thus achieve high-efficiency compression.
  • Because the intra-prediction process predicts within the limits of a single frame, it has an advantage over inter-frame prediction in preventing the deterioration of prediction accuracy even for volatile movement.
  • The data is handled in the following manner. Individual steps in the compression process: to compress the image, the codec will usually remove "spatial" and "temporal" redundancies.
  • Run-length encode the resulting series of values (that is, send the number of identical values in a "run" along with that value, instead of sending every value separately) and, finally, variable-length encode the run-length pairs.
  • This activity is (usually) performed on “blocks” of 8x8 pixels, and such blocks are sometimes aggregated into “macroblocks” of 4 blocks and/or “slices” for other purposes.
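  • The run-length step described above can be sketched as follows (illustrative Python; the names are ours):

```python
def run_length_encode(values):
    """Collapse runs of identical values into (count, value) pairs,
    so a long run of zeros after quantization costs only one pair."""
    out = []
    for v in values:
        if out and out[-1][1] == v:
            out[-1] = (out[-1][0] + 1, v)     # extend the current run
        else:
            out.append((1, v))                # start a new run
    return out

def run_length_decode(pairs):
    """Expand (count, value) pairs back into the original series."""
    return [v for count, v in pairs for _ in range(count)]
```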
  • This block can be transformed via DCT into the next block, which concentrates information from the entire matrix into the upper left-hand corner.
  • the upper-left-most DCT value represents an average block value (actually 8 times the mean value). It is called the "DC coefficient"; the other values are “AC coefficients”.
  • the DC coefficient must be sent with higher precision, so it is not included in the stream of values to be quantized, run-length encoded, and then variable-length encoded. Instead, the DC coefficients from all blocks may be combined and treated as a separate block to which the DCT transformation and quantizing and encoding steps are applied prior to transmission.
  • the transformed example block can be quantized into this block using quantization tables. Note that these Tables treat each location within the block separately, so as to improve subsequent encoding efficiency:
  • inter-picture prediction can be used to remove "temporal redundancy".
  • a frame in this instance is a matrix of 8x8 pixel blocks, encoded using the DCT process.
  • a block in a frame being encoded is quite similar to a block in another frame, either before the current frame or after it.
  • In the center, or "Current", frame, the front portion of the ball can be found in the "Previous" frame, and the back portion can be found in the subsequent ("Next") frame.
  • - I (Intra) frames are built only from INTRA-frame data.
  • - P (Predicted) frames can contain pointers to blocks within the most recent previous I or P frames.
  • - B (Bidirectional) frames can point to blocks within the most recent previous I or P frames or the closest subsequent I or P frames.
  • a Group of Pictures will be an I frame, followed by several sets composed of one P frame followed by several B frames.
  • frames are thought of as a matrix of 16x16 pixel blocks ("macroblocks"), each of which contains the DCT-encoded data for 4 8x8 blocks (along with the smaller chroma blocks) that will be used to create such an image, pointers to similar blocks elsewhere in the data stream, DCT-encoded data using similar blocks elsewhere in the data stream, or some combination of these.
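  • The Group of Pictures pattern described above can be sketched as follows (illustrative Python, display order only; real streams reorder frames so that reference frames are decoded before the B frames that point to them):

```python
def gop_frame_types(num_sets, b_per_set):
    """One Group of Pictures as described above: an I frame, followed by
    several sets each composed of one P frame and several B frames."""
    types = ["I"]
    for _ in range(num_sets):
        types.append("P")
        types.extend(["B"] * b_per_set)
    return types
```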
  • the process of searching frames for similar portions is described below.
  • the first problem is "How can you tell how similar two visual regions are?"
  • the approach is to develop a formula that compares pixel values at the same relative positions within each block.
  • the approaches are to compute the:
    - “mean square error” (MSE), or
    - “minimum absolute difference” (MAD)
  • Mean Square Error is calculated by taking the mean of the list of values generated by squaring the difference between corresponding values in each block.
  • the two most similar regions should have the smallest MSE or MAD.
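  • The two similarity measures can be sketched as follows (illustrative Python; MAD is computed here as a mean of absolute differences):

```python
def mse(block_a, block_b):
    """Mean square error between two equal-sized pixel blocks."""
    diffs = [a - b for ra, rb in zip(block_a, block_b) for a, b in zip(ra, rb)]
    return sum(d * d for d in diffs) / len(diffs)

def mad(block_a, block_b):
    """Mean of absolute differences between two equal-sized pixel blocks."""
    diffs = [abs(a - b) for ra, rb in zip(block_a, block_b) for a, b in zip(ra, rb)]
    return sum(diffs) / len(diffs)
```

Identical regions score 0 under both measures; the most similar candidate region is the one with the smallest score.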
  • Figure 15 is a diagram showing how the encoder searches for a segment of a subsequent ("Next") frame that is similar to a block to be encoded within the "Current" frame, outlined in heavy-line (after Li).
  • the search begins with the block in the same position as the reference block, as described above, and proceeds in this example by moving the comparison region (shown as a square) to the right some number of pixels at each step.
  • The step size may be a fraction of a pixel; since pixels are discrete, this requires interpolation of the pixel values between two known pixels.
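  • The integer-pixel part of such a search can be sketched as follows (illustrative Python full search using an absolute-difference cost; sub-pixel interpolation is omitted, and the names are ours):

```python
def block_search(current_block, ref_frame, top, left, search_range):
    """Integer-pixel full search: slide a comparison window over the
    reference frame around (top, left) and return the offset with the
    smallest sum-of-absolute-differences cost."""
    n = len(current_block)
    best = None
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            # Skip candidate positions that fall outside the frame.
            if y < 0 or x < 0 or y + n > len(ref_frame) or x + n > len(ref_frame[0]):
                continue
            cost = sum(abs(current_block[r][c] - ref_frame[y + r][x + c])
                       for r in range(n) for c in range(n))
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    cost, dy, dx = best
    return (dy, dx), cost        # motion vector and its matching error
```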
  • Embodiment (6) Encoding setting examples, and decompression
  • the following settings are examples of different encoding option combinations that affect the speed vs quality tradeoff at the same target bitrate. All the encoding settings were tested on a 720x448 @30000/1001 fps video sample, the target bitrate was 900kbps, and the machine was an AMD-64 3400+ at 2400 MHz in 64 bits mode. Each encoding setting features the measured encoding speed (in frames per second) and the PSNR loss (in dB) compared to the "very high quality" setting.
  • The video decoder receives the compressed bitstream, decodes each of the syntax elements and extracts the information described above (quantized transform coefficients, prediction information, etc.). This information is then used to reverse the coding process and recreate a sequence of video images.
  • the header information in this sequence would be the main item needed to extract the compression at the destinations (see Figure 16).
  • RTPpacket_t defines and functions:

    #define MAXRTPPAYLOADLEN      (65536 - 40)
    #define MAXRTPPACKETSIZE      (65536 - 28)
    #define H264PAYLOADTYPE       105
    #define H264SSRC              0x12345678
    #define RTP_TR_TIMESTAMP_MULT 1000

    void RTPUpdateTimestamp (int tr);
    void OpenRTPFile (char *Filename);
    void CloseRTPFile (void);
    int  WriteRTPNALU (NALU_t *n);
  • Embodiment (6) Usage of scene cut technology
  • Scene cut detection, which divides a video into units that are easy to handle, is necessary as a base technology. It comprehensively judges scene changes by combining changes in the image information with changes in the audio signal, and no machine-learning training data is required. This is utilized to improve the efficiency and performance of basic functionality of a video access system, such as cueing video by clicking a thumbnail image.
  • the first step for video-content analysis, content-based video browsing and retrieval is the partitioning of a video sequence into shots.
  • A shot is the fundamental unit of a video: it captures a continuous action from a single camera and represents a spatio-temporally coherent sequence of frames. Thus, shots are considered the primitives for higher-level content analysis, indexing and classification.
  • Figure 17 is a diagram that represents the Learning-based Approach for video cut detection.
  • Embodiment (7) Usage of Adaptive B-frame placement. B-frames as references / arbitrary frame order
  • B-frames are bi-directionally predicted frames. As shown in Figure 18, when producing B-frames the method used in our encoder can look both forwards and backwards for redundant picture information. This makes B-frames the most efficient frame type to use.
  • The combined skip and inter prediction method in the encoder when coding B pictures is analyzed.
  • the rate distortion costs of coding or skipping a macroblock are estimated prior to processing.
  • a decision whether to code the macroblock or stop further processing is made based on a Lagrangian cost function.
  • A selective fast intra mode decision for encoders is proposed, reducing the number of candidate modes using directional information. Coding time is reduced by 35-42% through early identification of macroblocks that are likely to be skipped during the coding process and through the reduced number of candidate modes, with no significant loss of rate-distortion performance.
  • coding time is substantially reduced because a significant number of macroblocks are not processed by the encoder.
  • the computational saving depends on the activity of video sequences.
  • Embodiment (8) Usage of 8x8 and 4x4 adaptive spatial transform
  • An exact-match integer 4x4 spatial block transform is used, allowing precise placement of residual signals without the feedback often found in prior codec designs. This is conceptually similar to the discrete cosine transform design, but simplified and made to provide exactly-specified decoding. The present process can also use an exact-match integer 8x8 spatial block transform, allowing highly correlated regions to be compressed more efficiently than with the 4x4 transform alone.
  • the block and diagram in Figure 19 show the relevant placement of parameters in this sequence.
  • Figure 20 represents the low-pass filtering of the predictor to improve prediction performance.
  • Table 3 represents the 8 x 8 Integer transform prediction performance.
  • Embodiment (9) Usage of Custom quantization matrices
  • Transform coefficients are quantized using a control parameter; with 8 bits per source sample there are 52 quantization steps, and for greater bit depths 6 more quantization steps are used for each additional bit.
  • Quantization step-sizes are not linearly related to the quantization parameter in this scenario, and perceptual-based quantization scaling matrices are separate for each block size. We also utilize separate quantization scaling matrices for intra and inter prediction. The default values of matrices are specified in the code.
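  • The non-linear relation between the quantization parameter and the step size can be sketched as follows (illustrative Python; the doubling every 6 parameter values and the 52-step count follow the description above, while the 0.625 base step size is an assumption of ours, not taken from this disclosure):

```python
def quantizer_step(qp, bit_depth=8):
    """Illustrative step size for quantization parameter qp.

    There are 52 parameter values for 8-bit samples (qp 0..51), plus 6
    more per extra bit of depth. The step size is non-linear in qp,
    doubling every 6 values; 0.625 is an assumed base step size.
    """
    num_steps = 52 + 6 * (bit_depth - 8)
    if not 0 <= qp < num_steps:
        raise ValueError("qp out of range for this bit depth")
    return 0.625 * 2 ** (qp / 6)
```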
  • Figure 21 is a diagram showing the placement of the quantization matrices components.
  • Embodiment (10) Usage of Lossless Mode
  • Entropy-coded transform-bypass is utilized to enhance the lossless representation of specific regions.
  • We utilize specific characteristics of images, such as the common phenomenon of contiguous 2-D areas of similar tones: every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values.
  • Huffman coding uses a specific method for choosing the representation for each symbol, resulting in a prefix code (sometimes called a "prefix-free code": the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol) that expresses the most common symbols using shorter strings of bits than are used for less common source symbols. No other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code.
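  • The left-neighbor differencing and prefix-code construction described above can be sketched as follows (illustrative Python; a textbook Huffman construction, not the patented code, and the names are ours):

```python
import heapq
from collections import Counter

def left_delta(row):
    """Replace every pixel but the first with the difference to its left
    neighbor, concentrating the distribution around small values."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def huffman_code(symbols):
    """Build a prefix code: frequent symbols get shorter bit strings."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries are (count, tiebreak, [(symbol, partial_code), ...]).
    heap = [(n, i, [(s, "")]) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)       # two least frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = [(s, "0" + code) for s, code in c1] + \
                 [(s, "1" + code) for s, code in c2]
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return dict(heap[0][2])
```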
  • Figure 22 is a diagram and code block, representing the lossless mode utilized in the present system.
  • Embodiment (11) Usage of Interlacing
  • Interlaced scan refers to one of two common methods for "painting" a video image on an electronic display screen (the second is progressive scan) by scanning or displaying each line or row of pixels.
  • This technique uses two fields to create a frame.
  • One field contains all the odd lines in the image, the other contains all the even lines of the image.
  • a PAL based television display for example, scans 50 fields every second (25 odd and 25 even). The two sets of 25 fields work together to create a full frame every 1/25th of a second, resulting in a display of 25 frames per second.
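  • The field split described above can be sketched as follows (illustrative Python; a frame is modeled as a list of pixel rows, and the names are ours):

```python
def split_fields(frame):
    """Split a frame into two interlaced fields: one holding the even
    lines (0, 2, 4, ...), the other the odd lines (1, 3, 5, ...)."""
    even = frame[0::2]
    odd = frame[1::2]
    return odd, even

def weave_fields(odd, even):
    """Recombine two fields into a full frame by interleaving their lines."""
    frame = []
    for e, o in zip(even, odd):
        frame.extend([e, o])
    return frame
```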
  • interlacing is utilized in the multiple ref frames and in the weighted prediction support.
  • Figure 23 is a diagram, and code block referencing this process.
  • Video webcasting with image quality identical to DVD video means that webcasting may be applied much more widely.
  • the present invention also enables a 5.1-channel surround or multi-surround audio streaming.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

This invention relates generally to video processing, and more particularly to a method and/or apparatus for transcoding video files and to uses thereof. The present invention provides an image data compression method that comprises dividing input image data into several packets; providing a flag to each of the obtained packets, the flag being capable of designating said packets as units of compression and decompression; and applying image compression to each of the aforementioned flagged packets, the image compression comprising transforming the image data into frequency components.
PCT/JP2009/001943 2009-04-29 2009-04-29 Procédé et système de compression de données d'image WO2010125606A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2009/001943 WO2010125606A1 (fr) 2009-04-29 2009-04-29 Procédé et système de compression de données d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2009/001943 WO2010125606A1 (fr) 2009-04-29 2009-04-29 Procédé et système de compression de données d'image

Publications (1)

Publication Number Publication Date
WO2010125606A1 true WO2010125606A1 (fr) 2010-11-04

Family

ID=43031774

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2009/001943 WO2010125606A1 (fr) 2009-04-29 2009-04-29 Procédé et système de compression de données d'image

Country Status (1)

Country Link
WO (1) WO2010125606A1 (fr)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9462282B2 (en) 2011-07-11 2016-10-04 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9525881B2 (en) 2011-06-30 2016-12-20 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9591311B2 (en) 2011-06-27 2017-03-07 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9635361B2 (en) 2011-06-24 2017-04-25 Sun Patent Trust Decoding method and decoding apparatus
US9794578B2 (en) 2011-06-24 2017-10-17 Sun Patent Trust Coding method and coding apparatus
US10154264B2 (en) 2011-06-28 2018-12-11 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10237579B2 (en) 2011-06-29 2019-03-19 Sun Patent Trust Image decoding method including determining a context for a current block according to a signal type under which a control parameter for the current block is classified
US10237562B2 (en) 2011-02-22 2019-03-19 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
USRE47366E1 (en) 2011-06-23 2019-04-23 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
USRE47537E1 (en) 2011-06-23 2019-07-23 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
US10439637B2 (en) 2011-06-30 2019-10-08 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10511844B2 (en) 2011-02-22 2019-12-17 Tagivan Ii Llc Filtering method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
CN117097905A (zh) * 2023-10-11 2023-11-21 合肥工业大学 一种无损图像分块压缩方法、设备、存储介质

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06334995A (ja) * 1993-03-25 1994-12-02 Sony Corp 動画像符号化又は復号化方法
JP2003348355A (ja) * 2002-05-29 2003-12-05 Canon Inc 画像処理装置及びその制御方法


Cited By (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10237562B2 (en) 2011-02-22 2019-03-19 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
US10798391B2 (en) 2011-02-22 2020-10-06 Tagivan Ii Llc Filtering method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
US10602159B2 (en) 2011-02-22 2020-03-24 Sun Patent Trust Image coding method, image decoding method, image coding apparatus, image decoding apparatus, and image coding and decoding apparatus
US10511844B2 (en) 2011-02-22 2019-12-17 Tagivan Ii Llc Filtering method, moving picture coding apparatus, moving picture decoding apparatus, and moving picture coding and decoding apparatus
USRE49906E1 (en) 2011-06-23 2024-04-02 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
USRE48810E1 (en) 2011-06-23 2021-11-02 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
USRE47547E1 (en) 2011-06-23 2019-07-30 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
USRE47537E1 (en) 2011-06-23 2019-07-23 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
USRE47366E1 (en) 2011-06-23 2019-04-23 Sun Patent Trust Image decoding method and apparatus based on a signal type of the control parameter of the current block
US11109043B2 (en) 2011-06-24 2021-08-31 Sun Patent Trust Coding method and coding apparatus
US9794578B2 (en) 2011-06-24 2017-10-17 Sun Patent Trust Coding method and coding apparatus
US10182246B2 (en) 2011-06-24 2019-01-15 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10200696B2 (en) 2011-06-24 2019-02-05 Sun Patent Trust Coding method and coding apparatus
US11758158B2 (en) 2011-06-24 2023-09-12 Sun Patent Trust Coding method and coding apparatus
US11457225B2 (en) 2011-06-24 2022-09-27 Sun Patent Trust Coding method and coding apparatus
US9635361B2 (en) 2011-06-24 2017-04-25 Sun Patent Trust Decoding method and decoding apparatus
US10638164B2 (en) 2011-06-24 2020-04-28 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10687074B2 (en) 2011-06-27 2020-06-16 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9912961B2 (en) 2011-06-27 2018-03-06 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9591311B2 (en) 2011-06-27 2017-03-07 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10750184B2 (en) 2011-06-28 2020-08-18 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10154264B2 (en) 2011-06-28 2018-12-11 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10652584B2 (en) 2011-06-29 2020-05-12 Sun Patent Trust Image decoding method including determining a context for a current block according to a signal type under which a control parameter for the current block is classified
US10237579B2 (en) 2011-06-29 2019-03-19 Sun Patent Trust Image decoding method including determining a context for a current block according to a signal type under which a control parameter for the current block is classified
US11792400B2 (en) 2011-06-30 2023-10-17 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10595022B2 (en) 2011-06-30 2020-03-17 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9794571B2 (en) 2011-06-30 2017-10-17 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9525881B2 (en) 2011-06-30 2016-12-20 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10165277B2 (en) 2011-06-30 2018-12-25 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US11356666B2 (en) 2011-06-30 2022-06-07 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10903848B2 (en) 2011-06-30 2021-01-26 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10382760B2 (en) 2011-06-30 2019-08-13 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10439637B2 (en) 2011-06-30 2019-10-08 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9462282B2 (en) 2011-07-11 2016-10-04 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US11343518B2 (en) 2011-07-11 2022-05-24 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10154270B2 (en) 2011-07-11 2018-12-11 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US11770544B2 (en) 2011-07-11 2023-09-26 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US10575003B2 (en) 2011-07-11 2020-02-25 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US9854257B2 (en) 2011-07-11 2017-12-26 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
US12108059B2 (en) 2011-07-11 2024-10-01 Sun Patent Trust Image decoding method, image coding method, image decoding apparatus, image coding apparatus, and image coding and decoding apparatus
CN117097905A (zh) * 2023-10-11 2023-11-21 合肥工业大学 一种无损图像分块压缩方法、设备、存储介质
CN117097905B (zh) * 2023-10-11 2023-12-26 合肥工业大学 一种无损图像分块压缩方法、设备、存储介质

Similar Documents

Publication Publication Date Title
US11166016B2 (en) Most probable transform for intra prediction coding
WO2010125606A1 (fr) Procédé et système de compression de données d'image
JP6193432B2 (ja) 大型マクロ・ブロックを用いたビデオ・コーディング
KR101344115B1 (ko) 큰 매크로블록들을 이용한 비디오 코딩
EP2774364B1 (fr) Partitionnement d'unité de transformée pour composantes de chrominance en codage vidéo
US9025661B2 (en) Indicating intra-prediction mode selection for video coding
EP2781094B1 (fr) Choix d'un mode de référence dans le codage à mode intra
US20120082222A1 (en) Video coding using intra-prediction
EP2347591A2 (fr) Codage vidéo avec de grands blocs
EP2904788A1 (fr) Codage intra pour format d'échantillon 4 : 2 : 2 dans un codage vidéo
WO2010039729A2 (fr) Codage vidéo avec de grands macroblocs
US8768083B2 (en) Apparatus and method for encoding images, and apparatus and method for decoding images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09843952

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED29-02-2012)

122 Ep: pct application non-entry in european phase

Ref document number: 09843952

Country of ref document: EP

Kind code of ref document: A1