WO2003043342A1 - Method, apparatus and computer for encoding successive images - Google Patents
Method, apparatus and computer for encoding successive images
- Publication number
- WO2003043342A1 (PCT/FI2002/000894)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- block
- motion vector
- indexed
- vector candidate
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/523—Motion estimation or motion compensation with sub-pixel accuracy
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
Definitions
- the invention relates to a method, an apparatus and a computer for encoding successive images.
- Encoding of successive images is used to reduce the amount of data so that data can be stored more efficiently in a memory means or transmitted using a telecommunications connection.
- An example of a video encoding standard is MPEG-4 (Moving Picture Experts Group).
- there are several image sizes in use, e.g. the cif size is 352 x 288 pixels and the qcif size 176 x 144 pixels.
- a single image is typically divided into blocks containing information on luminance, colour and location.
- the data included in the blocks are compressed blockwise by a desired encoding method.
- the compression is based on deletion of less significant data.
- Compression methods are mainly divided into three classes: spectral redundancy reduction, spatial redundancy reduction and temporal redundancy reduction. Typically various combinations of these methods are used in compression.
- a YUV colour model is used to reduce spectral redundancy.
- the YUV model utilizes the fact that the human eye is more sensitive to variations in luminance, i.e. light, than to variations in chrominance, i.e. colour.
- the YUV model includes one luminance component (Y) and two chrominance components (U and V, or Cb and Cr).
- a luminance block in accordance with the H.263 video encoding standard is 16 x 16 pixels and both chrominance blocks, which cover the same area as the luminance block, are 8 x 8 pixels.
- a combination of one luminance block and two chrominance blocks is called a macro block.
- Each pixel both in the luminance and in the chrominance block may receive a value from 0 to 255, i.e. eight bits are needed to present one pixel.
- value 0 of the luminance pixel means black and value 255 white.
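- To make the colour model and block structure concrete, here is a minimal sketch (not from the patent text) that converts RGB pixels to Y, Cb and Cr and extracts one macro block as a 16 x 16 luminance block plus two 8 x 8 chrominance blocks; the ITU-R BT.601 weights and the 2 x 2 averaging used for chroma subsampling are assumptions, since the text above does not specify them.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an H x W x 3 array of 8-bit RGB pixels to Y, Cb, Cr planes.
    The weights are the common ITU-R BT.601 ones (an assumption)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    y  =  0.299 * r + 0.587 * g + 0.114 * b              # luminance, 0..255
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b   # blue chrominance
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b   # red chrominance
    return y, cb, cr

def macro_block(y, cb, cr, mx, my):
    """Return one macro block whose upper left pixel is (mx, my): a 16 x 16
    luminance block and two 8 x 8 chrominance blocks covering the same area
    (4:2:0 subsampling by simple 2 x 2 averaging, one possible choice)."""
    y_blk = y[my:my + 16, mx:mx + 16]
    cb_blk = cb[my:my + 16, mx:mx + 16].reshape(8, 2, 8, 2).mean(axis=(1, 3))
    cr_blk = cr[my:my + 16, mx:mx + 16].reshape(8, 2, 8, 2).mean(axis=(1, 3))
    return y_blk, cb_blk, cr_blk
```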
- a discrete cosine transform (DCT) is typically used to reduce spatial redundancy.
- the pixel presentation of a block is transformed into a space frequency presentation.
- the spatial frequencies that appear in the block have high-amplitude coefficients, and the coefficients of frequencies that do not appear in the block are close to zero.
- the discrete cosine transform itself is a loss-free transform; distortion is introduced into the signal only in quantization.
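- As an illustration of the transform step, the following sketch computes a two-dimensional DCT-II of an 8 x 8 pixel block and its inverse with plain numpy; the 8 x 8 block size and the orthonormal scaling are conventional choices for the example, not requirements stated above.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, so that C @ block @ C.T is the 2-D DCT."""
    c = np.zeros((n, n))
    for k in range(n):
        for x in range(n):
            c[k, x] = np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    c *= np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)   # DC row gets the smaller scaling factor
    return c

def dct2(block):
    """Transform a pixel block into its spatial-frequency presentation."""
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def idct2(coeffs):
    """Inverse transform; without quantization this reproduces the block."""
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c

# A flat block concentrates its energy in the DC coefficient:
flat = np.full((8, 8), 128.0)
coeffs = dct2(flat)              # coeffs[0, 0] is large, the rest are ~0
assert np.allclose(idct2(coeffs), flat)
```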
- Temporal redundancy is reduced utilizing the fact that successive images usually resemble each other.
- motion data are generated on the blocks. This is called motion compensation.
- a reference image stored earlier in the memory is searched for a previously encoded block that matches the block to be encoded as closely as possible, the motion between the reference block and the block to be encoded is modelled, and the calculated motion vectors are transmitted to the receiver.
- the difference between the block to be encoded and the reference block is expressed as difference data.
- This kind of coding is known as inter-coding, which means utilization of similarities between images of the same image sequence.
- a search area where a block similar to the one in the image to be encoded is searched for is typically defined in the reference image.
- the best correspondence is found by calculating a cost function between the pixels of the block in the search area and the block to be encoded, e.g. a sum of absolute differences, $\mathrm{SAD}(u, v) = \sum_{x=0}^{15}\sum_{y=0}^{15}\left|f_{x,y} - r_{x+u,\,y+v}\right|$, the final motion vector $MV$ being the candidate $(u, v)$ that minimizes it, where
- $f_{x,y}$ is a pixel of the macro block to be encoded and
- $r_{x,y}$ is a pixel of the reference image in the search area.
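- A minimal sketch of this kind of full-search block matching with the SAD cost, assuming a 16 x 16 luminance block and a ±16-pixel search range; the function and parameter names are illustrative only, not taken from the text.

```python
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between two equally sized pixel blocks."""
    return np.abs(block.astype(int) - candidate.astype(int)).sum()

def full_search(ref, cur, bx, by, block=16, search=16):
    """Return the motion vector (dx, dy) minimising SAD for the block whose
    upper left corner in the current image is (bx, by)."""
    target = cur[by:by + block, bx:bx + block]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or x + block > ref.shape[1] or y + block > ref.shape[0]:
                continue  # candidate block would fall outside the reference image
            cost = sad(target, ref[y:y + block, x:x + block])
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```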
- An object of the invention is to provide an improved method, an improved apparatus and an improved computer program.
- One aspect of the invention provides a method of encoding successive images according to claim 1.
- One aspect of the invention provides an apparatus according to claim 9 for encoding successive images.
- One aspect of the invention provides an apparatus according to claim 17 for encoding successive images.
- One aspect of the invention provides a computer program according to claim 25 for encoding successive images.
- the invention is based on performing motion estimation using an indexed image and an indexed reference image.
- Figure 1 illustrates an apparatus for encoding successive images
- Figure 2 illustrates division of a qcif-sized image into blocks
- Figure 3 illustrates part of an image to be encoded
- Figure 4 illustrates part of a reference image
- Figure 8 is a flow chart illustrating a method of encoding successive images.
- Video encoding is well known to a person skilled in the art from standards and textbooks, e.g. from the following works, which are incorporated herein by reference: Vasudev Bhaskaran and Konstantinos Konstantinides: Image and Video Compression Standards - Algorithms and Architectures, Second Edition, Kluwer Academic Publishers, 1997, Chapter 6: The MPEG video standards; and Digital Video Processing, Prentice Hall Signal Processing Series, Chapter 6: Block Based Methods.
- the successive images to be encoded are typically moving images, e.g. video.
- a video image consists of individual successive images.
- the camera forms a matrix which presents the images as pixels.
- the data flow that presents the image as pixels is supplied to an encoder. It is also feasible to build a device where data flow is transmitted to the encoder along a data transmission connection, for example, or from the memory means of a computer. In that case the purpose is to compress an uncompressed video image with an encoder for forwarding or storage.
- the compressed video image formed by the encoder is transmitted along a channel to a decoder.
- the decoder performs the same functions as the encoder when it forms an image but inversely.
- the channel may be, for example, a fixed or a wireless data transmission connection.
- the channel can also be interpreted as a transmission path which is used for storing the video image in a memory means, e.g. on a laser disc, and by means of which the video image is read from the memory means and processed in the decoder.
- Encoding of other kind can also be performed on the compressed video image to be transmitted on the channel, e.g. channel coding by a channel coder.
- Channel coding is decoded by a channel decoder.
- the encoder and decoder may be arranged in different devices, such as computers, subscriber terminals of different radio systems, e.g. mobile stations, or in other devices where video is to be processed.
- the encoder and decoder can also be connected to the same device, which can be called a video codec.
- Figure 1 describes the function of the encoder on a theoretical level. In practice, the structure of the encoder will be more complicated since a person skilled in the art adds necessary prior art features to it, such as timing and blockwise processing of images.
- Successive images 130 are supplied to a frame buffer 102 for temporary storage.
- a single image 132 is supplied from the frame buffer 102 to block 104, where the desired coding mode is selected.
- the function of the device is controlled by a control part 100, which selects the desired coding mode and informs block 104 and block 120 of the selected coding mode 156, 158, for instance.
- the coding mode may be intra-coding or inter-coding. Motion compensation is not performed on an intra-coded image, whereas an inter-coded image is compensated for motion. Usually the first image is intra-coded and the following images are inter-coded. Intra-images can also be transmitted after the first image if, for example, sufficiently good motion vectors are not found for the image to be encoded.
- Block 104 receives only the image 132 arriving from the frame buffer 102 as input for the intra-image.
- the image 132 obtained from the frame buffer 102 is supplied as such 134 to a discrete cosine transform block 106 where the discrete cosine transform described at the beginning is performed.
- the image 136 on which discrete cosine transform has been performed is supplied to a quantization block 108, where quantization is performed, i.e. in principle each element of the image on which discrete cosine transform has been performed is divided by a constant and the result of the division is rounded to an integer. This constant may vary between different macro blocks.
- a quantization parameter, from which the divisors are calculated, is typically between 1 and 31. The more zeroes the block includes, the better it can be packed since zeroes are not transmitted to the channel.
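- A deliberately simplified sketch of the quantization in block 108 and the inverse quantization in block 112, using a single divisor derived from the quantization parameter; real H.263/MPEG-4 quantizers treat DC and AC coefficients differently, so the divisor 2*qp below is only an assumed approximation of the idea.

```python
import numpy as np

def quantize(coeffs, qp):
    """Divide each DCT coefficient by a constant derived from the quantization
    parameter (1..31) and round to an integer; many coefficients become zero
    and the block packs better, since zeroes are not transmitted."""
    return np.round(coeffs / (2 * qp)).astype(int)

def dequantize(levels, qp):
    """Inverse quantization: restore the coefficients as accurately as possible.
    The rounding loss here is why the reconstructed image does not fully match
    the original one."""
    return levels * (2 * qp)

# Round trip: small coefficients quantize to zero and are lost for good, e.g.
# reconstructed = dequantize(quantize(coeffs, qp=8), qp=8)
```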
- the quantized image 138 on which discrete cosine transform has been performed is supplied to a variable length coder 110, which outputs the encoded image 140 produced by the device.
- the quantized image 138 on which discrete cosine transform has been performed is taken from the quantization block 108 to an inverse quantization block 112, which performs inverse quantization on it, i.e. restores it to image 136 as accurately as possible. Then the inversely quantized image 142 is supplied to an inverse discrete cosine transform block 114, where inverse discrete cosine transform is performed. Since the discrete cosine transform is a loss-free transform and quantization is not, image 144 does not completely correspond to image 134.
- the purpose of inverse quantization and inverse discrete cosine transform is to produce an image in the encoder which is similar to the one produced by the decoder corresponding to the encoding device.
- the 'decoded' image 144 is then supplied to block 124, where the part deleted from the image, i.e. difference data, would be added to it if the image had been inter-coded. Since the image in question is intra-coded, nothing is added to it.
- This decision is made by block 120, where intra-coding is the pre-selected option, in which case there is nothing in the input of block 120 and thus nothing is included in its output 154 connected to block 124. After this, the intra-image 146 is stored in the frame buffer 116.
- a reconstructed image is stored in the frame buffer 116, i.e. the encoded image in the form in which it is after the decoding performed in the decoder.
- the image arriving at the device is stored in the first buffer 102 and the reconstructed 'previous' image is stored in the second buffer 116.
- inter-coding is selected in blocks 104 and 120.
- the image stored in the frame buffer 116 is now a reference image and the image to be encoded is the image 132 to be obtained next from the frame buffer 102.
- the next image is supplied to a motion estimation block 118 in addition to block 104.
- the motion estimation block 118 also receives a reference image 150 from the frame buffer 116.
- the function of the motion estimation block 118 will be described in greater detail below.
- the block searches the reference image for blocks corresponding to the blocks in the image to be encoded. Transitions between the blocks are expressed as motion vectors 152, 166, which are supplied both to the variable length coder and to the frame buffer 116.
- the reference image 148 is taken from the frame buffer 116 to block 122.
- Block 122 subtracts the reference image 148 from the image 132 to be encoded to provide difference data 164, which are supplied from block 104 via the discrete cosine transform block 106 and quantization block 108 to the variable length coder 110.
- variable length coder 110 encodes the difference data 138 and motion vectors 166, in which case the output 140 of the variable length coder 110 provides an inter-coded image.
- the variable length coder 110 receives as inputs the quantized difference data 138 on which discrete cosine transform has been performed and motion vectors 166.
- the output 140 of the encoder thus provides the inter-coded image as compressed data, which represent the encoded image in relation to the reference image by means of motion vectors and difference data.
- the motion estimation is carried out using the luminance blocks, but the difference data to be encoded are calculated both for the luminance and the chrominance blocks.
- Inverse quantization is also performed on the difference data 138 of the inter-coded image in the inverse quantization block 112 and inverse discrete cosine transform in the inverse discrete cosine transform block 114.
- the difference data 144 processed this way are supplied to block 124, where the previous image 154 subtracted in the encoding of the inter-image in question and obtained from the place indicated by the motion vector is added to the difference data.
- the sum 146 of the difference data and the previous image is supplied from block 124 to the frame buffer 116 to obtain a reconstructed image.
- the reconstructed image corresponds to the image obtained in the decoder when the encoding of the inter-coded image 140 is decoded.
- the frame buffer 116 has a reference image ready for encoding of the image 132 received next from the frame buffer 102.
- the control block 100 controls the function of the encoder. In addition to selection of the coding mode, it controls selection 160 of the correct quantization ratio and performance 162 of encoding with a variable length, for instance.
- the control block 100 may also control other encoder blocks even though this is not illustrated in Figure 1.
- the function of the motion estimation block 118 is controlled by the control block 100.
- a method of encoding successive images will be described with reference to the flow chart shown in Figure 8. The encoding is presented expressly in respect of reduction of temporal redundancy and no other methods of redundancy reduction are described here.
- the method starts in block 800, where the encoder is started.
- the next image is retrieved from the frame memory.
- the image may be e.g. the qcif-sized image illustrated in Figure 2.
- the image is divided into macro blocks 200 whose luminance parts have the size of 16 x 16 pixels.
- the macro blocks comprise eleven columns and nine rows.
- the image is processed into an indexed image by dividing the image into parts referred to with indexes and forming a number from the values of pixels in each part to describe pixel values in the part concerned.
- the pixels included in the part form a square because this is advantageous according to the experiments carried out by the applicant.
- the image size and block size set certain limits on the size of the indexed part used in the method. According to the applicant's experiments, it is advantageous in the case of a qcif-sized image that the part includes 4 x 4 pixels.
- the pixels of the image form matrix 2, whose elements are the pixel values $f_{x,y}$.
- a number describing pixel values in a given part is formed from the values of pixels in each part.
- the simplest way to obtain this number is to add together the numerical values of the pixels in the part concerned.
- the number for each part of matrix 3 is obtained from matrix 2 by the following formula: $F_{i,j} = \sum_{x=i}^{i+3}\sum_{y=j}^{j+3} f_{x,y}$.
- the areas referred to with indexes $F_{0,0}$ and $F_{1,0}$ include the same twelve pixels of matrix 2, i.e. pixels $f_{1,0}$, $f_{1,1}$, $f_{1,2}$, $f_{1,3}$, $f_{2,0}$, $f_{2,1}$, $f_{2,2}$, $f_{2,3}$, $f_{3,0}$, $f_{3,1}$, $f_{3,2}$ and $f_{3,3}$.
- when the numbers for two adjacent parts referred to with indexes are calculated in an embodiment, the number already calculated for the second part is utilized in the calculation of the number for the first part.
- This principle of sliding calculation can be described as follows. As stated above, the number for the part referred to with index $F_{0,0}$ can be calculated using formula 5. The other numbers are then obtained in a sliding manner, e.g. $F_{i+1,j} = F_{i,j} - \sum_{y=j}^{j+3} f_{i,y} + \sum_{y=j}^{j+3} f_{i+4,y}$, so that a full 4 x 4 sum never needs to be recomputed.
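- The indexing and the sliding reuse of already computed sums can be sketched as follows; the row-then-column order of the sliding is an implementation choice made for the example, not something prescribed by the text.

```python
import numpy as np

def indexed_image(img, part=4):
    """Compute matrix 3 from matrix 2: out[i, j] is the sum of the part x part
    pixel area whose upper left corner is at (i, j). Adjacent indexes are
    obtained in a sliding manner, reusing the previously computed sums."""
    img = img.astype(np.int64)
    h, w = img.shape
    out = np.zeros((h - part + 1, w - part + 1), dtype=np.int64)

    # Vertical sums of 'part' consecutive pixels per column, slid row by row.
    cols = np.zeros((h - part + 1, w), dtype=np.int64)
    col = img[:part].sum(axis=0)
    cols[0] = col
    for i in range(1, h - part + 1):
        col = col - img[i - 1] + img[i + part - 1]   # slide the window down
        cols[i] = col

    # Slide horizontally over the column sums in the same manner.
    for i in range(h - part + 1):
        s = cols[i, :part].sum()
        out[i, 0] = s
        for j in range(1, w - part + 1):
            s = s - cols[i, j - 1] + cols[i, j + part - 1]
            out[i, j] = s
    return out

# For a qcif-sized 176 x 144 luminance image this yields 173 x 141 indexes
# (returned here as a 141-row by 173-column array).
```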
- if the coding mode selected in block 806 is intra-coding, we move to block 810.
- the image is encoded, i.e. discrete cosine transform and quantization are performed on it but no motion estimation.
- the intra-coded image is stored as a reference image.
- the indexed image is stored in block 828 as an indexed reference image referred to with indexes.
- a number is formed from the values of pixels in each part to describe pixel values in the part concerned. It should be noted that the fact that the image has been processed into an indexed image in block 804 is utilized here.
- blocks 826 and 828 may be optional since the reference image does not necessarily need to be the image that immediately precedes the image to be encoded but it can also be an earlier image.
- From block 828 we move to block 830, where it is checked whether there are images to be encoded left. If there are no images left, we move in accordance with arrow 832 to block 834, where the method ends. If there are images left, we move in accordance with arrow 836 to block 802, where the next image is retrieved from the frame memory.
- if the coding mode selected in block 806 is inter-coding, we move in accordance with arrow 812 from block 806 to block 814, where a search area, in which the block to be encoded in the indexed image is searched for, is defined in the indexed reference image.
- Inter-coding cannot thus be performed on the first image because it requires at least one reference image.
- there has to be one intra-coded image i.e. the operations of blocks 810, 826 and 828 have to have been performed on the reference image in question.
- the reference image may naturally be an image which has been inter-coded earlier.
- Figures 3 and 4 illustrate how a search is performed.
- Figure 3 illustrates part of an image 300 to be encoded and Figure 4 part of a reference image 400.
- the parts 300, 400 illustrate the same point of the qcif-sized image shown in Figure 2.
- the image 300 to be encoded thus consists of luminance blocks with a size of 16 x 16 pixels.
- the size of the chrominance blocks is usually 8 x 8 pixels but they are not shown in Figures 3 and 4 because chrominance blocks are not utilized in motion estimation. It should be noted that, for the sake of clarity, Figures 3 and 4 describe the real content of the images without indexing.
- a search area 402 is thus defined in the reference image 400, which is the indexed reference image in our example. This area is searched for an image element included in the image to be encoded, i.e. the indexed encoded image in our example.
- the image element is located in block 302.
- the search for motion vectors is usually limited to a search area 402 with a size of [-16, 16], in which case the search area 402 consists of nine blocks of 16 x 16 pixels.
- the nine blocks of the search area 402 are located in the reference image 400 in the manner shown in Figure 4 around the location of the block 302 to be encoded in the image to be encoded 300.
- the size of the search area 402 is thus 48 x 48 pixels. In that case the number of possible motion vectors, i.e. motion vector candidates, is 33 x 33.
- Figures 5, 6 and 7 illustrate indexing by describing part of the search area 402 of Figure 4 pixel-by-pixel 502.
- Figure 5 shows which of the areas 500 referred to with indexes are needed in the calculation of motion vector candidate [0,0], i.e. which elements of matrix 3 are used.
- a block 404 corresponding to the block 302 to be encoded in the (indexed) image 300 to be encoded was found in the (indexed) reference image.
- Motion of the block 302 to be encoded with respect to the block 404 found in the reference image is expressed by a motion vector 406.
- the motion vector may be described as a motion vector of the pixel of the leftmost upper corner in the block 302 to be encoded, for instance.
- the other pixels of block 302 naturally also move in the direction of the motion vector concerned.
- the origin (0, 0) of the image is usually the pixel in the left upper corner in the image.
- motions are expressed as follows: a motion to the right is positive, a motion to the left is negative, a motion up is negative and a motion down is positive.
- the motion vector 406 is (12, -4), i.e. the motion is twelve pixels to the right in the direction of the X axis and four pixels up in the direction of the Y axis.
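- To illustrate how the indexed matrices can stand in for pixel-level comparison during the candidate search, the sketch below scores every one-pixel motion vector candidate using only the sixteen non-overlapping 4 x 4 part sums of a 16 x 16 block; using SAD over these part sums is an assumption made for the example, as the excerpt does not spell out the exact cost function.

```python
import numpy as np

def indexed_block(indexed, x, y, block=16, part=4):
    """The sixteen non-overlapping 4 x 4 part sums covering a 16 x 16 block
    whose upper left pixel is at (x, y), read straight from the indexed matrix."""
    return indexed[y:y + block:part, x:x + block:part]

def indexed_motion_search(indexed_ref, indexed_cur, bx, by, search=16):
    """Pick the one-pixel motion vector candidate with the lowest cost, where
    the cost is the SAD between indexed part sums (an assumed cost function)."""
    target = indexed_block(indexed_cur, bx, by)
    h, w = indexed_ref.shape
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            # Skip candidates whose block would not fit in the (unpadded)
            # indexed reference image; overfilling, described later, lifts this limit.
            if x < 0 or y < 0 or x + 13 > w or y + 13 > h:
                continue
            cost = np.abs(target - indexed_block(indexed_ref, x, y)).sum()
            if cost < best_cost:
                best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```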
- Next we move to block 820, where it is tested whether blocks to be encoded are still left in the image to be encoded. If there are blocks to be encoded left, we move in accordance with arrow 822 to block 814, where the search for the block corresponding to the next block to be encoded starts in the reference image. The loop according to arrow 822 is repeated until the blocks of the image to be encoded have been processed in the desired manner, either all or some of them.
- the area around the motion vector candidate in question is searched for the best motion vector candidate with an accuracy of half a pixel.
- This is illustrated in Figure 8 with block 840.
- the location of the 16 x 16 block found in one-pixel motion estimation is additionally checked with an accuracy of half a pixel.
- normally this would require an 18 x 18 pixel matrix so that the pixels can be interpolated, but in our index-based method the area to be interpolated is only a 6 x 6 matrix of indexes.
- a motion vector of half a pixel is obtained by applying formula 13, in which case the method also needs 16 times fewer calculations than traditional motion estimation with an accuracy of half a pixel.
- the best motion vector candidate is searched for with an accuracy of half a pixel as follows: half-pixel values are interpolated for the indexed candidate block found, which corresponds to the one-pixel motion vector candidate in the reference image, and around the block; a cost function is calculated for each half-pixel motion vector candidate using the indexed image and the interpolated, indexed candidate block; and the image block to be encoded is encoded using the half-pixel motion vector candidate that gives the lowest cost function value.
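- A sketch of this half-pixel refinement, assuming simple bilinear interpolation and a SAD cost; neither the interpolation filter nor the cost function is specified in the excerpt, so both are assumptions made for the example.

```python
import numpy as np

def interpolate_half_pel(area):
    """Bilinearly upsample a small matrix by a factor of two so that values
    at half-pixel positions become available (an assumed, common filter)."""
    area = area.astype(float)
    h, w = area.shape
    up = np.zeros((2 * h - 1, 2 * w - 1))
    up[::2, ::2] = area                                    # integer positions
    up[1::2, ::2] = (area[:-1, :] + area[1:, :]) / 2.0     # vertical halves
    up[::2, 1::2] = (area[:, :-1] + area[:, 1:]) / 2.0     # horizontal halves
    up[1::2, 1::2] = (area[:-1, :-1] + area[:-1, 1:] +
                      area[1:, :-1] + area[1:, 1:]) / 4.0  # diagonal halves
    return up

def refine_half_pel(target, up, cx, cy):
    """Evaluate the integer candidate at (cx, cy) in the upsampled grid and
    the eight half-pixel positions around it; return the cheapest (dx, dy)
    offset in pixels together with its SAD cost."""
    n, m = target.shape
    best_off, best_cost = (0.0, 0.0), float("inf")
    for oy in (-1, 0, 1):
        for ox in (-1, 0, 1):
            x, y = cx + ox, cy + oy
            if x < 0 or y < 0 or y + 2 * n - 1 > up.shape[0] or x + 2 * m - 1 > up.shape[1]:
                continue  # offset would reach outside the interpolated area
            cand = up[y:y + 2 * n:2, x:x + 2 * m:2]
            cost = np.abs(target - cand).sum()
            if cost < best_cost:
                best_cost, best_off = cost, (ox / 2.0, oy / 2.0)
    return best_off, best_cost
```

- Here (cx, cy) locates the integer-pixel candidate inside the interpolated area, so one step in the upsampled grid corresponds to half a pixel in the original image, and the returned offset is added to the one-pixel motion vector found earlier.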
- An embodiment utilizes the well-known fact that existing encoding standards also allow motion vectors pointing outside the image. In the indexed motion estimation described, this is achieved by overfilling the index table so that there are 16 pixels on each edge of the image, i.e. at the top, at the bottom and on the sides, which have been copied there from the outer edges of the actual image area. This can be performed using block 842 of Figure 8.
- the size of the indexed matrix 3 is the image size minus three, i.e. in our example 173 x 141. When the overfilling is taken into account, the image size is 173+32 x 141+32, or 176+29 x 144+29. Number 29 is naturally obtained by subtracting three from the space required by overfill, i.e. number 32.
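- The overfilling itself can be sketched as edge replication before the index table is built; the use of numpy's edge-padding is an implementation choice for the example.

```python
import numpy as np

def overfill(img, border=16):
    """Copy the outer edges of the actual image area outwards by 'border'
    pixels; the indexed matrix is then built from this enlarged image, so
    motion vectors may point outside the original image area."""
    return np.pad(img, border, mode="edge")

# For a 176 x 144 qcif image the overfilled image is 208 x 176 pixels and the
# indexed matrix built from it has (176+32-3) x (144+32-3) = 205 x 173 entries.
```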
- the method described can be implemented using the encoder shown in Figure 1, for example.
- the encoder shown in Figure 1 i.e. the apparatus for encoding successive images, comprises means 110 for encoding the image block to be encoded using the motion vector candidate that gives the lowest cost function value.
- the motion vector candidate defines the motion between the image block to be encoded and the candidate block in the search area of the reference image.
- the apparatus further comprises means 118 for processing the image into an indexed image and the reference image into an indexed reference image so that the image and the reference image are divided into parts referred to with indexes and a number is formed from the values of pixels in each part to describe pixel values in a given part; defining a search area in the indexed reference image where the block to be encoded in the indexed image is searched for; and calculating a cost function for each motion vector candidate using the indexed image and the indexed reference image.
- the apparatus can also be configured to encode the image block to be encoded using the motion vector candidate that gives the lowest cost function value.
- the motion vector candidate defines the motion between the image block to be encoded and a candidate block in the search area of the reference image.
- the apparatus is also configured to: process the image into an indexed image and the reference image into an indexed reference image so that the image and the reference image are divided into parts referred to with indexes and a number is formed from the values of pixels in each part to describe pixel values in a given part; define a search area in the indexed reference image where the block to be encoded in the indexed image is searched for; and calculate a cost function for each motion vector candidate using the indexed image and the indexed reference image.
- the encoder blocks shown in Figure 1 can be implemented as one or more application-specific integrated circuits ASIC. Other embodiments are also feasible, such as a circuit built of separate logic components, or a processor with its software. A hybrid of these different embodiments is also feasible.
- a person skilled in the art will consider the requirements set on the size and power consumption of the device, necessary processing capacity, production costs and production volumes, for example.
- the above-mentioned means can be placed in the encoder blocks described or they can be implemented as new blocks related to the blocks described.
- the means for processing an image into an indexed image and a reference image into an indexed reference image can be implemented in block 118 or in the frame buffer 102, or using a new block connected to the frame buffer 102.
- the device can also be configured using the described blocks or new blocks.
- One embodiment of the encoder is a computer program on a carrier for encoding successive images, comprising computer executable instructions for causing a computer to perform the encoding when the software is run.
- the carrier can be any means for distributing the software to the customers.
- the carrier can be a distribution package (containing a diskette, CD-ROM or another computer readable medium for storing the software), a computer memory (for example a programmed memory chip or another memory device connectable to the computer), or a telecommunications signal (for example a signal transferred in the Internet and/or in a cellular radio network containing the software in normal or compressed format).
- the encoder may also be part of a complete video codec.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FI20012203 | 2001-11-13 | ||
FI20012203A FI110745B (fi) | 2001-11-13 | 2001-11-13 | Method and device for encoding successive images
Publications (1)
Publication Number | Publication Date |
---|---|
WO2003043342A1 (fr) | 2003-05-22 |
Family
ID=8562246
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/FI2002/000894 WO2003043342A1 (fr) | 2002-11-12 | Method, apparatus and computer for encoding successive images |
Country Status (2)
Country | Link |
---|---|
FI (1) | FI110745B (fr) |
WO (1) | WO2003043342A1 (fr) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5610658A (en) * | 1994-01-31 | 1997-03-11 | Sony Corporation | Motion vector detection using hierarchical calculation |
US5742710A (en) * | 1994-02-23 | 1998-04-21 | Rca Thomson Licensing Corporation | Computationally-efficient method for estimating image motion |
- WO1999041912A2 (fr) * | 1998-02-13 | 1999-08-19 | Koninklijke Philips Electronics N.V. | Method and device for video coding |
US5987180A (en) * | 1997-09-26 | 1999-11-16 | Sarnoff Corporation | Multiple component compression encoder motion search method and apparatus |
US6011870A (en) * | 1997-07-18 | 2000-01-04 | Jeng; Fure-Ching | Multiple stage and low-complexity motion estimation for interframe video coding |
US6014181A (en) * | 1997-10-13 | 2000-01-11 | Sharp Laboratories Of America, Inc. | Adaptive step-size motion estimation based on statistical sum of absolute differences |
- EP0979011A1 (fr) * | 1998-08-06 | 2000-02-09 | STMicroelectronics S.r.l. | Detection of a scene change in a motion estimator of a video encoder |
- EP1091592A2 (fr) * | 1999-10-07 | 2001-04-11 | Matsushita Electric Industrial Co., Ltd. | Video signal coding apparatus |
-
2001
- 2001-11-13 FI FI20012203A patent/FI110745B/fi not_active IP Right Cessation
-
2002
- 2002-11-12 WO PCT/FI2002/000894 patent/WO2003043342A1/fr not_active Application Discontinuation
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7463778B2 (en) | 2004-01-30 | 2008-12-09 | Hewlett-Packard Development Company, L.P | Motion estimation for compressing multiple view images |
US8396117B2 (en) | 2006-05-30 | 2013-03-12 | Google Inc. | Apparatus, arrangement, method and computer program product for digital video processing |
WO2007138151A1 (fr) | 2006-05-30 | 2007-12-06 | Hantro Products Oy | Appareil, agencement, procédé et produit de programme informatique pour un traitement vidéo numérique |
US8665318B2 (en) | 2009-03-17 | 2014-03-04 | Google Inc. | Digital video coding |
US9042261B2 (en) | 2009-09-23 | 2015-05-26 | Google Inc. | Method and device for determining a jitter buffer level |
US8780984B2 (en) | 2010-07-06 | 2014-07-15 | Google Inc. | Loss-robust video transmission using plural decoders |
US9078015B2 (en) | 2010-08-25 | 2015-07-07 | Cable Television Laboratories, Inc. | Transport of partially encrypted media |
US8907821B1 (en) | 2010-09-16 | 2014-12-09 | Google Inc. | Apparatus and method for decoding data |
US8838680B1 (en) | 2011-02-08 | 2014-09-16 | Google Inc. | Buffer objects for web-based configurable pipeline media processing |
US8891626B1 (en) | 2011-04-05 | 2014-11-18 | Google Inc. | Center of motion for encoding motion fields |
US9094663B1 (en) | 2011-05-09 | 2015-07-28 | Google Inc. | System and method for providing adaptive media optimization |
US9014265B1 (en) | 2011-12-29 | 2015-04-21 | Google Inc. | Video coding using edge detection and block partitioning for intra prediction |
US8908767B1 (en) | 2012-02-09 | 2014-12-09 | Google Inc. | Temporal motion vector prediction |
US9172970B1 (en) | 2012-05-29 | 2015-10-27 | Google Inc. | Inter frame candidate selection for a video encoder |
US9503746B2 (en) | 2012-10-08 | 2016-11-22 | Google Inc. | Determine reference motion vectors |
US9210424B1 (en) | 2013-02-28 | 2015-12-08 | Google Inc. | Adaptive prediction block size in video coding |
US9313493B1 (en) | 2013-06-27 | 2016-04-12 | Google Inc. | Advanced motion estimation |
US9807416B2 (en) | 2015-09-21 | 2017-10-31 | Google Inc. | Low-latency two-pass video coding |
Also Published As
Publication number | Publication date |
---|---|
FI110745B (fi) | 2003-03-14 |
FI20012203A0 (fi) | 2001-11-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10841613B2 (en) | Low-complexity intra prediction for video coding | |
- KR100319944B1 (ko) | Image encoding device and image decoding device | |
- KR101213326B1 (ko) | Signal processing device | |
- CN104244007B (zh) | Image encoding method and apparatus, and decoding method and apparatus | |
- RU2683591C1 (ru) | Method and device for motion compensation with prediction | |
- WO2003043342A1 (fr) | Method, apparatus and computer for encoding successive images | |
- CN103843345B (zh) | Encoding device, decoding device, encoding method and decoding method | |
- JP2002532026A (ja) | Improved motion estimation and block matching pattern | |
- KR20010006292A (ko) | Video picture coding apparatus and method | |
- CN113573053B (zh) | Video encoding/decoding device, method and computer-readable recording medium | |
- FI109634B (fi) | Method and device for encoding a video image | |
- FI109635B (fi) | Method and device for post-processing a video image | |
- CN112602327B (zh) | Method, apparatus and system for encoding and decoding transform blocks of video samples | |
- CN114615498A (zh) | Video decoding method, video encoding method, related device and storage medium | |
- US20080095240A1 (en) | Method for interpolating chrominance signal in video encoder and decoder | |
- US20070147511A1 (en) | Image processing apparatus and image processing method | |
- JP2005502285A (ja) | Method and apparatus for encoding successive images | |
- CN115769573A (zh) | Encoding method, decoding method and related device | |
- CN112532988A (zh) | Video encoding method, video decoding method and related device | |
- CN112913242B (zh) | Encoding method and encoding device | |
- JP2000308066A (ja) | Moving image encoding device and moving image encoding method | |
- JP2005151152A (ja) | Data processing device and method, and encoding device | |
- JP2004129160A (ja) | Image decoding device, image decoding method and image decoding program | |
- KR20220058959A (ko) | Image encoding device, image decoding device, and control method and program therefor | |
- CN116418991A (zh) | Processing method and device, video decoding method, video encoding method and device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
122 | Ep: pct application non-entry in european phase | ||
NENP | Non-entry into the national phase |
Ref country code: JP |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: JP |