WO2002062072A1 - Method for post-processing decoded video image, using diagonal pixels - Google Patents


Info

Publication number
WO2002062072A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
processed
post
processing
mean
Prior art date
Application number
PCT/FI2002/000074
Other languages
English (en)
French (fr)
Other versions
WO2002062072A8 (en)
Inventor
Jarno Tulkki
Original Assignee
Hantro Products Oy
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hantro Products Oy filed Critical Hantro Products Oy
Publication of WO2002062072A1
Publication of WO2002062072A8


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/527 Global motion vector estimation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/86 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness

Definitions

  • the invention relates to a method, an apparatus, a computer program and computer memory means for post-processing decoded video image formed of consecutive still images.
  • Video image is encoded and decoded in order to reduce the amount of data so that the video image can be stored more efficiently in memory means or transferred using a telecommunication connection.
  • An example of a video coding standard is MPEG-4 (Moving Pictures Expert Group), where the idea is to send video image in real time on a wireless channel. This is a very ambitious aim: if the image to be sent is, for example, of CIF size (288 x 352 pixels) and the transmission frequency is 15 images per second, then 36.5 million bits should be packed into 64 kilobits each second. The packing ratio would in such a case be extremely high, 570:1.
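The packing ratio quoted above can be checked with a few lines of arithmetic; this sketch assumes a raw representation of 24 bits per pixel, while the resolution, frame rate and channel capacity are taken from the text:

```python
# Back-of-the-envelope check of the 570:1 packing ratio quoted above.
width, height = 352, 288      # CIF resolution in pixels
fps = 15                      # transmitted still images per second
bits_per_pixel = 24           # assumed raw representation (8 bits per component)

raw_bitrate = width * height * bits_per_pixel * fps   # bits to pack each second
channel_bitrate = 64_000                              # 64 kilobits per second

ratio = raw_bitrate / channel_bitrate
print(f"{raw_bitrate / 1e6:.1f} Mbit/s into 64 kbit/s -> ratio {ratio:.0f}:1")
```

This reproduces both the 36.5 million bits per second and the ratio of roughly 570:1 mentioned in the text.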
  • In order to transfer an image, the image is typically divided into image blocks, the size of which is selected to be suitable for the system.
  • the image block information generally comprises information about the brightness, colour and location of an image block in the image itself.
  • the data in the image blocks is compressed block-by-block using a desired coding method. Compression is based on deleting the less significant data.
  • the compression methods are mainly divided into three different categories: spectral redundancy reduction, spatial redundancy reduction and temporal redundancy reduction. Typically various combinations of these methods are employed for compression.
  • In spectral redundancy reduction, a YUV colour model is for instance applied.
  • the YUV model takes advantage of the fact that the human eye is more sensitive to the variation in luminance, or brightness, than to the changes in chrominance, or colour.
  • the YUV model comprises one luminance component (Y) and two chrominance components (U, V).
  • the chrominance components can also be referred to as cb and cr components.
  • According to the H.263 video coding standard, the size of a luminance block is 16 x 16 pixels
  • the size of each chrominance block is 8 x 8 pixels, together covering the same area as the luminance block.
  • the combination of one luminance block and two chrominance blocks is referred to as a macro block.
  • the macro blocks are generally read from the image line-by-line.
  • Each pixel in both the luminance and chrominance blocks may obtain a value ranging between 0 and 255, meaning that eight bits are required to represent one pixel. For example, value 0 of a luminance pixel refers to black and value 255 refers to white.
  • In spatial redundancy reduction, a discrete cosine transform (DCT) is applied, whereby the pixel presentation in the image block is transformed into a spatial frequency presentation.
  • the discrete cosine transform is basically a lossless transform, and the signal is distorted only in quantization.
  • Temporal redundancy tends to be reduced by taking advantage of the fact that consecutive images generally resemble one another, and therefore instead of compressing each individual image, the motion data in the image blocks is generated.
  • the basic principle is the following: a reference block matching the image block to be encoded as closely as possible is searched for among previously encoded data, the motion between the reference block and the image block to be encoded is modelled, and the motion vector coefficients are sent to the receiver.
  • the difference between the block to be encoded and the reference block is indicated as a prediction error component, or prediction error frame.
  • a reference picture, or reference frame, previously stored in the memory can be used in motion vector prediction of the image block.
  • Such a coding is referred to as intercoding, which means utilizing the similarities between the images in the same image string.
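The block-matching principle described above can be sketched as follows. This is a hypothetical illustration, not the patent's or any standard's exact search: the function name, the sum-of-absolute-differences criterion and the small search window are our assumptions.

```python
import numpy as np

def best_motion_vector(ref, cur_block, top, left, search=2):
    """Return the displacement (dy, dx) within +/-search whose window in the
    reference frame best matches cur_block by sum of absolute differences."""
    h, w = cur_block.shape
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            # Only consider candidate windows fully inside the reference frame.
            if 0 <= y and 0 <= x and y + h <= ref.shape[0] and x + w <= ref.shape[1]:
                sad = np.abs(ref[y:y + h, x:x + w] - cur_block).sum()
                if best is None or sad < best[0]:
                    best = (sad, (dy, dx))
    return best[1]

# A bright 2 x 2 patch that has moved one pixel down and right between frames.
ref = np.zeros((8, 8))
ref[3:5, 3:5] = 10.0
cur_block = ref[3:5, 3:5].copy()     # the encoder sees this block at (2, 2)
mv = best_motion_vector(ref, cur_block, top=2, left=2)
```

Instead of sending the block itself, only the found displacement and the (here zero) prediction error would be transmitted.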
  • the discrete cosine transform can be performed for the macro block using the formula:
  • table 1 shows an example of how an 8 x 8 pixel block is transformed using the discrete cosine transform.
  • the upper part of the table shows the non-transformed pixels, and the lower part of the table shows the result after the discrete cosine transform has been carried out, where the first element of value 1303, what is known as a dc coefficient, depicts the mean size of the pixels in the block, and the remaining 63 elements, what are known as ac coefficients, illustrate the spread of the pixels in the block.
  • Table 1 illustrates a block in which the spread between pixels is small. As Table 2 shows, the ac coefficients receive small values, meaning that the block can be compressed very efficiently.
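The dc/ac behaviour described above can be illustrated with the standard two-dimensional DCT-II. This is a self-contained sketch with orthonormal scaling (our own implementation; the patent's unreproduced formula may use a different scaling), under which the dc coefficient of an 8 x 8 block equals the block sum divided by 8:

```python
import numpy as np

def dct2(block):
    """2-D orthonormal DCT-II of an N x N block, the transform applied to
    8 x 8 pixel blocks in DCT-based codecs (our own implementation)."""
    n = block.shape[0]
    k = np.arange(n)
    # 1-D DCT-II basis matrix: row u is frequency, column m is the sample index.
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)            # dc row gets the orthonormal scaling
    return c @ block @ c.T

# A nearly flat 8 x 8 block: the dc coefficient carries the mean size of the
# pixels (scaled by 8 under this normalization); the ac coefficients stay tiny.
block = np.full((8, 8), 163.0)
block[0, 0] = 165.0
coeffs = dct2(block)
```

A flat block like this compresses well precisely because its ac spread is negligible, matching the description of Table 1.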
  • the discrete cosine transformed block is "quantized", i.e. each element therein is basically divided by a constant.
  • This constant may vary between different macro blocks.
  • a higher divider is generally used for ac coefficients than for dc coefficients.
  • The "quantization parameter", from which said dividers are calculated, ranges between 1 and 31. The more zeroes are obtained in the block, the better the block is packed, since zeroes are not sent to the channel.
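Why quantization makes zeroes, and why they stay lost, can be shown in a few lines. The divider `step = 2 * qp` below is an illustrative choice, not a claim about any particular codec, and the coefficient values are made up:

```python
import numpy as np

# Coefficients divided by a quantizer step and rounded: everything smaller
# than about half the step drops to zero, and inverse quantization cannot
# bring it back.
coeffs = np.array([1303.0, 25.0, -9.0, 4.0, -2.0, 1.0, 0.0, 0.0])
qp = 17                                  # quantization parameter, range 1..31
step = 2 * qp                            # illustrative divider derived from qp

quantized = np.round(coeffs / step)      # encoder side
reconstructed = quantized * step         # decoder side (inverse quantization)
```

Here every ac coefficient except 25 drops to zero; the values -9, 4, -2 and 1 are lost for good, which is exactly the irrecoverable loss described in the following paragraphs.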
  • Different coding methods can also be performed on the quantized blocks, and finally a bit stream can be formed thereof that is sent to a decoder.
  • Quantization is a problem in video coding, as the higher the quantization used, the more information disappears from the image and the final result is unpleasant to watch.
  • After decoding the bit stream and performing the decoding methods, a decoder basically carries out the same measures as the encoder when generating a reference image, meaning that similar steps are performed on the blocks as in the encoder but inversely. Finally the assembled video image is supplied onto a display, and the final result depends to a great extent on the quantization parameter used. If an element in the block descends to zero during quantization, it can no longer be restored in inverse quantization. The discrete cosine transform and quantization cause the quality of the image to deteriorate, which can be observed as noise and segmentation.
  • a method according to claim 1 is provided for post-processing the decoded video image.
  • a computer program according to claim 15 is also provided as an aspect of the invention.
  • computer memory means according to claim 16 are provided.
  • an apparatus according to claim 17 is provided for post-processing the decoded video image. Further preferred embodiments of the invention are disclosed in the dependent claims.
  • The invention is based on the idea that in post-processing all or nearly all pixels in a still image are processed, except for the pixels at the edge of the image. The pixels to be used for generating a pixel to be processed are selected from the diagonals, or diameters, of a square fictitiously formed to surround the pixel to be processed, utilizing in the selection the information about the quantization parameter of the macro block to which the pixel belongs.
  • the reference pixels are selected only if they deviate from the pixel to be processed by at most the remainder lost in quantization.
  • the smoothing reduces the segmentation that can be seen in the image, but at the same time the boundaries between some objects are sharpened, meaning that the invention enables the image to be smoothed and sharpened simultaneously.
  • Figure 1 shows apparatuses for encoding and decoding video image
  • Figures 2A, 2B and 2C show the choice of reference pixels
  • Figures 3A and 3B show different embodiments for selecting reference pixels
  • Figure 3C illustrates the advantage achieved with the choice of reference pixels
  • Figure 4 illustrates the pixels at the interfaces between the apparatus parts shown in Figure 1.
  • Figure 5 is a flow chart illustrating a method for post-processing decoded video image formed of consecutive still images.
  • With reference to Figure 1, apparatuses for encoding and decoding video image are described.
  • the face of a person 100 is filmed using a video camera 102.
  • the camera 102 produces video image of individual consecutive still images, whereof one still image 104 is shown in the Figure.
  • the camera 102 forms a matrix describing the image 104 as pixels, for example as described above, where both luminance and chrominance are provided with specific matrices.
  • a data flow 106 depicting the image 104 as pixels is next applied to an encoder 108. It is naturally also possible to provide such an apparatus, in which the data flow 106 is applied to the encoder 108, for instance along a data transmission connection or from computer memory means. In such a case, the idea is to compress uncompressed video image 106 using the encoder 108, for instance in order to forward or store it.
  • the encoder 108 comprises discrete cosine transform means 110 for performing discrete cosine transform as described above for the pixels in each still image 104.
  • a data flow 112 formed using discrete cosine transform is applied to quantization means 114 that carry out quantization using a selected quantization ratio.
  • Other types of coding, which are not further described in this context, can also be performed on the quantized data flow 116.
  • the compressed video image formed using the encoder 108 is transferred over a channel 118 to the decoder 120.
  • the channel 118 may for instance be a fixed or wireless data transmission connection.
  • the channel 118 can also be interpreted as a transmission path, by means of which the video image is stored in memory means, for example on a laser disc, and by means of which the video image is read from the memory means and processed using the decoder 120.
  • the decoder 120 comprises inverse quantization means 122, which are used to decode the quantization performed in the encoder 108.
  • the inverse quantization is unfortunately unable to restore an element of the block whose value descends to zero in quantization.
  • An inverse quantized data flow 124 is next applied to inverse discrete cosine transform means 126, which carry out an inverse discrete cosine transform on the pixels in each still image 104.
  • The data flow 128 obtained is then applied, through other possible decoding processes, onto a display 130, which shows the video image formed of still images 104.
  • the encoder 108 and decoder 120 can be placed into different apparatuses, such as computers, subscriber terminals of various radio systems like mobile stations, or into other apparatuses where video image is to be processed.
  • the encoder 108 and the decoder 120 can also be connected to the same apparatus, which can in such a case be referred to as a video codec.
  • Figure 4 describes prior art pixels at the interfaces 106, 112, 116, 124 and 128 between the apparatus parts shown in Figure 1.
  • the test image used is the first 8 x 8 luminance block in the first image of the test sequence "calendar_qcif.yuv" known to those skilled in the art.
  • the interface 106 shows the contents of the data flow after the camera 102.
  • the interface 112 depicts the contents of the data flow after the discrete cosine transform means 110.
  • the interface 116 shows the contents of the data flow after the quantization means 114.
  • the quantization ratio used is 17.
  • the interface 124 describes the contents of the data flow after the inverse quantization means 122.
  • As Figure 4 shows, when the original data flow 112 before quantization is compared with the reconstructed data flow 124 after the inverse quantization, the ac component values, which have descended to zero as a result of the quantization and which are represented at the interface 116, can no longer be restored. In practice this means that the original image 106 before encoding and the image reconstructed using the inverse discrete cosine transform means 126, described at the interface 128, no longer correspond with one another. Noise that degrades the quality of the image has appeared in the reconstructed image.
  • an apparatus is attached to the decoder 120 for post-processing the decoded video image formed of consecutive still images.
  • Said apparatus comprises processing means 140 for post-processing the still image.
  • the post-processing apparatus can be implemented so that it is integrated into the decoder 120, in which case the processing means 140 may constitute a processor including software.
  • the processing means 140 are arranged to repeat the post-processing, one pixel at a time, for the pixels in each still image.
  • When a pixel is post-processed, the pixels on both diagonals of a square area formed to surround the pixel to be processed are first selected as reference pixels. This selection phase is depicted in Figures 2A, 2B and 2C.
  • Figure 2A shows an 8 x 8 pixel block 200.
  • the pixel to be processed is described using reference numeral 202 and the letter P.
  • a square area 204 is formed around the pixel P to be processed including two diagonals 206 and 208.
  • the pixels on said diagonals 206, 208 are selected as described in Figure 2C as reference pixels 210, which are illustrated by the letter R.
  • the number of reference pixels R surrounding the pixel P to be processed is eight.
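The reference-pixel geometry of Figures 2A to 3A can be enumerated in a few lines; this is a sketch in which the function name and the (row, column) coordinate convention are ours:

```python
# For a pixel at (y, x), the reference pixels R lie on the two diagonals of a
# square of side 2*r + 1 centred on it: r = 2 gives the 5 x 5 square and eight
# R pixels, r = 1 gives the 3 x 3 square and four R pixels.
def diagonal_references(y, x, r):
    refs = []
    for d in range(1, r + 1):
        refs += [(y - d, x - d), (y - d, x + d),
                 (y + d, x - d), (y + d, x + d)]
    return refs

refs_5x5 = diagonal_references(10, 10, 2)   # the eight R pixels of Figure 2C
refs_3x3 = diagonal_references(10, 10, 1)   # the four R pixels of Figure 3A
```

The four nearest positions form the X-shaped pattern discussed below, with one arm pointing into each neighbouring block when the pixel sits at a block corner.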
  • Figure 3A describes an embodiment in which a square 300 formed to surround the pixel to be processed is smaller than the one in Figure 2B.
  • In Figure 2B the size of the square 204 is 5 x 5 pixels, but in Figure 3A the size of the square 300 is 3 x 3 pixels, whereby the number of reference pixels R obtained is four.
  • Figure 3B describes an embodiment in which the processing means 140 are arranged to select at least four reference pixels 302, 310, 312, 314, two from each diagonal, i.e. the first pixel on each diagonal part starting from the pixel to be processed. The diagonals are not shown in Figure 3B for clarity, but they are located as shown in Figure 2B. One of the four diagonal parts starting from the pixel to be processed is described, i.e. the diagonal part 208 sloping downwards to the right.
  • Figure 3B also describes an embodiment in which the processing means 140 are arranged to select four new reference pixels R in addition to the four already selected 302, 310, 312, 314, in such a manner that each newly selected pixel is either the following pixel 304 on said diagonal part, or a pixel 306 or 308 located adjacent to the first pixel 302 and the second pixel 304 of the diagonal part.
  • Four reference pixels R are a sort of minimum, as it is preferable that the reference pixels are evenly located around the pixel P to be processed.
  • Figure 3C illustrates the significance of how the reference pixels R are located. It is assumed that Figure 3C shows such a spot in a still image, where four blocks are placed adjacent to one another, the size of each block being for example 8 x 8 pixels. Thus, block 320 is found on top left, block 322 on top right, block 324 on bottom left and block 326 on bottom right. As the Figure shows, each one of the four reference pixels R closest to the pixel P to be processed is placed in a different block 320, 322, 324, 326.
  • the reference pixels R thus preferably form a pattern resembling the letter X.
  • In the example described, there would otherwise be no reference pixels in block 326; instead the same block 320 where the pixel P to be processed is placed would include a double amount of reference pixels, which might weaken the improvement of the image provided by post-processing.
  • the aim is therefore to select the reference pixels R in such a manner that as many as possible of the reference pixels R are located in different blocks than the pixel P to be processed.
  • the block boundaries are maximally faded in this way.
  • the reference pixels are generally located in a pattern resembling the letter X, the length of the branches thereof being determined by the length of the diagonal in the square.
  • the shape of the pattern formed as the letter X may also be distorted as shown in Figure 3B, meaning that the middle part of the letter X is even, but the tips of the branches in the letter X are twisted either to the left or to the right, when examining the situation in relation to the diagonal part 208.
  • the processing means 140 are arranged to form the absolute value of the difference between the pixel P to be processed and each reference pixel R, and if the absolute value is lower than the quantization parameter of the macro block to which the pixel P to be processed belongs, then the reference pixel R is selected to form the reference mean.
  • the processing means 140 are arranged to perform the following test: if at least one reference pixel R was selected to form the reference mean, then the reference mean of the selected reference pixels R is formed, and the mean of the pixel P to be processed and the reference mean is set as the new value of the pixel P to be processed.
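Putting the selection test and the mean formation together, the per-pixel operation can be sketched as follows. This is a minimal illustration with our own names; it assumes the 5 x 5 square of Figure 2B and ignores border handling:

```python
import numpy as np

def post_process_pixel(image, y, x, qp, r=2):
    """A reference pixel R on the diagonals qualifies only if |P - R| is
    below the quantization parameter qp of P's macro block; P then becomes
    the mean of P and the mean of the qualifying references."""
    p = float(image[y, x])
    selected = []
    for d in range(1, r + 1):
        for dy, dx in ((-d, -d), (-d, d), (d, -d), (d, d)):
            ref = float(image[y + dy, x + dx])
            if abs(p - ref) < qp:          # deviates at most by the quantization loss
                selected.append(ref)
    if not selected:                        # no reference qualified: P is kept as is
        return p
    reference_mean = sum(selected) / len(selected)
    return (p + reference_mean) / 2.0

img = np.full((8, 8), 100.0)
img[4, 4] = 110.0                           # mild blocking noise at the pixel P
smoothed = post_process_pixel(img, 4, 4, qp=17)
```

With qp = 17 every reference pixel qualifies (|110 - 100| < 17) and P is smoothed from 110 to 105; with qp = 5 none qualifies and P is left untouched, which is how a genuine object boundary survives the smoothing.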
  • the post-processing is repeated for the pixels of each still image at a time.
  • Figure 5 illustrates the particular post-processing process that is to be performed for a single still image. In practice, when video image is post-processed, the measures shown in Figure 5 are repeated for each individual still image of the video image. Post-processing can be performed for both the luminance data and the chrominance data of the still image.
  • the method starts from block 500, where a decoded still image is obtained into the memory.
  • the next pixel of the image that has not yet been post-processed is read, and it becomes the pixel P to be processed.
  • the quantization parameter of the macro block to which the pixel P to be processed belongs is read.
  • the pixels on both the diagonals in the square area formed to surround the pixel P to be processed are selected as reference pixels R as described above.
  • the number of reference pixels is indicated in our example with the letter N.
  • the processing means 140 process still image 104 line-by-line, column-by-column, macro block-by-macro block, block-by-block or in accordance with another predetermined non-random way.
  • the processing to be performed in blocks 508, 510, 512, 518 and 520, where the choice of a reference pixel for calculating the reference mean is tested, can be simplified. This occurs in such a manner that if the reference pixel closer to the pixel to be processed on the diagonal is not selected to form the reference mean, then the reference pixels further in the same direction from the pixel to be processed on the diagonal are not selected to form the reference mean either. For example, in Figure 3B, if the reference pixel 302 is not selected to form the reference mean, then the other pixels in said branch, for instance the pixel 304, are not even worth testing, meaning that the remaining reference pixels in said branch can be bypassed in block 508. Obviously, in this example, N must correspondingly be reduced by the number of bypassed reference pixels.
  • the processing means 140 are arranged to calculate a weighted mean of the pixel to be processed and the reference mean, in other words formula 9 obtains the following form
  • Weighting can be used to adjust smoothing. For example, if the reference mean is weighted more than the pixel to be processed, then the image obtains more smoothing; correspondingly, weighting the pixel to be processed more than the reference mean reduces the smoothing. This kind of weighting affects the smoothing of evenly coloured areas in particular, in which case the colour is more evenly distributed in the coloured area when the reference mean is weighted more than the pixel to be processed.
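A weighted mean of this kind can be sketched in one line; the weights below are purely illustrative, and the function name is ours:

```python
# Hypothetical weighted variant of the pixel update: a larger w_ref pulls the
# result towards the reference mean (more smoothing), a larger w_p keeps the
# pixel's own value (less smoothing).
def weighted_update(p, reference_mean, w_p=1.0, w_ref=3.0):
    return (w_p * p + w_ref * reference_mean) / (w_p + w_ref)
```

With p = 110 and a reference mean of 100, the default weights give 102.5, closer to the reference mean than the unweighted mean of 105; reversing the weights gives 107.5, i.e. less smoothing.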
  • the processing means 140 are arranged to employ the value of an already post-processed pixel when post-processing an unprocessed pixel, in which case the still image is stored into the memory of the processing means 140 as one copy only. This has the advantage that the apparatus requires less memory than if both the unprocessed and post-processed image were separately stored into the memory.
  • the processing means 140 are arranged to weight the value of the quantization parameter using a weighting coefficient before the comparison with the absolute value of the difference is carried out, i.e. formula 4 obtains the following form
  • each branch of the letter X comprises only two reference pixels, meaning that the weighting might for instance be such that the reference pixel placed closer is weighted by a factor of two and the reference pixel further apart by a factor of one, whereby the weighting ratio is 2:1. If the processing means 140 comprise an adequate amount of calculation capacity, then a branch of the letter X may include even more reference pixels; if, for example, the number of reference pixels in a branch is three, the weighting ratio thereof may be 3:2:1.
  • Such a weighting provides the advantage that the accuracy of the smoothing can be improved, as the significance of the reference pixels is weighted.
  • Post-processing is not necessarily carried out for all the pixels in a still image.
  • the border areas in an image are problematic.
  • the processing means 140 are arranged so as not to perform post-processing on at least one, preferably two, of the leftmost columns, rightmost columns, topmost rows and bottommost rows of the still image. This does not substantially deteriorate the quality of the image, since the narrow margin that may include errors is not generally considered disturbing.
  • the processing means 140 in an embodiment are arranged to perform post-processing on at least one, preferably two, of the leftmost columns, rightmost columns, topmost rows and bottommost rows of the still image so that the pixel in the image that is perpendicularly closest to the reference pixel is employed as the value of a reference pixel outside the image.
  • This can be implemented for instance in such a manner that the operation logic of the method is able to retrieve the value of the pixel outside the border in block 508.
  • Another implementation is such that the rows and columns closest to the edges are copied to surround the pixel to provide a margin of two pixels so as to form a frame. This provides simpler operation logic but requires somewhat more memory owing to the copied pixels.
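The frame-copying implementation described above corresponds to edge replication; a sketch with NumPy's `pad` (the two-pixel margin matches the 5 x 5 square of reference pixels):

```python
import numpy as np

# Replicate the outermost rows and columns outwards by two pixels, so every
# pixel of the original image has diagonal reference pixels available, at the
# cost of a slightly larger buffer.
img = np.arange(16, dtype=float).reshape(4, 4)
framed = np.pad(img, pad_width=2, mode="edge")
```

The original image survives intact in the middle of the frame, and each copied border pixel takes the value of the perpendicularly closest pixel inside the image.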
  • Another embodiment that can be used for post-processing the pixels of the border areas is such that the processing means 140 are arranged not to select a reference pixel that is outside the image for post-processing.
  • the X-shaped pattern of the reference pixels thus lacks two or even three branches.
  • the processing means 140 can be implemented as a computer program operating in the processor, whereby for instance each required operation is implemented as a specific program module.
  • the computer program thus comprises the routines for implementing the steps of the method. In order to promote the sales of the computer program, it can be stored into the computer memory means, such as a CD-ROM (Compact Disc Read Only Memory).
  • the computer program can be designed so as to operate also in a standard general-purpose personal computer, in a portable computer, in a computer network server or in another prior art computer.
  • the processing means 140 can be implemented also as an equipment solution, for example as one or more application specific integrated circuits (ASIC) or as operation logic composed of discrete components.
  • Different hybrid implementations formed of software and equipment are also possible.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Image Processing (AREA)
PCT/FI2002/000074 2001-02-01 2002-01-31 Method for post-processing decoded video image, using diagonal pixels WO2002062072A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20010191 2001-02-01
FI20010191A FI109635B (sv) 2001-02-01 2001-02-01 Method and device for post-processing a video image

Publications (2)

Publication Number Publication Date
WO2002062072A1 true WO2002062072A1 (en) 2002-08-08
WO2002062072A8 WO2002062072A8 (en) 2003-11-06

Family

ID=8560199

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2002/000074 WO2002062072A1 (en) 2001-02-01 2002-01-31 Method for post-processing decoded video image, using diagonal pixels

Country Status (2)

Country Link
FI (1) FI109635B (sv)
WO (1) WO2002062072A1 (sv)


Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2302845B1 (en) 2009-09-23 2012-06-20 Google, Inc. Method and device for determining a jitter buffer level
US8630412B2 (en) 2010-08-25 2014-01-14 Motorola Mobility Llc Transport of partially encrypted media
US8477050B1 (en) 2010-09-16 2013-07-02 Google Inc. Apparatus and method for encoding using signal fragments for redundant transmission of data
US8751565B1 (en) 2011-02-08 2014-06-10 Google Inc. Components for web-based configurable pipeline media processing
US9313493B1 (en) 2013-06-27 2016-04-12 Google Inc. Advanced motion estimation

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11103463A (ja) * 1997-09-26 1999-04-13 Casio Comput Co Ltd Image coding method and storage medium
WO2000022832A1 (en) * 1998-10-09 2000-04-20 Telefonaktiebolaget Lm Ericsson (Publ) A METHOD AND A SYSTEM FOR CODING ROIs
WO2000044176A1 (en) * 1999-01-21 2000-07-27 Koninklijke Philips Electronics N.V. Method and arrangement for quantizing data
WO2000054511A2 (en) * 1999-03-09 2000-09-14 Conexant Systems, Inc. Error resilient still image packetization method and packet structure
JP2000358192A (ja) * 1999-06-14 2000-12-26 Sony Corp Scene description generating apparatus and method, object extracting method, and recording medium
JP2001043388A (ja) * 1999-07-29 2001-02-16 Canon Inc Image processing method, image processing apparatus and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PATENT ABSTRACTS OF JAPAN *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007138151A1 (en) 2006-05-30 2007-12-06 Hantro Products Oy Apparatus, arrangement, method and computer program product for digital video processing
US8396117B2 (en) 2006-05-30 2013-03-12 Google Inc. Apparatus, arrangement, method and computer program product for digital video processing
US8665318B2 (en) 2009-03-17 2014-03-04 Google Inc. Digital video coding
US8780984B2 (en) 2010-07-06 2014-07-15 Google Inc. Loss-robust video transmission using plural decoders
US9094663B1 (en) 2011-05-09 2015-07-28 Google Inc. System and method for providing adaptive media optimization
US9014265B1 (en) 2011-12-29 2015-04-21 Google Inc. Video coding using edge detection and block partitioning for intra prediction
US9210424B1 (en) 2013-02-28 2015-12-08 Google Inc. Adaptive prediction block size in video coding
US9807416B2 (en) 2015-09-21 2017-10-31 Google Inc. Low-latency two-pass video coding

Also Published As

Publication number Publication date
FI20010191A0 (sv) 2001-02-01
WO2002062072A8 (en) 2003-11-06
FI109635B (sv) 2002-09-13

Similar Documents

Publication Publication Date Title
US10026200B2 (en) System and method for encoding and decoding using texture replacement
US6125201A (en) Method, apparatus and system for compressing data
JP5107495B2 (ja) Quality-based image compression
US7545989B1 (en) System and method for encoding and decoding using texture replacement
JP3743384B2 (ja) Image coding apparatus and method, and image decoding apparatus and method
Belloulata et al. Fractal image compression with region-based functionality
US7177356B2 (en) Spatially transcoding a video stream
EP0908055A1 (en) Method, apparatus and system for compressing data
US20090016442A1 (en) Deblocking digital images
KR20100095833A (ko) Apparatus and method for compressing an image using ROI-dependent compression parameters
WO2002062072A1 (en) Method for post-processing decoded video image, using diagonal pixels
WO2002033979A1 (en) Encoding and decoding of video image
JP2004528791A (ja) Interframe coding method and apparatus
KR100555419B1 (ko) Video coding method
Zaghetto et al. Segmentation-driven compound document coding based on H.264/AVC-INTRA
WO2002067590A1 (en) Video encoding of still images
US8023559B2 (en) Minimizing blocking artifacts in videos
Yu et al. Advantages of motion-JPEG2000 in video processing
Kim et al. Subband coding using human visual characteristics for image signals
Wandt et al. Extending HEVC using texture synthesis
Bhojani et al. Hybrid video compression standard
JPH04311195A (ja) 画像信号符号化装置及び画像信号復号化装置
KR100775788B1 (ko) Coding method using flexible macro block ordering based on cyclic fine granularity scalability for image quality enhancement, and recording medium recording the method
Jacovitti et al. Bayesian removal of coding block artifacts in the harmonic angular filter features domain
Bender et al. Image enhancement using nonuniform sampling

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
CFP Corrected version of a pamphlet front page
CR1 Correction of entry in section i

Free format text: IN PCT GAZETTE 32/2002 DUE TO A TECHNICAL PROBLEM AT THE TIME OF INTERNATIONAL PUBLICATION, SOME INFORMATION WAS MISSING (81). THE MISSING INFORMATION NOW APPEARS IN THE CORRECTED VERSION.


REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP