WO2005045764A1 - Video encoding method and device - Google Patents

Video encoding method and device

Info

Publication number
WO2005045764A1
WO2005045764A1 (PCT/IB2004/003618)
Authority
WO
WIPO (PCT)
Prior art keywords
frame
ccs
quantization
frames
coefficients
Prior art date
Application number
PCT/IB2004/003618
Other languages
French (fr)
Inventor
Stephan Oliver Mietens
Original Assignee
Koninklijke Philips Electronics N.V.
Priority date
Filing date
Publication date
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Priority to EP04798778A priority Critical patent/EP1683110A1/en
Priority to US10/578,072 priority patent/US20070025440A1/en
Priority to JP2006537481A priority patent/JP2007515097A/en
Publication of WO2005045764A1 publication Critical patent/WO2005045764A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T9/00Image coding
    • G06T9/004Predictors, e.g. intraframe, interframe coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124Quantisation

Definitions

  • the present invention relates to a video encoding method provided for encoding an input image sequence consisting of successive groups of frames themselves subdivided into blocks, said method comprising the steps of : - preprocessing said sequence on the basis of a so-called content-change strength (CCS) computed for each frame by applying some predetermined rules ; - estimating a motion vector for each block of the current frame ; - generating a predicted frame using said motion vectors respectively associated to the blocks of the current frame ; - applying to a difference signal between the current frame and the last predicted frame a transformation sub-step producing a plurality of coefficients and followed by a quantization sub-step of said coefficients ; - coding said quantized coefficients.
  • CCS content-change strength
  • Said invention is for instance applicable to video encoding devices that require reference frames for reducing e.g. temporal redundancy (like motion estimation and compensation devices). Such an operation is part of current video coding standards and is expected to be similarly part of future coding standards also.
  • Video encoding techniques are used for instance in devices like digital video cameras, mobile phones or digital video recording devices. Furthermore, applications for coding or transcoding video can be enhanced using the technique according to the invention.
  • low bit rates for the transmission of a coded video sequence may be obtained by (among others) a reduction of the temporal redundancy between successive pictures. Such a reduction is based on motion estimation (ME) and motion compensation (MC) techniques.
  • ME motion estimation
  • MC motion compensation
  • Performing ME and MC for the current frame of the video sequence however requires reference frames (also called anchor frames).
  • reference frames also called anchor frames.
  • I-frames or intra frames
  • I-frames are independently coded, by themselves, without any reference to past or future frames (i.e. without any ME and MC)
  • P-frames or forward predicted pictures
  • B-frames or bidirectionally predicted frames
  • I- and P-frames serve as reference frames.
  • these reference frames need to be of high quality, i.e. many bits have to be spent to code them, whereas non-reference frames can be of lower quality (for this reason, a higher number of non-reference frames, B-frames in the case of MPEG-2, generally leads to lower bit rates).
  • the object of the invention is to propose a video encoding method based on said previous method for finding good frames that can serve as reference frames, while allowing the coding cost to be reduced more noticeably.
  • the invention relates to a video encoding method such as defined in the introductory paragraph of the description and in which said CCS is used in said quantization sub-step for modifying the quantization factor used in said quantization sub-step, said CCS and said quantization factor increasing or decreasing simultaneously.
  • the invention also relates to a device for implementing said method.
  • the document cited above describes a method for finding which frames in the input sequence can serve as reference frames, in order to reduce the coding cost.
  • the principle of this method is to measure the strength of content change on the basis of some simple rules, as listed below and illustrated in Fig. 1, where the horizontal axis corresponds to the number of the concerned frame and the vertical axis to the level of the strength of content change : the measured strength of content change is quantized to levels (for instance five levels, said number being however not a limitation), and I-frames are inserted at the beginning of a sequence of frames having content-change strength (CCS) of level 0, while P-frames are inserted before a level increase of CCS occurs or after a level decrease of CCS occurs.
  • CCS content-change strength
  • the measure may be for instance a simple block classification that detects horizontal and vertical edges, or other types of measures based on luminance, motion vectors, etc.
  • An implementation of this previous method in the MPEG encoding case is described in Fig.2.
  • the encoder comprises a coding branch 101 and a prediction branch 102.
  • the signals to be coded, received by the branch 101, are transformed into coefficients and quantized in a DCT and quantization module 11, the quantized coefficients being then coded in a coding module 13, together with motion vectors MV.
  • the prediction branch 102, receiving as input signals the signals available at the output of the DCT and quantization module 11, comprises in series an inverse quantization and inverse DCT module 21, an adder 23, a frame memory 24, a motion compensation (MC) circuit 25 and a subtracter 26.
  • the MC circuit 25 also receives the motion vectors MV generated by a motion estimation (ME) circuit 27 (many types of motion estimators may be used) from the input reordered frames (defined as explained below) and the output of the frame memory 24, and these motion vectors are also sent towards the coding module 13, the output of which ("MPEG output") is stored or transmitted in the form of a multiplexed bitstream.
  • ME motion estimation
  • the video input of the encoder (successive frames Xn) is preprocessed in a preprocessing branch 103.
  • First a GOP structure defining circuit 31 is provided for defining from the successive frames the structure of the GOPs.
  • Frame memories 32a, 32b, are then provided for reordering the sequence of I, P, B frames available at the output of the circuit 31 (the reference frames must be coded and transmitted before the non-reference frames depending on said reference frames). These reordered frames are sent on the positive input of the subtracter 26 (the negative input of which receives, as described above, the output predicted frames available at the output of the MC circuit 25, these output predicted frames being also sent back to a second input of the adder 23).
  • the output of the subtracter 26 delivers frame differences that are the signals to be coded processed by the coding branch 101.
  • a CCS computation circuit 33 is provided for the definition of the GOP structure. It has then been observed that the higher the CCS - which can result from motion - the less the viewer can really follow the presented video. It is consequently proposed, according to the present invention, to increase or decrease the quantization factor used in the module 11 as a function of the CCS - said CCS and the quantization factor increasing or decreasing simultaneously - which can be obtained by sending the output information of the CCS computation circuit towards the DCT and quantization module 11 of the coding branch.
  • the coding module 13 is in fact composed of a variable-length coding (VLC) circuit arranged in series with a buffer memory, the output of said memory being sent back towards a rate control circuit 133 for modifying the quantization factor.
  • VLC variable-length coding
  • an additional connection 200, intended to allow implementation of the proposed modification of the quantization factor, is provided between the CCS computation circuit 33 and the rate control circuit 133 and also between said circuit 33 and the DCT and quantization module 11 of the coding branch.
  • This connection 200 extends two coding modes of the coding system, namely a so-called open-loop coding mode (without bit-rate control) and a closed-loop coding mode (with bit-rate control).
  • the quantizer settings are usually fixed.
  • the resulting bit rate of the encoded stream is automatically lower for simple scenes (less residue needs to be coded) than for complex scenes (higher residue needs to be coded). Coding cases as described above, where the sequence contains high motion, result in complex scenes that are coded with high bit-rates.
  • the bit-rates for the high-motion scenes can be reduced by higher quantization, thereby removing spatial details of these scenes that the observer cannot follow due to the motion.
  • the quantization can be controlled by defining a quantization factor, q_ccs, which is a function of CCS and the original fixed quantizer factor, called q_fixed : q_ccs = q_fixed + f(CCS), where f() is a function resulting in positive integers from 0 to (q_max - q_fixed) to increase q_ccs from q_fixed up to an allowed maximum q_max.
  • the quantization factor, q_adapt, is adapted in order to achieve a desired predefined bit rate. Bit-rate controllers that are required for closed-loop coding work basically with bit budgets and choose q_adapt based on the available budget.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention relates to a video encoding method provided for encoding an input image sequence consisting of successive groups of frames in which each frame is itself subdivided into blocks, and to a corresponding video encoding device. This method and device perform the steps of preprocessing the sequence on the basis of a so-called content-change strength (CCS) computed for each frame, generating a predicted frame using motion vectors estimated for each block, applying to a difference signal between the current frame and the last predicted frame a transformation sub-step producing a plurality of coefficients and followed by a quantization sub-step of said coefficients, and coding said quantized coefficients. According to the invention, the CCS is used in the quantization sub-step for modifying the quantization factor used in this sub-step, the CCS and the quantization factor increasing or decreasing simultaneously.

Description

VIDEO ENCODING METHOD AND DEVICE
FIELD OF THE INVENTION The present invention relates to a video encoding method provided for encoding an input image sequence consisting of successive groups of frames themselves subdivided into blocks, said method comprising the steps of : - preprocessing said sequence on the basis of a so-called content-change strength (CCS) computed for each frame by applying some predetermined rules ; - estimating a motion vector for each block of the current frame ; - generating a predicted frame using said motion vectors respectively associated to the blocks of the current frame ; - applying to a difference signal between the current frame and the last predicted frame a transformation sub-step producing a plurality of coefficients and followed by a quantization sub-step of said coefficients ; - coding said quantized coefficients. Said invention is for instance applicable to video encoding devices that require reference frames for reducing e.g. temporal redundancy (like motion estimation and compensation devices). Such an operation is part of current video coding standards and is expected to be similarly part of future coding standards also. Video encoding techniques are used for instance in devices like digital video cameras, mobile phones or digital video recording devices. Furthermore, applications for coding or transcoding video can be enhanced using the technique according to the invention.
BACKGROUND OF THE INVENTION In video compression, low bit rates for the transmission of a coded video sequence may be obtained by (among others) a reduction of the temporal redundancy between successive pictures. Such a reduction is based on motion estimation (ME) and motion compensation (MC) techniques. Performing ME and MC for the current frame of the video sequence however requires reference frames (also called anchor frames). Taking MPEG-2 as an example, different frame types, namely I-, P- and B-frames, have been defined, for which ME and MC are performed differently : I-frames (or intra frames) are independently coded, by themselves, without any reference to past or future frames (i.e. without any ME and MC), while P-frames (or forward predicted pictures) are each encoded relative to a past frame (i.e. with motion compensation from a previous reference frame) and B-frames (or bidirectionally predicted frames) are encoded relative to two reference frames (a past frame and a future frame). The I- and P-frames serve as reference frames. In order to obtain good frame predictions, these reference frames need to be of high quality, i.e. many bits have to be spent to code them, whereas non-reference frames can be of lower quality (for this reason, a higher number of non-reference frames, B-frames in the case of MPEG-2, generally leads to lower bit rates). In order to indicate which input frame is processed as an I-frame, a P-frame or a B-frame, a structure based on groups of pictures (GOPs) is defined in MPEG-2. More precisely, a GOP uses two parameters N and M, where N is the temporal distance between two I-frames and M is the temporal distance between reference frames. For example, an (N,M)-GOP with N=12 and M=4 is commonly used, defining an " I B B B P B B B P B B B " structure. Succeeding frames generally have a higher temporal correlation than frames having a larger temporal distance between them. Therefore shorter temporal distances between the reference and the currently predicted frame on the one hand lead to higher prediction quality, but on the other hand imply that fewer non-reference frames can be used. Both a higher prediction quality and a higher number of non-reference frames generally result in lower bit rates, but they work against each other since the frame prediction quality results from shorter temporal distances only. However, said quality also depends on the usefulness of the reference frames to actually serve as references. For example, it is obvious that with a reference frame located just before a scene change, the prediction of a frame located just after the scene change is not possible with respect to said reference frame, although they may have a frame distance of only 1. On the other hand, in scenes with a steady or almost steady content (like video conferencing or news), even a frame distance of more than 100 can still result in high-quality prediction. From the above-mentioned examples, it appears that a fixed GOP structure like the commonly used (12, 4)-GOP may be inefficient for coding a video sequence, because reference frames are introduced too frequently, in the case of a steady content, or at an unsuitable position, if they are located just before a scene change. Scene-change detection is a known technique that can be exploited to introduce an I-frame at a position where a good prediction of the frame (if no I-frame is located at this place) is not possible due to a scene change.
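As a side illustration of the GOP parameters just described (not part of the patent text: a minimal Python sketch with a hypothetical helper name), the following function expands an (N, M)-GOP into its frame-type pattern; for N=12 and M=4 it reproduces the " I B B B P B B B P B B B " structure quoted above.

```python
def gop_pattern(n: int, m: int) -> str:
    """Expand one (N, M)-GOP into its frame-type pattern.

    n: temporal distance between two I-frames,
    m: temporal distance between consecutive reference frames (I or P).
    """
    types = []
    for i in range(n):
        if i == 0:
            types.append("I")   # a GOP starts with an intra-coded frame
        elif i % m == 0:
            types.append("P")   # forward-predicted reference frame
        else:
            types.append("B")   # bidirectionally predicted, non-reference frame
    return " ".join(types)

print(gop_pattern(12, 4))  # -> I B B B P B B B P B B B
```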
However, sequences do not profit from scene-change detection if the frame content is almost completely different after some frames having high motion, yet without any scene change at all (for instance, in a sequence where a tennis player is continuously followed within a single scene). A previous European patent application, already filed by the applicant on October 14, 2003 with the filing number 03300155.3 (PHFR030124), describes a new method for finding better reference frames. This method will be recalled below.
SUMMARY OF THE INVENTION It is therefore the object of the invention to propose a video encoding method based on said previous method for finding good frames that can serve as reference frames, while allowing the coding cost to be reduced more noticeably. To this end, the invention relates to a video encoding method such as defined in the introductory paragraph of the description and in which said CCS is used in said quantization sub-step for modifying the quantization factor used in said quantization sub-step, said CCS and said quantization factor increasing or decreasing simultaneously. The invention also relates to a device for implementing said method.
BRIEF DESCRIPTION OF THE DRAWINGS The present invention will now be described, by way of example, with reference to the accompanying drawings in which : - Fig. 1 illustrates the rules used for defining, according to the description given in the previous European patent application cited above, the place of the reference frames of the video sequence to be coded ; - Fig. 2 shows an encoder carrying out the encoding method described in said previous European patent application, taking the MPEG-2 case as an example ; - Fig. 3 shows an encoder carrying out the encoding method according to the invention.
DETAILED DESCRIPTION OF THE INVENTION The document cited above describes a method for finding which frames in the input sequence can serve as reference frames, in order to reduce the coding cost. The principle of this method is to measure the strength of content change on the basis of some simple rules, as listed below and illustrated in Fig. 1, where the horizontal axis corresponds to the number of the concerned frame and the vertical axis to the level of the strength of content change : the measured strength of content change is quantized to levels (for instance five levels, said number being however not a limitation), and I-frames are inserted at the beginning of a sequence of frames having content-change strength (CCS) of level 0, while P-frames are inserted before a level increase of CCS occurs or after a level decrease of CCS occurs. The measure may be for instance a simple block classification that detects horizontal and vertical edges, or other types of measures based on luminance, motion vectors, etc. An implementation of this previous method in the MPEG encoding case is described in Fig. 2. The encoder comprises a coding branch 101 and a prediction branch 102. The signals to be coded, received by the branch 101, are transformed into coefficients and quantized in a DCT and quantization module 11, the quantized coefficients being then coded in a coding module 13, together with motion vectors MV. The prediction branch 102, receiving as input signals the signals available at the output of the DCT and quantization module 11, comprises in series an inverse quantization and inverse DCT module 21, an adder 23, a frame memory 24, a motion compensation (MC) circuit 25 and a subtracter 26. The MC circuit 25 also receives the motion vectors MV generated by a motion estimation (ME) circuit 27 (many types of motion estimators may be used) from the input reordered frames (defined as explained below) and the output of the frame memory 24, and these motion vectors are also sent towards the coding module 13, the output of which ("MPEG output") is stored or transmitted in the form of a multiplexed bitstream. The video input of the encoder (successive frames Xn) is preprocessed in a preprocessing branch 103. First, a GOP structure defining circuit 31 is provided for defining from the successive frames the structure of the GOPs. Frame memories 32a and 32b are then provided for reordering the sequence of I, P, B frames available at the output of the circuit 31 (the reference frames must be coded and transmitted before the non-reference frames depending on said reference frames). These reordered frames are sent to the positive input of the subtracter 26 (the negative input of which receives, as described above, the output predicted frames available at the output of the MC circuit 25, these output predicted frames being also sent back to a second input of the adder 23). The output of the subtracter 26 delivers frame differences, which are the signals to be coded and processed by the coding branch 101. For the definition of the GOP structure, a CCS computation circuit 33 is provided. It has been observed that the higher the CCS - which can result from motion - the less the viewer can really follow the presented video.
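To make the frame-type rules recalled above easier to follow, here is a small, purely illustrative Python sketch (function and variable names are hypothetical; the actual CCS measure is the one computed by circuit 33): given per-frame CCS values already quantized to levels, it marks I-frames at the start of level-0 runs, P-frames just before a CCS increase or just after a CCS decrease, and leaves the remaining frames as B-frames.

```python
def assign_frame_types(ccs_levels):
    """Toy illustration of the frame-type rules of the cited application.

    ccs_levels: per-frame content-change strength, already quantized to
    integer levels (e.g. 0..4 for the five-level example).
    """
    n = len(ccs_levels)
    types = ["B"] * n                          # default: non-reference frame
    for i, level in enumerate(ccs_levels):
        prev = ccs_levels[i - 1] if i > 0 else None
        nxt = ccs_levels[i + 1] if i + 1 < n else None
        if level == 0 and prev != 0:
            types[i] = "I"                     # beginning of a level-0 run
        elif nxt is not None and nxt > level:
            types[i] = "P"                     # just before a CCS increase
        elif prev is not None and prev > level:
            types[i] = "P"                     # just after a CCS decrease
    return types

print(assign_frame_types([0, 0, 1, 2, 2, 1, 0, 0]))
# -> ['I', 'P', 'P', 'B', 'B', 'P', 'I', 'B']
```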
Based on the observation that the viewer cannot really follow the presented video when the CCS is high, it is proposed, according to the present invention, to increase or decrease the quantization factor used in the module 11 as a function of the CCS - said CCS and the quantization factor increasing or decreasing simultaneously - which can be obtained by sending the output information of the CCS computation circuit towards the DCT and quantization module 11 of the coding branch. As described in the conventional part of Fig. 3 (said Fig. 3 is introduced in the next paragraph in relation to the description of the invention), it is known, indeed, that the coding module 13 is in fact composed of a variable-length coding (VLC) circuit arranged in series with a buffer memory, the output of said memory being sent back towards a rate control circuit 133 for modifying the quantization factor. According to the invention, and as shown in Fig. 3 in which similar circuits are designated by the same references as in Fig. 2, an additional connection 200, intended to allow implementation of the proposed modification of the quantization factor, is provided between the CCS computation circuit 33 and the rate control circuit 133 and also between said circuit 33 and the DCT and quantization module 11 of the coding branch. This connection 200 extends two coding modes of the coding system, namely a so-called open-loop coding mode (without bit-rate control) and a closed-loop coding mode (with bit-rate control). In the open-loop coding mode for example, the quantizer settings are usually fixed. The resulting bit rate of the encoded stream is automatically lower for simple scenes (less residue needs to be coded) than for complex scenes (higher residue needs to be coded). Coding cases as described above, where the sequence contains high motion, result in complex scenes that are coded with high bit rates. The bit rates for the high-motion scenes can be reduced by higher quantization, thereby removing spatial details of these scenes that the observer cannot follow due to the motion. The quantization can be controlled by defining a quantization factor, q_ccs, which is a function of CCS and the original fixed quantizer factor, called q_fixed : q_ccs = q_fixed + f(CCS), where f() is a function resulting in positive integers from 0 to (q_max - q_fixed) to increase q_ccs from q_fixed up to an allowed maximum q_max. Examples for f() are f1(CCS) = round(CCS * (q_max - q_fixed) / CCS_max) or f2(CCS) = round((q_max - q_fixed + 1)^(CCS/CCS_max) - 1), for CCS = 0 to CCS_max. In closed-loop coding, the quantization factor, q_adapt, is adapted in order to achieve a desired predefined bit rate. Bit-rate controllers that are required for closed-loop coding work basically with bit budgets and choose q_adapt based on the available budget. This means that the quantization factor q_ccs as described for open-loop coding can be used, and only q_fixed has to be replaced with q_adapt. Then, compared to an unmodified rate controller, the bit budget will increase with higher CCS, and these additional bits are automatically spent on frames with lower CCS, because the q_adapt value will decrease due to the increased bit budget.
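As a worked illustration of the two example mappings f1 and f2 above, the sketch below (a simplified Python rendering; q_max = 31 is merely an assumed ceiling, and ccs_max = 4 corresponds to the five-level example mentioned earlier) computes q_ccs from a base quantizer and a CCS level.

```python
def q_ccs(q_fixed: int, ccs: int, q_max: int = 31, ccs_max: int = 4,
          mapping: str = "linear") -> int:
    """Raise the quantizer together with the content-change strength (CCS).

    q_fixed: base quantizer (q_adapt when a bit-rate controller is used)
    ccs:     CCS level of the frame, 0..ccs_max
    q_max:   highest allowed quantizer value
    """
    if mapping == "linear":   # f1(CCS) = round(CCS * (q_max - q_fixed) / ccs_max)
        f = round(ccs * (q_max - q_fixed) / ccs_max)
    else:                     # f2(CCS) = round((q_max - q_fixed + 1) ** (CCS / ccs_max) - 1)
        f = round((q_max - q_fixed + 1) ** (ccs / ccs_max) - 1)
    return min(q_fixed + f, q_max)

# CCS level 0 leaves the quantizer unchanged; the maximum level drives it to q_max.
print(q_ccs(8, 0), q_ccs(8, 2), q_ccs(8, 4))   # -> 8 20 31 (linear mapping)
print(q_ccs(8, 2, mapping="exponential"))      # -> 12 (exponential mapping)
```

In the closed-loop case, q_fixed would simply be replaced by the rate controller's q_adapt, as explained above.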

Claims

CLAIMS :
1. A video encoding method provided for encoding an input image sequence consisting of successive groups of frames themselves subdivided into blocks, said method comprising the steps of : - preprocessing said sequence on the basis of a so-called content-change strength
(CCS) computed for each frame by applying some predetermined rules ; - estimating a motion vector for each block of the frames ; - generating a predicted frame using said motion vectors respectively associated to the blocks of the current frame ; - applying to a difference signal between the current frame and the last predicted frame a transformation sub-step producing a plurality of coefficients and followed by a quantization sub-step of said coefficients ; - coding said quantized coefficients ; wherein said CCS is used in said quantization sub-step for modifying the quantization factor used in said quantization sub-step, said CCS and the quantization factor increasing or decreasing simultaneously.
2. A video encoding device provided for encoding an input image sequence consisting of successive groups of frames themselves subdivided into blocks, said device comprising the following means : - preprocessing means, provided for preprocessing said sequence on the basis of a so- called content-change strength (CCS) computed for each frame by applying some predetermined rules ; - estimating means, provided for estimating a motion vector for each block of the frames ; - generating means, provided for generating a predicted frame on the basis of said motion vectors respectively associated to the blocks of the current frame ; - transforming and quantizing means, provided for applying to a difference signal between the current frame and the last predicted frame a transformation producing a plurality of coefficients and followed by a quantization of said coefficients ; - coding means, provided for encoding said quantized coefficients ; wherein an output of said preprocessing means is received on an input of said transformation and quantization means for modifying on the basis of said CCS the quantization factor used in said quantization sub-step, said CCS and the quantization factor increasing or decreasing simultaneously.
PCT/IB2004/003618 2003-11-07 2004-11-01 Video encoding method and device WO2005045764A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP04798778A EP1683110A1 (en) 2003-11-07 2004-11-01 Video encoding method and device
US10/578,072 US20070025440A1 (en) 2003-11-07 2004-11-01 Video encoding method and device
JP2006537481A JP2007515097A (en) 2003-11-07 2004-11-01 Video encoding method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP03300205 2003-11-07
EP03300205.6 2003-11-07

Publications (1)

Publication Number Publication Date
WO2005045764A1 true WO2005045764A1 (en) 2005-05-19

Family

ID=34560247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2004/003618 WO2005045764A1 (en) 2003-11-07 2004-11-01 Video encoding method and device

Country Status (6)

Country Link
US (1) US20070025440A1 (en)
EP (1) EP1683110A1 (en)
JP (1) JP2007515097A (en)
KR (1) KR20060118459A (en)
CN (1) CN1894725A (en)
WO (1) WO2005045764A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108737838A (en) * 2017-04-19 2018-11-02 北京金山云网络技术有限公司 A kind of method for video coding and device

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102014111435B4 (en) * 2014-08-11 2024-09-26 Infineon Technologies Ag Chip arrangement

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7031388B2 (en) * 2002-05-06 2006-04-18 Koninklijke Philips Electronics N.V. System for and method of sharpness enhancement for coded digital video

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
FAN J ET AL: "ADAPTIVE MOTION-COMPENSATED VIDEO CODING SCHEME TOWARDS CONTENT-BASED BIT RATE ALLOCATION", JOURNAL OF ELECTRONIC IMAGING, SPIE + IS&T, US, vol. 9, no. 4, October 2000 (2000-10-01), pages 521 - 533, XP001086815, ISSN: 1017-9909 *
LEE J ET AL: "ADAPTIVE FRAME TYPE SELECTION FOR LOW BIT-RATE VIDEO CODING", SPIE VISUAL COMMUNICATIONS AND IMAGE PROCESSING, XX, XX, vol. 2308, no. PART 2, 25 September 1994 (1994-09-25), pages 1411 - 1422, XP002035257 *
LEE J ET AL: "Motion compensated subband coding with scene adaptivity", PROCEEDINGS OF THE SPIE - THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING USA, vol. 2186, February 1994 (1994-02-01), pages 278 - 288, XP002313730, ISSN: 0277-786X *
ZABIH R ET AL: "A FEATURE-BASED ALGORITHM FOR DETECTING AND CLASSIFYING SCENE BREAKS", PROCEEDINGS OF ACM MULTIMEDIA '95 SAN FRANCISCO, NOV. 5 - 9, 1995, NEW YORK, ACM, US, 5 November 1995 (1995-11-05), pages 189 - 200, XP000599032, ISBN: 0-201-87774-0 *


Also Published As

Publication number Publication date
CN1894725A (en) 2007-01-10
KR20060118459A (en) 2006-11-23
JP2007515097A (en) 2007-06-07
US20070025440A1 (en) 2007-02-01
EP1683110A1 (en) 2006-07-26

Similar Documents

Publication Publication Date Title
US9271004B2 (en) Method and system for parallel processing video data
EP2847993B1 (en) Motion sensor assisted rate control for video encoding
EP0883963B1 (en) Dynamic coding rate control in a block-based video coding system
EP2250813B1 (en) Method and apparatus for predictive frame selection supporting enhanced efficiency and subjective quality
EP1068736B1 (en) Method and apparatus for performing adaptive encoding rate control of a video information stream including 3:2 pull-down video information
US6628713B1 (en) Method and device for data encoding and method for data transmission
CN1175859A (en) Rate control for stereoscopic digital video encoding
WO2000018137A1 (en) Frame-level rate control for video compression
WO2008019525A1 (en) Method and apparatus for adapting a default encoding of a digital video signal during a scene change period
WO1999063760A1 (en) Sequence adaptive bit allocation for pictures encoding
US20040234142A1 (en) Apparatus for constant quality rate control in video compression and target bit allocator thereof
JPH09154143A (en) Video data compression method
EP1077000B1 (en) Conditional masking for video encoder
US7054364B2 (en) Moving picture encoding apparatus and moving picture encoding method
CN100521794C (en) Device for encoding a video data stream
US20050100231A1 (en) Pseudo-frames for MPEG-2 encoding
JPH09284770A (en) Image coding device and method
US20070025440A1 (en) Video encoding method and device
JP3428332B2 (en) Image encoding method and apparatus, and image transmission method
JPH08307860A (en) Scene re-encoder
WO1996033573A1 (en) Device and method for coding moving image
KR100778473B1 (en) Bit rate control method
JPH0646411A (en) Picture coder
US20070127565A1 (en) Video encoding method and device
McVeigh et al. Comparative study of partial closed-loop versus open-loop motion estimation for coding of HDTV

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 200480032612.3

Country of ref document: CN

AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 2004798778

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2007025440

Country of ref document: US

Ref document number: 2006537481

Country of ref document: JP

Ref document number: 10578072

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 1020067008803

Country of ref document: KR

WWE Wipo information: entry into national phase

Ref document number: 1569/CHENP/2006

Country of ref document: IN

WWP Wipo information: published in national office

Ref document number: 2004798778

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 1020067008803

Country of ref document: KR

WWP Wipo information: published in national office

Ref document number: 10578072

Country of ref document: US

WWW Wipo information: withdrawn in national office

Ref document number: 2004798778

Country of ref document: EP