USRE35158E - Apparatus for adaptive inter-frame predictive encoding of video signal - Google Patents
Apparatus for adaptive inter-frame predictive encoding of video signal
- Publication number: USRE35158E (application US 07/997,238)
- Authority: US (United States)
- Prior art keywords: frame, prediction, signal, frames, dependent
- Legal status: Expired - Lifetime (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals:
- H04N19/90—using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/503—using predictive coding involving temporal prediction
- H04N19/573—Motion compensation with multiple frame prediction using two or more reference frames in a given prediction direction
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/587—using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
- H04N19/61—using transform coding in combination with predictive coding
- H04N19/82—Details of filtering operations specially adapted for video compression, involving filtering within a prediction loop
Definitions
- the present invention relates to an apparatus for encoding a video signal to produce an encoded signal for transmission or recording, with the encoded signal containing substantially lower amounts of data than the original video signal.
- the invention relates to an apparatus for inter-frame predictive encoding of a video signal, which is especially applicable to television conferencing systems or to moving-image video telephone systems.
- inter-frame predictive encoding is executed with the independent frames being used as reference frames.
- in adaptive predictive encoding, such inter-frame predictive encoding is executed only when it is appropriate, that is to say only when there is no great difference between successive frames.
- otherwise, intra-frame encoding is executed.
- Examples of such inter-frame encoding are described in the prior art, for example in "15/30 Mb/s Motion-Compensated Inter-frame, Inter-field and Intrafield Adaptive Prediction Coding" (Oct. 1985), Bulletin of the Society of Television Engineers (Japan), Vol. 39, No. 10. With that method, a television signal is encoded at a comparatively high data rate. Motion-compensated inter-frame prediction, intra-field prediction, and inter-field (i.e. intra-frame) prediction are utilized.
- FIGS. 1A and 1B are simple conceptual diagrams to respectively illustrate the basic features of the aforementioned inter-frame predictive encoding methods and the method used in the aforementioned U.S. patent application by the assignee of the present invention.
- a succession of frames of a video signal are indicated as rectangles numbered 1, 2, . . .
- the shaded rectangles denote independent frames (i.e. independently encoded frames that are utilized as reference frames) which occur with a fixed period of four frame intervals, i.e. inter-frame predictive encoding is assumed to be reset once in every four frames.
- independent frame No. 1 is used to derive prediction error values for each of frames 2, 3, and 4, which are encoded and transmitted as data representing these frames.
- Such a prior art prediction method has a basic disadvantage. Specifically, only the correlation between successive frames of the video signal along the forward direction of the time axis is utilized. However, in fact there is generally also strong correlation between successive frames in the opposite direction.
- the operation of the aforementioned related patent application by the assignee of the present invention utilizes that fact, as illustrated in FIG. 1B.
- each frame occurring between two successive independent frames is subjected to inter-frame predictive encoding based on these two independent frames, as indicated by the arrows. For example, inter-frame predictive encoding of frame 2 is executed based on the independent frames 1 and 5. This is also true for frames 3 and 4.
- a first prediction signal for frame 2 is derived based on frame 1 as a reference frame
- a second prediction signal for frame 2 is derived based on frame 5 as a reference frame.
- These two prediction signals are then multiplied by respective weighting factors and combined to obtain a final prediction signal for frame 2, with greater weight being given to the first prediction signal (since frame 2 will have greater correlation with frame 1 than with frame 5).
- Prediction signals for the other dependent frames are similarly derived, and differences between the prediction signal and a signal of a current frame are derived as prediction errors, then encoded and transmitted.
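The bidirectional weighted prediction described above can be sketched in a few lines of Python. This is an illustration only, not part of the patent disclosure; the function and variable names are hypothetical:

```python
# Illustrative sketch of weighted bidirectional prediction: a dependent
# frame's block is predicted from the correspondingly positioned blocks of
# the preceding and succeeding independent frames, and the pixel
# differences form the prediction error values that get encoded.

def predict_and_error(current, preceding, succeeding, w):
    """current, preceding, succeeding: lists of pixel values (one block);
    w: weight given to the preceding independent frame (0 < w < 1)."""
    prediction = [w * p + (1.0 - w) * s
                  for p, s in zip(preceding, succeeding)]
    errors = [c - x for c, x in zip(current, prediction)]
    return prediction, errors

# Frame 2 of FIG. 1B lies closest to frame 1, so the preceding frame
# gets the larger weight, e.g. w = 0.75 for a 4-frame period.
pred, err = predict_and_error([10, 20], [8, 16], [16, 32], 0.75)
```

When the current block really is a temporal interpolation of its two references, as in this toy input, the prediction errors are all zero and encode very cheaply.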
- in FIGS. 2A and 2C (and also in FIGS. 2B and 2D, described hereinafter), respective numbered rectangles represent successive frames of a video signal.
- the frames indicated by the # symbol represent independently encoded frames.
- frames 1 and 5 are independent frames which occur with a fixed period of four frame intervals, i.e. inter-frame predictive encoding is reset once in every four successive frame intervals in these examples.
- the white rectangles denote frames whose image contents are mutually comparatively similar.
- the dark rectangles denote frames whose image contents are mutually comparatively similar, but are considerably different from the contents of the "white rectangle" frames.
- frame 1 is an independent frame
- frame 2 is a dependent frame whose contents are encoded by inter-frame predictive encoding using frame 1 as a reference frame.
- There is a significant change e.g. resulting from a "scene change", or resulting from a new portion of the background of the image being uncovered, for example due to the movement of a person or object within the scene that is being televised
- so that it becomes impossible to execute inter-frame predictive encoding of frame 3 by using frame 2 as a reference frame.
- this is detected, and results in frame 3 being independently encoded.
- Frame 3 is then used as a reference frame for inter-frame predictive encoding of frame 4.
- Another basic disadvantage of such a prior art method of adaptive inter-frame predictive encoding occurs when the encoded output data are to be recorded (e.g. by a video tape recorder) and subsequently played back and decoded to recover the original video signal.
- in a reverse playback operation of the recorded encoded data, in which playback is executed with data being obtained in the reverse sequence along the time axis with respect to normal playback operation, it would be very difficult to apply such a prior art method, due to the fact that predictive encoding is always based upon a preceding frame. That is to say, prediction values are not contained in the playback signal (in the case of reverse playback operation) in the correct sequence for use in decoding the playback data.
- the present invention provides an adaptive encoding apparatus for encoding an input video signal comprising a sequence of frames each comprising successive data values, the apparatus comprising:
- encoder means for encoding successive blocks of a frame of the video signal, each of the blocks comprising a fixed-size array of the data values;
- N is a fixed integer of value greater than one, every Nth frame of the sequence being independently encoded as an independent frame;
- adaptive prediction means for executing adaptive prediction processing, as a dependent frame, of each frame occurring between a preceding one and a succeeding one of the independent frames in the frame sequence, by deriving for the data values of each block of a dependent frame respective prediction error values based upon an optimum prediction signal selected from a plurality of prediction signals derived using a plurality of combinations of the preceding and succeeding independent frames.
- the adaptive prediction means of such an adaptive prediction encoding apparatus preferably comprises:
- predictive mode selection means for selecting, for each of the blocks, one out of four prediction modes in which the first, second and third prediction signals and the non-prediction signal are respectively used in deriving prediction error values for respective data values of the block, to be sent to the encoder means and encoded thereby, the selection being based upon judgement of the error values, the predictive mode selection means further supplying to the encoder means, to be encoded thereby, predictive mode data indicating predictive modes which have been selected for respective ones of the blocks.
- FIGS. 1A and 1B are conceptual diagrams for describing inter-frame predictive operation using one direction and both directions of the time axis, respectively;
- FIGS. 2A, 2C are conceptual diagrams for describing a prior art method of adaptive inter-frame predictive encoding
- FIGS. 2B, 2D are for describing a method of adaptive inter-frame predictive encoding according to the present invention
- FIG. 3 is a general block diagram of an embodiment of an encoding apparatus for adaptive inter-frame predictive encoding according to the present invention
- FIG. 4 is a block diagram of an adaptive prediction section of the apparatus of FIG. 3.
- FIG. 5 is a timing diagram for assistance in describing the operations of the apparatus of FIG. 3.
- FIG. 6 is a block diagram of a decoding apparatus for decoding signals in accordance with the present invention.
- FIG. 7 is a schematic diagram of the adaptive prediction section of the decoding apparatus of FIG. 6.
- FIGS. 2B and 2D illustrate the manner in which the frame sequences of FIGS. 2A and 2C (respectively described hereinabove) are handled by the method of adaptive inter-frame predictive encoding of the present invention, as compared with a prior art method of adaptive inter-frame predictive encoding. In the case of FIG. 2B, in which there is a scene change between frames 2 and 3, so that the preceding independent frame 1 cannot be used for inter-frame predictive encoding of frames 3 and 4, use is made of the correlation between the succeeding independent frame 5 and the dependent frames 3 and 4. That is to say, only the succeeding independent frame 5 is used for inter-frame predictive processing of the dependent frames 3 and 4. This makes it unnecessary to independently encode frame 3.
- the basic operation of an adaptive predictive encoding apparatus is as follows.
- the encoder processes each frame of an input video signal in units of blocks (where each block will for example consist of an 8×8 array of pixels of the frame), and the apparatus determines for each block of a frame which of the following correlation conditions exists between that block and the correspondingly positioned blocks of the preceding independent frame and the succeeding independent frame:
- Optimum prediction will be achieved by processing using a combination of the corresponding blocks (i.e. correspondingly positioned within the frame) of both the preceding and succeeding independent frames.
- the decision as to which of the above four options is optimal is based upon a total of respective squared values of difference between each data value representing a pixel of the block and the corresponding data values of the corresponding blocks in the preceding and succeeding frames.
- Processing of the block is then executed; that is to say, either a set of inter-frame prediction error values with respect to the pixels of the corresponding block of the preceding and/or succeeding independent frame, or the data values for the pixels of the block in question (slightly modified as described hereinafter), are then encoded for transmission or recording.
- prediction mode data which indicates which of the above four options has been selected for that block is encoded with the video data, and transmitted or recorded.
- decoding is executed, utilizing the prediction mode data to control the decoding operation.
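The four-way block decision just outlined can be illustrated as follows. This is a Python sketch; the names, the DC handling, and the `intra_bias` offset (playing the role of the compensation value added in the apparatus) are illustrative assumptions, not quoted from the patent:

```python
# Hedged sketch of the block-mode decision: accumulate the squared
# prediction errors for the four options (both reference frames combined,
# preceding only, succeeding only, no inter-frame prediction) and select
# the option with the smallest total.

def select_mode(current, preceding, succeeding, w, intra_bias=0.0):
    def ssd(errors):
        return sum(e * e for e in errors)

    both = ssd([c - (w * p + (1 - w) * s)
                for c, p, s in zip(current, preceding, succeeding)])
    prec = ssd([c - p for c, p in zip(current, preceding)])
    succ = ssd([c - s for c, s in zip(current, succeeding)])
    dc = sum(current) / len(current)       # DC component of the block
    intra = ssd([c - dc for c in current]) + intra_bias
    totals = {1: both, 2: prec, 3: succ, 4: intra}
    return min(totals, key=totals.get)     # prediction mode number 1..4

# A block that matches its preceding reference exactly selects mode 2:
mode = select_mode([5, 6, 7], [5, 6, 7], [9, 9, 9], 0.5)
```

The `intra_bias` term mirrors the idea, described later for adder 37, of penalizing the no-prediction option so that it is not chosen too readily after DC removal.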
- FIG. 3 is a general block diagram of an embodiment of an inter-frame adaptive predictive encoding apparatus according to the present invention.
- a frame memory 1a receives a (moving picture) digital video signal from an input terminal 1 as successive data values, consisting of luminance (Y) values for respective pixels, as well as chrominance (R-Y) and (B-Y), i.e. color difference, values. Successive frames of the video signal are stored in the frame memory 1a. Successive blocks of a frame that is currently held in the frame memory 1a are read out in a predetermined sequence, each of the blocks consisting for example of an 8×8 element array of luminance (Y) or chrominance, i.e. color difference (B-Y) or (R-Y), values.
- Each block of luminance data values directly corresponds to a physical (display) size of 8×8 pixels.
- each 8×8 block of chrominance values will correspond to a larger physical area than 8×8 pixels.
- each 8×8 block of color difference values will correspond (in display size) to a 16×16 macro block of luminance values consisting of four 8×8 blocks.
- the values of each block are successively read out in a predetermined sequence.
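The luminance/chrominance block correspondence described above can be illustrated as follows, assuming 2:1 chrominance subsampling in each direction (which is what makes an 8×8 color-difference block cover a 16×16 luminance area); the function name is hypothetical:

```python
# Illustrative mapping: an 8x8 block of color-difference values covers the
# same display area as a 16x16 macro block of luminance values, i.e. four
# 8x8 luminance blocks (assumed 2:1 subsampling horizontally/vertically).

def luma_blocks_for_chroma_block(cx, cy):
    """Return (x, y) indices of the four 8x8 luminance blocks covered by
    the 8x8 chrominance block at block coordinates (cx, cy)."""
    return [(2 * cx + dx, 2 * cy + dy) for dy in (0, 1) for dx in (0, 1)]

blocks = luma_blocks_for_chroma_block(1, 0)
```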
- the output data from the frame memory 1a is supplied to a movable contact of a changeover switch 2.
- the "a" fixed contact of the changeover switch 2 is connected to the "a" fixed contact of a changeover switch 5, while the "b" fixed contact of the changeover switch 2 is connected to the input of a (N-1) frame memory 3.
- the (N-1) frame memory 3 is a memory having a capacity for storing up to (N-1) successively inputted frames, where N is a fixed integer, and is used to produce a delay of N frame intervals, i.e. a frame which is written into that memory during one frame interval is subsequently read out from the memory in the Nth frame interval to occur thereafter (the fourth, in this embodiment with N = 4).
- the output of the (N-1) frame memory 3 is supplied to an adaptive prediction section 4, while the output of the adaptive prediction section 4 is supplied to the "b" fixed contact of the changeover switch 5.
- the movable contact of the changeover switch 5 is connected to the input of an orthogonal transform section 6, whose output is supplied to a quantizer 7.
- the output from the quantizer 7 is supplied to a variable-length encoder section 8 and also to a dequantizer 10.
- the output from the variable-length encoder section 8 is applied to an output terminal 9.
- the output from the dequantizer 10 is supplied to an inverse orthogonal transform section 11, whose output is supplied to a (succeeding) frame memory 12.
- the output from the (succeeding) frame memory 12 is supplied to a (preceding) frame memory 13 and also to a second input of the adaptive prediction section 4.
- the output of the (preceding) frame memory 13 is applied to a third input of the adaptive prediction section 4.
- An output from the adaptive prediction section 4, consisting of the aforementioned prediction mode data, is supplied to a second input of the variable-length encoder section 8.
- a synchronizing signal separator circuit 14 receives the input video signal and separates the sync signal components thereof to derive synchronizing signals which are supplied to a control signal generating circuit 15.
- the control signal generating circuit 15 thereby generates various control and timing signals for controlling switching operation of the changeover switches 2 and 5, and memory read and write operations of the frame memory 1a, (N-1) frame memory 3, (succeeding) frame memory 12 and (preceding) frame memory 13.
- a weighting value generating circuit 16 receives a timing signal from the control signal generating circuit 15, and generates successive pairs of weighting values W and (1-W) which vary in value on successive frames as described hereinafter. These pairs of weighting values are supplied to the adaptive prediction section 4.
- the switching operation of the changeover switch 5 is linked to that of the changeover switch 2, and when both of these are set to the respective "a" terminals, the signal of an independent frame is directly inputted to the orthogonal transform section 6, to be directly transformed and encoded.
- the output signal from the changeover switch 5 thus consists of successive data values of successive blocks of an independent frame, during each interval in which data values of an independent frame are being read out from the frame memory 1a, with switches 2 and 5 set to their "a" positions.
- the output signal from the changeover switch 5 consists of either successive prediction error values for a block of a dependent frame, or data values (which may have been modified by intra-frame processing) of a block of a dependent frame.
- the Y (luminance) and (R-Y), (B-Y) (chrominance) values of the output signal from the changeover switch 5 are converted by the orthogonal transform section 6 to coefficient component values by an orthogonal transform operation, such as the discrete cosine transform (DCT), in units of blocks.
- the resultant output signal from the orthogonal transform section 6 is then quantized using steps of appropriate size, by the quantizer 7. Since the distribution of the resultant quantized signal is close to zero amplitude, encoding efficiency is further increased by encoding the quantized signal by a variable-length encoding technique, such as Huffman encoding.
- in the variable-length encoder section 8, the aforementioned prediction mode data values supplied from the adaptive prediction section 4 are also encoded by the variable-length encoding technique.
- the resultant variable-length data are supplied to an output terminal 9, to be transmitted to a corresponding decoding apparatus, or to be recorded and subsequently played back and supplied to a corresponding decoding apparatus.
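The transform-and-quantize stage of this chain can be sketched as follows: a straightforward 8×8 DCT-II with uniform quantization, standing in for the orthogonal transform section 6 and quantizer 7. This is an illustration only; the quantization step and the omitted variable-length (e.g. Huffman) stage are not taken from the patent:

```python
import math

# Rough sketch of the transform/quantization stage: an 8x8 block is
# converted to DCT coefficients and uniformly quantized. The resulting
# values cluster near zero, which is what makes the subsequent
# variable-length encoding efficient.

def dct_2d(block):
    n = len(block)
    def c(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize(coeffs, step):
    return [[round(v / step) for v in row] for row in coeffs]

flat = [[100] * 8 for _ in range(8)]   # a flat block: only the DC term survives
q = quantize(dct_2d(flat), 10)
```

For this flat input block every AC coefficient quantizes to zero, leaving a single nonzero DC value to be variable-length encoded.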
- FIG. 4 is a general block diagram of the adaptive prediction section 4 of FIG. 3.
- the data read out from the (preceding) frame memory 13 are applied, as a preceding frame signal, to an input terminal 41 and hence to one input of a subtractor 20.
- the output from the (succeeding) frame memory 12 is applied, as a succeeding frame signal, to an input terminal 40 and hence to one input of a subtractor 22.
- the output from the adder 34 is applied to one input of a prediction signal subtractor 21.
- the contents of the frame that is currently being read out from the (N-1) frame memory 3 (that frame being referred to in the following as the current frame) is applied as a current frame signal to an input terminal 42 and hence to one input of a subtractor 23.
- the current frame signal is also supplied to the respective other inputs of the subtractor 20, prediction signal subtractor 21 and subtractor 22.
- a fixed data value is applied to the other input of the subtractor 23.
- the value of the DC component of the signal of the current frame is derived by a DC level detection circuit 38, and applied to one input of a subtractor 39.
- the current frame signal is applied to the other input of the subtractor 39, to have the DC component subtracted therefrom. This subtraction of the DC component is necessary in order to prevent excessively high output values from being produced from a squaring circuit 27, described hereinafter.
- the respective outputs from the subtractors 21, 20, and 22 (these outputs being referred to in the following as the first, second, and third prediction signals), and the output from subtractor 23 (referred to in the following as a non-prediction signal), are applied to corresponding inputs of a 1-block delay circuit 43, which subjects each of these signals to a delay which is equal to the period of one block (i.e. corresponding to 64 pixels, in this example).
- the delayed outputs from the 1-block delay circuit 43 are applied to respective fixed contacts of a prediction mode selector switch 45, whose movable contact is coupled to an output terminal 46.
- the first, second and third prediction signals from the subtractors 21, 20 and 22, and the non-prediction signal from subtractor 39, are also respectively applied to inputs of squaring circuits 25, 24, 26 and 27. Each of these thereby produces the square of each (prediction error) data value that is inputted thereto, and these squared error values produced from circuits 24 to 27 are respectively supplied to inputs of additive accumulator circuits 28 to 31, each of which functions to obtain the sum of the squared error values of respective pixels of one block at a time. That is to say, when the total of the squared error values for one block has been computed by one of these accumulator circuits, the result is outputted therefrom, the contents are reset to zero, and computation of the squared error value total for the next block begins.
- the output from the cumulative adder 28 is supplied directly to a first input terminal of a minimum value selector circuit 32.
- the output from the cumulative adder 29 is supplied via a subtractor 36, in which a predetermined fixed compensation value is subtracted therefrom, to a second input terminal of the minimum value selector circuit 32.
- the output from the cumulative adder 30 is supplied directly to a third input terminal of the minimum value selector circuit 32.
- the output from the cumulative adder 31 is supplied via an adder 37, in which a predetermined fixed compensation value is added thereto, to a fourth input terminal of the minimum value selector circuit 32.
- the minimum value selector circuit 32 judges which of these is lowest in value and produces an output data signal indicative of that value. That output data signal serves as prediction mode data, i.e. is used to determine which mode of operation will provide optimum encoding accuracy, to thereby determine which of Option 1 to Option 4 described hereinabove is applicable to the block for which judgement of the accumulated total error-squared values has been made.
- That prediction mode information is then applied to control the setting of the prediction mode selector switch 45, to determine which of the delayed outputs from the 1-block delay circuit 43 will be selected to be transferred to output terminal 46, and hence to the "b" terminal of the changeover switch 5 of FIG. 3.
- the setting of the prediction mode selector switch 45 is controlled by the prediction mode data output from the minimum value selector circuit 32 such that the delayed prediction error output from the prediction signal subtractor 21 is selected, if that output has resulted in the smallest value of accumulated squared error value for the block in question (representing the case of Option 1 above being selected). This will be referred to as mode 1.
- the delayed output from the subtractor 20 will be selected by the prediction mode selector switch 45 for the case of Option 2 above being selected (this being referred to the following as mode 2)
- the delayed output from the subtractor 22 will be selected by the prediction mode selector switch 45 for the case of Option 3 above being selected (this being referred to in the following as mode 3)
- the delayed output from the subtractor 23 will be selected by the prediction mode selector switch 45 for the case of Option 4 above being selected (this being the case in which no inter-frame prediction is executed for the block in question, and referred to in the following as mode 4).
- Option 1 represents bi-directional linear prediction operation, with W being a maximum for the first dependent frame following an independent frame and reaching a minimum value for a dependent frame which immediately precedes an independent frame.
- the weighting value W is defined as:
- mc denotes the number of the current frame in the sequence of frames
- mp denotes the number of the preceding independent frame of that current frame
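A weighting schedule consistent with this description (largest W for the first dependent frame after an independent frame, smallest for the dependent frame just before the next one) can be sketched as follows; the exact linear form below is an assumption, not quoted from the patent:

```python
# Assumed linear weighting schedule: W falls from (N-1)/N to 1/N as the
# current frame moves away from its preceding independent frame toward
# the succeeding one.

def weight(mc, mp, n):
    """mc: number of the current frame in the frame sequence;
    mp: number of its preceding independent frame;
    n: independent-frame period."""
    return (mp + n - mc) / n

ws = [weight(m, 1, 4) for m in (2, 3, 4)]   # dependent frames between 1 and 5
```

With N = 4 the dependent frames 2, 3 and 4 receive weights 3/4, 1/2 and 1/4 toward the preceding independent frame, matching the stated maximum/minimum behavior.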
- the value X of a data value (corresponding to one pixel) of the output signal from the adder 34, that signal being referred to in the following as a prediction signal, is obtained as:
- X = W·Vmp + (1-W)·Vms
- [where Vms is the corresponding value of the succeeding independent frame signal from input terminal 40 and Vmp is the corresponding value of the preceding independent frame signal from input terminal 41].
- Each value X of the prediction signal produced from the adder 34 is subtracted from a corresponding value of the current frame signal, in the prediction signal subtractor 21, and the result is supplied as a preceding/succeeding frame prediction error value to the squaring circuit 25.
- Each value of the preceding frame signal is subtracted from a corresponding value of the current frame signal, in the subtractor 20, and the result is supplied as a preceding frame prediction error value to the squaring circuit 24.
- the fixed value that is subtracted from the current frame signal by the subtractor 23 can be established in various ways, for example as being equal to 50% of the maximum white level of the video signal, when a luminance (Y) value is being processed, and equal to zero when a color difference (B-Y) or (R-Y) value is being processed.
- the DC component of a spatially adjacent block within the same frame could be utilized instead of that fixed value. Whichever type of value is utilized, inter-frame prediction is not executed for a block, in the case of mode 4 being selected, and only intra-frame processing is executed for the block.
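The mode-4 fixed value described above can be illustrated as follows, assuming 8-bit samples so that 50% of the maximum white level is 127; the names are hypothetical:

```python
# Illustrative choice of the fixed value subtracted in mode 4 (no
# inter-frame prediction): half the maximum white level for luminance (Y)
# values, and zero for (B-Y) / (R-Y) color-difference values.

def mode4_offset(component, white_level=255):
    if component == "Y":
        return white_level // 2   # 50% of the maximum white level
    return 0                      # (B-Y) or (R-Y) color difference

residual = [v - mode4_offset("Y") for v in (120, 130, 140)]
```

Subtracting this offset recenters the intra-coded samples around zero, which suits the transform and variable-length stages that follow.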
- FIG. 5 is a simple timing diagram for illustrating the basic timing relationships of this embodiment.
- F1 to F11 denote 11 successive frames of the input video signal, with corresponding frame intervals (specifically, intervals in which the respective frames are read out from the frame memory 1a) designated as T1 to T11.
- Each independent frame is designated by a # symbol, i.e. frames F1, F5 and F9. It is assumed that one out of every 4 frames is an independent frame, i.e. that periodic resetting of inter-frame prediction operation occurs with a period of 4 frames.
- the timings of processing operations for frames F2 to F5 will be described.
- the successive blocks of independent frame F1 are transferred through the switches 2 and 5, to be directly encoded, then are processed in the dequantizer 10 and inverse orthogonal transform section 11 to recover the original frame data, and then are written into the (succeeding) frame memory 12.
- Frames F2, F3, and F4 are successively written into the (N-1) frame memory 3.
- the successive blocks of independent frame F5 are transferred through the switches 2 and 5, to be directly encoded, then are processed in the dequantizer 10 and inverse orthogonal transform section 11 to recover the original frame data, and then are written into the (succeeding) frame memory 12 to replace the previous contents of that memory, after writing the contents of the (succeeding) frame memory 12 into the (preceding) frame memory 13 to replace the previous contents thereof.
- frame F6 is written into the frame memory 3, at the same time frame F2 is read out from memory 3, and corresponding prediction signals for frame F2 are outputted from the adaptive prediction section 4 and inputted to delay unit 43 together with the output from subtractor 23.
- the prediction mode output signal from the minimum value selector 32 sets switch 45 to an appropriate selection position, based on the minimum accumulated error-squared value that is inputted to the minimum value selector 32.
- the mode output signal is also transferred to the encoder 8 to be encoded and outputted.
- Frame F7 is written into the frame memory 3, at the same time, frame F3 is read out from memory 3, and processed in the same way as for frame F2, and the prediction mode data for frame F3 is sent to the encoder 8.
- the selected prediction signal for frame F3 (or the output from subtractor 23) is transferred from switch 45 to the orthogonal transform section 6, to be processed, encoded and outputted.
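The frame ordering implied by this timing (each independent frame encoded before the dependent frames it brackets, which is what the (N-1) frame memory 3 delay makes possible) can be sketched as follows; the function name is hypothetical:

```python
# Illustrative coding order for an independent-frame period of n: every
# independent frame is emitted before the dependent frames lying between
# it and the previous independent frame, mirroring the FIG. 5 timing
# (F1, F5, F2, F3, F4, F9, F6, ...).

def coding_order(num_frames, n):
    order = [1]                                   # first independent frame
    start = 1
    while start + n <= num_frames:
        order.append(start + n)                   # next independent frame
        order.extend(range(start + 1, start + n)) # dependents in between
        start += n
    return order

seq = coding_order(9, 4)
```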
- (d) Mode 4 in which inter-frame prediction is not executed. This is selected when there is insufficient correlation between the current block and the corresponding blocks of each of the preceding and succeeding frames. This would be selected, for example, for a block in frame 3 of FIG. 2D. It has been assumed, for simplicity of description, that this applies to all of the blocks of frame 3 of FIG. 2D, so that inter-frame prediction is not applied to any blocks of that frame.
- since the independent frame signal values that are used in deriving the prediction error values are obtained by recovering the original video signal by a decoding operation (in the dequantizer 10 and inverse orthogonal transform section 11), in the same way that decoding is executed in a corresponding decoder apparatus .[.(not shown in the drawings).]. .Iadd.shown in FIG. 6.Iaddend., the various quantization errors etc. that are present in the final decoded data will also be present in the data that are used in deriving the prediction error values. This ensures a greater accuracy of prediction than would be the case if the independent frames of the input video signal data were to be written directly into the memories 12 and then 13.
- evaluation for determining the prediction mode is based upon error-squared values of prediction error values that are obtained directly from the input video signal. Greater accuracy of evaluation would be obtained by using the video signal data of the dependent frames after all of the encoding processing (including transform processing, and quantization) has been executed. However this would require additional circuits for executing the inverse of such encoding, i.e. for the inverse transform processing etc., increasing the circuit scale substantially and making the apparatus more difficult to realize in practical form.
- the DC component of the current frame signal is subtracted from the current frame signal in the subtractor 39, to thereby prevent an excessively high output value being produced by the cumulative adder 31.
- this will tend to produce an excessively high probability that mode 4 will be selected by the minimum value selector circuit 32, i.e. the output from the cumulative adder 31 will tend to have too low a value.
- a compensating offset value B is added to the output from the cumulative adder 31 in the adder 37.
- the .Iaddend.decoding apparatus for decoding the encoded data .Iadd.on line 48 .Iaddend.that are transmitted from such an adaptive predictive encoded apparatus can be implemented very simply, by using the mode prediction data that are contained in the encoded output data.
- variable length decoder 50.Iaddend. After the inverse of the variable-length encoding executed by the variable-length encoder section 8 has been performed .Iadd.in variable length decoder 50.Iaddend., followed by dequantizing .Iadd.in dequantizer 51 .Iaddend.and inverse transform processing .Iadd.in inverse orthogonal transform section 52.Iaddend., each independent frame is transferred successively to a first (.Iadd.succeeding.Iaddend.) frame memory .Iadd.54 .Iaddend.and then .Iadd.to .Iaddend.a second (.Iadd.preceding.Iaddend.) frame memory .Iadd.55 .Iaddend.for use in processing the dependent frames, corresponding to the memories 12 and 13 of FIG. 3.
- switches 61 and 62 correspond to the operation of switches 2 and 5 in the encoder of FIG. 3.
- switches 61 and 62 are in the respective positions "b" shown in FIG. 6 and during processing of the dependent frames, their contacts are switched to their respective "a" positions.
- .Iaddend.Each block of a dependent frame is processed .Iadd.by performing the inverse of the operations performed by the adaptive prediction section 4 of FIG. 3 during the encoding process in adaptive prediction section 53 of the decoder, shown schematically in FIG. 7, .Iaddend.depending upon the .Iadd.position of the movable contact of switch 64 which is controlled by the .Iaddend.associated decoded prediction mode data for that block, as follows:
- decoder apparatus .Iadd.shown in FIGS. 6 and 7 .Iaddend.for receiving an encoded output signal produced by an adaptive predictive encoder apparatus according to the present invention can have a simple configuration, and can, for example, be implemented by slightly modifying an encoder apparatus that is described in the aforementioned related U.S. application by the assignee of the present invention.
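As an illustration of how simply the decoder can use the transmitted prediction mode data, the following Python sketch (not part of the patent; the function name, mode numbering, and weighting value are assumptions for illustration) performs the inverse of the adaptive prediction for one block, corresponding to the selection made by switch 64 of FIG. 7:

```python
import numpy as np

def reconstruct_block(decoded_err, mode, prev_blk, next_blk, w, dc=0.0):
    """Inverse adaptive prediction for one block, as selected by the
    decoded prediction mode data."""
    if mode == 1:                        # weighted two-frame prediction
        return decoded_err + (w * prev_blk + (1.0 - w) * next_blk)
    if mode == 2:                        # preceding independent frame only
        return decoded_err + prev_blk
    if mode == 3:                        # succeeding independent frame only
        return decoded_err + next_blk
    return decoded_err + dc              # mode 4: restore subtracted DC value

# Example: decoded error values of 2.0 for a mode-1 block (assumed numbers)
prev_blk = np.full((8, 8), 100.0)
next_blk = np.full((8, 8), 140.0)
recon = reconstruct_block(np.full((8, 8), 2.0), 1, prev_blk, next_blk, w=0.75)
```

Only additions and a table lookup on the mode data are needed, which is why the decoder can be substantially simpler than the encoder.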
Abstract
An adaptive predictive encoding apparatus for encoding a video signal by utilizing correlation between frames in both the forward and reverse directions of the time axis. A prediction signal for use in deriving prediction error values to be encoded for a frame is selected by an adaptive prediction section, in units of blocks, from a plurality of mutually differently derived prediction signals, in accordance with the degree of correlation of the block with corresponding ones of a specific preceding independently encoded frame and a specific succeeding independently encoded frame. .Iadd.A complementary adaptive decoding apparatus receives the encoded information and reconstructs the video signal in accordance with information supplied to the adaptive decoding apparatus by the encoded signal. .Iaddend.
Description
The present application is a .Iadd.reissue of application No. 514,015 now U.S. Pat. No. 4,982,285, which is a .Iaddend.continuation-in-part of a U.S. patent application (application number 465,747) with filing date Jan. 16, 1990.Iadd., now U.S. Pat. No. 4,985,768.Iaddend..
1. Field of Application
The present invention relates to an apparatus for encoding a video signal to produce an encoded signal for transmission or recording, with the encoded signal containing substantially lower amounts of data than the original video signal. In particular, the invention relates to an apparatus for inter-frame predictive encoding of a video signal, which is especially applicable to television conferencing systems or to moving-image video telephone systems.
2. Prior Art Technology
Various methods have been proposed in the prior art for converting a digital video signal to a signal containing smaller amounts of data, for example in order to reduce the bandwidth requirements of a communications link, or to reduce the storage capacity required for recording the video signal. Such methods are especially applicable to television conferencing or moving-image video telephone systems, and utilize the fact that there is generally a high degree of correlation between successive frames of a video signal, and hence some degree of redundancy if all of the frames are transmitted. One basic method, described for example in U.S. Pat. No. 4,651,207, is to periodically omit one or more frames from being transmitted, and to derive information at the receiving end for interpolating the omitted frames (based on movement components in the transmitted frames). Such a method will provide satisfactory operation only so long as successive frames contain only relatively small amounts of change between one frame and the next. Another basic method known in the prior art is to periodically transmit (i.e. at fixed numbers of frame intervals) frames which are independently encoded, these being referred to in the following as independent frames, while, for each frame occurring between successive independent frames (these being referred to in the following as dependent frames), only amounts of difference between that frame and the preceding independent frame are encoded and transmitted, i.e. inter-frame predictive encoding is executed with the independent frames being used as reference frames. With a more practical form of that method, known as adaptive predictive encoding, such inter-frame predictive encoding is executed only when it is appropriate, that is to say only when there is no great difference between successive frames. When such a large difference is detected, then intra-frame encoding is executed.
Examples of such inter-frame encoding are described in the prior art, for example in "15/30 Mb/s Motion-Compensated Inter-frame, Inter-field and Intrafield Adaptive Prediction Coding" (Oct. 1985), Bulletin of the Society of Television Engineers (Japan), Vol. 39, No. 10. With that method, a television signal is encoded at a comparatively high data rate. Movement-compensated inter-frame prediction, intra-field prediction, and inter-field (i.e. intra-frame) prediction are utilized. Another example is described in "Adaptive Hybrid Transform/Predictive Image Coding" (March 1987), Document D-1115 of the 70th Anniversary National Convention of the Society of Information and Communication Engineers (Japan). With that method, switching is executed between inter-frame prediction of each dependent frame based on a preceding independent frame (which is the normal encoding method), prediction that is based on adjacent blocks of pixels, prediction that is based on the image background, and no prediction (i.e. direct encoding of the original video signal). In the case of the "no-prediction" processing, orthogonal transform intra-frame encoding is executed, while in the case of background prediction, a special type of prediction is utilized which is suitable for a video signal to be used in television conferencing applications. Processing operation is switched between pixel blocks varying in size from 16×16 to 8×8 elements, as block units.
With such prior art adaptive predictive encoding methods, when a dependent frame is to be decoded (at the receiving end of the system, or after playback from a recording medium) the required data are obtained by cumulative superposition of past data relating to that frame, so that all of the related past data are required. It is necessary to use storage media for decoding which will enable random access operation, to obtain such data. This sets a limit to the maximum length of the period of repetition of the independent frames (alternatively stated, the period of resetting of inter-frame predictive encoding operation), since if that period is excessively long then the storage requirements and operation of decoding will become difficult. However the shorter this resetting period is made, the greater will be the amount of data contained in the encoded output signal and hence the lower will become the encoding efficiency. Typically, a period of 4 to 8 frames has been proposed for the prior art methods.
FIGS. 1A and 1B are simple conceptual diagrams to respectively illustrate the basic features of the aforementioned inter-frame predictive encoding methods and the method used in the aforementioned U.S. patent application by the assignee of the present invention. A succession of frames of a video signal are indicated as rectangles numbered 1, 2, . . . The shaded rectangles denote independent frames (i.e. independently encoded frames that are utilized as reference frames) which occur with a fixed period of four frame intervals, i.e. inter-frame predictive encoding is assumed to be reset once in every four frames. As indicated by the arrows, prediction operation is executed only along the forward direction of the time axis, so that difference values between a dependent frame and an independent frame (referred to in the following as prediction error values) are always obtained by using a preceding independent frame as a reference frame. Thus, independent frame No. 1 is used to derive prediction error values for each of frames 2, 3, and 4, which are encoded and transmitted as data representing these frames.
Such a prior art prediction method has a basic disadvantage. Specifically, only the correlation between successive frames of the video signal along the forward direction of the time axis is utilized. However in fact there is generally also strong correlation between successive frames in the opposite direction. The operation of the aforementioned related patent application by the assignee of the present invention utilizes that fact, as illustrated in FIG. 1B. Here, each frame occurring between two successive independent frames is subjected to inter-frame predictive encoding based on these two independent frames, as indicated by the arrows. For example, inter-frame predictive encoding of frame 2 is executed based on the independent frames 1 and 5. This is also true for frames 3 and 4. More precisely, a first prediction signal for frame 2 is derived based on frame 1 as a reference frame, and a second prediction signal for frame 2 is derived based on frame 5 as a reference frame. These two prediction signals are then multiplied by respective weighting factors and combined to obtain a final prediction signal for frame 2, with greater weight being given to the first prediction signal (since frame 2 will have greater correlation with frame 1 than with frame 5). Prediction signals for the other dependent frames are similarly derived, and differences between the prediction signal and the signal of a current frame are derived as prediction errors, then encoded and transmitted. Since in this case correlation between a preceding independent frame and a succeeding independent frame is utilized to obtain prediction signals for each dependent frame, a substantially greater degree of accuracy of prediction is attained than is possible with prior art methods in which only inter-frame correlation along the forward direction of the time axis is utilized.
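The weighted bidirectional prediction described above can be expressed compactly. The following Python sketch is illustrative only and not part of the patent; the function names, the numeric frame values, and the particular weight value W = 0.75 are assumptions:

```python
import numpy as np

def predict_dependent_frame(preceding, succeeding, w):
    """Combine the two reference frames with weights w and (1 - w)."""
    return w * preceding + (1.0 - w) * succeeding

def prediction_error(current, preceding, succeeding, w):
    """Prediction error values to be encoded for a dependent frame."""
    return current - predict_dependent_frame(current * 0 + preceding, succeeding, w)

# Example: frame 2 of FIG. 1B lies closer to frame 1 than to frame 5,
# so the preceding reference receives the greater weight (assumed w = 0.75).
f1 = np.full((8, 8), 100.0)   # block of preceding independent frame 1
f5 = np.full((8, 8), 140.0)   # block of succeeding independent frame 5
f2 = np.full((8, 8), 112.0)   # block of dependent frame 2
err = prediction_error(f2, f1, f5, w=0.75)
```

Here the combined prediction is 0.75 × 100 + 0.25 × 140 = 110, so only the small residual (2.0 per pixel) need be encoded, rather than the full pixel values.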
Prior art methods of adaptive inter-frame predictive encoding can overcome the basic disadvantages described above referring to FIG. 1A, as will be described referring to FIGS. 2A, 2C. In FIGS. 2A and 2C (and also in FIGS. 2B, 2D, described hereinafter) respective numbered rectangles represent successive frames of a video signal. The frames indicated by the # symbol represent independently encoded frames. Of these, frames 1 and 5 are independent frames which occur with a fixed period of four frame intervals, i.e. inter-frame predictive encoding is reset once in every four successive frame intervals in these examples. The white rectangles denote frames whose image contents are mutually comparatively similar. The dark rectangles denote frames whose image contents are mutually comparatively similar, but are considerably different from the contents of the "white rectangle" frames. In FIG. 2A, frame 1 is an independent frame, and frame 2 is a dependent frame whose contents are encoded by inter-frame predictive encoding using frame 1 as a reference frame. There is a significant change (e.g. resulting from a "scene change", or resulting from a new portion of the background of the image being uncovered, for example due to the movement of a person or object within the scene that is being televised) in the video signal contents between frames 2 and 3 of FIG. 2A, so that it becomes impossible to execute inter-frame predictive encoding of frame 3 by using frame 2 as a reference frame. With a prior art method of adaptive inter-frame predictive encoding, this is detected, and results in frame 3 being independently encoded. Frame 3 is then used as a reference frame for inter-frame predictive encoding of frame 4.
Thus, each time that a scene change or other very considerable change occurs in the video signal, which does not coincide with the start of a (periodically occurring) independent frame, independent encoding of an additional frame must be executed instead of inter-frame predictive encoding, thereby resulting in a corresponding increase in the amount of encoded data which must be transmitted or recorded.
In the example of FIG. 2C, with a prior art method of adaptive inter-frame predictive encoding, it is assumed that only one frame (frame 3) is considerably different from the preceding and succeeding frames 1, 2 and 4, 5. This is detected, and frame 3 is then independently encoded instead of being subjected to inter-frame predictive encoding. However since frame 4 is now very different in content from frame 3, it is not possible to apply inter-frame predictive encoding to frame 4, so that it is also necessary to independently encode that frame also. Hence, each time that a single frame occurs which is markedly different from preceding and succeeding frames, it is necessary to independently encode an additional two frames, thereby increasing the amount of encoded data that must be transmitted. Such occurrences of isolated conspicuously different frames such as frame 3 in FIG. 2C can occur, for example, each time that a photographic flash is generated within the images that constitute the video signal.
These factors result in the amount of data that must be encoded and transmitted, in actual practice, being much larger than that for the ideal case in which only the periodically occurring independent frames (i.e. frames 1, 5, etc.) are independently encoded, and in which all other frames are transmitted after inter-frame predictive encoding based on these independent frames.
Another basic disadvantage of such a prior art method of adaptive inter-frame predictive encoding occurs when the encoded output data are to be recorded (e.g. by a video tape recorder) and subsequently played back and decoded to recover the original video signal. Specifically, when reverse playback operation of the recorded encoded data is to be executed, in which playback is executed with data being obtained in the reverse sequence along the time axis with respect to normal playback operation, it would be very difficult to apply such a prior art method, due to the fact that predictive encoding is always based upon a preceding frame. That is to say, prediction values are not contained in the playback signal (in the case of reverse playback operation) in the correct sequence for use in decoding the playback data.
The aforementioned related patent application by the assignee of the present invention overcomes this problem of difficulty of use with reverse playback operation, since each dependent frame is predictively encoded based on both a preceding and a succeeding independent frame. However since the described apparatus is not of adaptive type, i.e. inter-frame predictive encoding is always executed for the dependent frames irrespective of whether or not large image content changes occur between successive ones of the dependent frames, it has the disadvantage of a deterioration of the resultant final display image in the event of frequent occurrences of scene changes, uncovering of the background, or other significant changes in the image content.
With a prior art method of adaptive inter-frame predictive encoding as described above, when scene changes occur, or movement of people or objects within the image conveyed by the video signal occurs, whereby new portions of the background of the image are uncovered, then large amounts of additional encoded data are generated, as a result of an increased number of frames being independently encoded rather than subjected to inter-frame predictive encoding. Various methods have been proposed for executing control such as to suppress the amount of such additional data. However this results in loss of image quality.
It is an objective of the present invention to overcome the disadvantages of the prior art as set out above, by providing an adaptive predictive encoding apparatus whereby an optimum prediction signal for use in deriving prediction error values for a dependent frame is selected, for each of successive blocks of the frame, from a plurality of prediction signals derived by respectively different combinations of signals obtained from a pair of preceding and succeeding independent frames. This selection is based upon the magnitude of the prediction error values that are produced, for the respective data values constituting a block, by these different prediction signals. If there is insufficient correlation between the block and the corresponding blocks of these preceding and succeeding frames, then the block is encoded independently of these other frames, by intra-frame encoding alone.
More specifically, the present invention provides an adaptive encoding apparatus for encoding an input video signal comprising a sequence of frames each comprising successive data values, the apparatus comprising:
encoder means for encoding successive blocks of a frame of the video signal, each of the blocks comprising a fixed-size array of the data values;
means for selecting one of every N of the frames to be transferred directly to the encoder means as an independent frame, to be encoded by intra-frame encoding, where N is a fixed integer of value greater than one; and
adaptive prediction means for executing adaptive prediction processing, as a dependent frame, of each frame occurring between a preceding one and a succeeding one of the independent frames in the frame sequence, by deriving for the data values of each block of a dependent frame respective prediction error values based upon an optimum prediction signal selected from a plurality of prediction signals derived using a plurality of combinations of the preceding and succeeding independent frames.
The adaptive prediction means of such an adaptive prediction encoding apparatus preferably comprises:
means for deriving a first prediction signal based on a combination of data values of the preceding and succeeding independent frames, a second prediction signal derived only from the preceding independent frame, a third prediction signal derived only from the succeeding independent frame, and a non-prediction signal derived only from the dependent frame; and
predictive mode selection means for selecting, for each of the blocks, one out of four prediction modes in which the first, second and third prediction signals and the non-prediction signal are respectively used in deriving predictive error values for respective data values of the block, to be sent to the encoder means and encoded thereby, the selection being based upon judgement of the error values, the predictive mode selection means further supplying to the encoding means, to be encoded thereby, predictive mode data indicating predictive modes which have been selected for respective ones of the blocks.
FIGS. 1A and 1B are conceptual diagrams for describing inter-frame predictive operation using one direction and both directions of the time axis, respectively;
FIGS. 2A, 2C are conceptual diagrams for describing a prior art method of adaptive inter-frame predictive encoding, and FIGS. 2B, 2D are for describing a method of adaptive inter-frame predictive encoding according to the present invention;
FIG. 3 is a general block diagram of an embodiment of an encoding apparatus for adaptive inter-frame predictive encoding according to the present invention;
FIG. 4 is a block diagram of an adaptive prediction section of the apparatus of FIG. 3; and
FIG. 5 is a timing diagram for assistance in describing the operations of the apparatus of FIG. 3.
.Iadd.FIG. 6 is a block diagram of a decoding apparatus for decoding signals in accordance with the present invention.
FIG. 7 is a schematic diagram of the adaptive prediction section of the decoding apparatus of FIG. 6. .Iaddend.
FIGS. 2B and 2D illustrate the manner in which the frame sequences of FIGS. 2A and 2C (respectively described hereinabove) are handled by the method of adaptive inter-frame predictive encoding of the present invention, as compared with a prior art method of adaptive inter-frame predictive encoding. In the case of FIG. 2B, in which there is a scene change between frames 2 and 3, so that the preceding independent frame 1 cannot be used for inter-frame predictive encoding of frames 3 and 4, use is made of the correlation between the succeeding independent frame 5 and the dependent frames 3 and 4. That is to say, only the succeeding independent frame 5 is used for inter-frame predictive processing of the dependent frames 3 and 4. This makes it unnecessary to independently encode frame No. 3, as is required with a prior art method of adaptive inter-frame predictive encoding which uses only the forward direction of the time axis. Thus, the average amount of encoded data that are generated will be reduced, since it is no longer necessary to independently encode a dependent frame (or a large part of a dependent frame) each time that a scene change or other very substantial change in the contents of a frame occurs.
In the case of FIG. 2D, where only frame No. 3 is very different from the preceding and succeeding frames, it is necessary with a prior art method of adaptive inter-frame predictive encoding to independently encode both of frames 3 and 4, as described hereinabove. However with the present invention, use is made of the fact that frame 3 is an isolated occurrence, by using the succeeding independent frame No. 5 for inter-frame predictive encoding of frame No. 3. In this way, it becomes unnecessary to independently encode all of (or a large part of) a dependent frame which succeeds an isolated significantly different dependent frame, as is required for frame 4 in the case of a prior art method of adaptive inter-frame predictive encoding, as described above for FIG. 2C.
The basic operation of an adaptive predictive encoding apparatus according to the present invention is as follows. The encoder processes each frame of an input video signal in units of blocks (where each block will for example consist of an 8×8 array of pixels of the frame), and the apparatus determines for each block of a frame which of the following correlation conditions exists between that block and the correspondingly positioned blocks of the preceding independent frame and the succeeding independent frame:
(Option 1) Optimum prediction will be achieved by processing using a combination of the corresponding blocks (i.e. correspondingly positioned within the frame) of both the preceding and succeeding independent frames.
(Option 2) Optimum prediction will be achieved by processing using only the corresponding block of the preceding independent frame.
(Option 3) Optimum prediction will be achieved by processing using only the corresponding block of the succeeding independent frame.
(Option 4) Optimum operation will be achieved by directly encoding that block (only intra-frame encoding executed).
The decision as to which of the above four options is optimal is based upon a total of respective squared values of difference between each data value representing a pixel of the block and the corresponding data values of the corresponding blocks in the preceding and succeeding frames. Processing of the block is then executed, that is to say either a set of inter-frame prediction error values with respect to the pixels of the corresponding block of the preceding and/or succeeding independent frames, or the data values for the pixels of the block in question, slightly modified as described hereinafter, are then encoded for transmission or recording. In addition, prediction mode data which indicate which of the above four options has been selected for that block are encoded with the video data, and transmitted or recorded. At the receiving end, or upon playback of the recorded encoded data, decoding is executed, utilizing the prediction mode data to control the decoding operation.
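The four-option decision described above can be sketched as follows. This Python model is illustrative only (not part of the patent): the function and argument names are invented, the DC removal stands in for subtractor 39, and the offset stands in for the compensating value B added in adder 37:

```python
import numpy as np

def select_prediction_mode(block, prev_blk, next_blk, w, offset_b=0.0):
    """Return (mode, error_signal) for one 8x8 block.

    Modes: 1 = weighted combination of both references,
           2 = preceding reference only, 3 = succeeding reference only,
           4 = no inter-frame prediction (intra-frame encoding).
    """
    candidates = {
        1: block - (w * prev_blk + (1.0 - w) * next_blk),
        2: block - prev_blk,
        3: block - next_blk,
        4: block - block.mean(),   # DC component removed before squaring
    }
    # accumulated error-squared value for each candidate prediction
    costs = {m: float(np.sum(e * e)) for m, e in candidates.items()}
    costs[4] += offset_b           # bias against over-selecting mode 4
    mode = min(costs, key=costs.get)
    return mode, candidates[mode]

# Example: a block essentially unchanged from the preceding reference
blk = np.arange(64, dtype=np.float64).reshape(8, 8)
mode, err = select_prediction_mode(blk, blk, blk + 40.0, w=0.5, offset_b=100.0)
```

In this example the preceding-frame candidate gives zero accumulated error, so mode 2 is selected and only zero-valued residuals need be encoded.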
FIG. 3 is a general block diagram of an embodiment of an inter-frame adaptive predictive encoding apparatus according to the present invention. A frame memory 1a receives a (moving picture) digital video signal from an input terminal 1 as successive data values, consisting of luminance (Y) values for respective pixels, as well as chrominance (R-Y) and (B-Y), i.e. color difference, values. Successive frames of the video signal are stored in the frame memory 1a. Successive blocks of a frame that is currently held in the frame memory 1a are read out in a predetermined sequence, each of the blocks consisting for example of an 8×8 element array of luminance (Y) or chrominance, i.e. color difference (B-Y) or (R-Y), values. Each block of luminance data values directly corresponds to a physical (display) size of 8×8 pixels. However in general each 8×8 block of chrominance values will correspond to a larger physical area than 8×8 pixels. For example, as set out in the CCITT (International Telecommunication Union) Document #339 (March 1988), "Description of Ref. Model 5 (RM5)", in which a common source input format for coding of color television signals is specified, each 8×8 block of color difference values will correspond (in display size) to a 16×16 macro block of luminance values consisting of four 8×8 blocks.
It should be understood that the description of adaptive prediction operation given herein applies to both processing of luminance and color difference values.
The values of each block are successively read out in a predetermined sequence. The output data from the frame memory 1a are supplied to a movable contact of a changeover switch 2. The "a" fixed contact of the changeover switch 2 is connected to the "a" fixed contact of a changeover switch 5, while the "b" fixed contact of the changeover switch 2 is connected to the input of a (N-1) frame memory 3. The (N-1) frame memory 3 is a memory having a capacity for storing up to (N-1) successively inputted frames, where N is a fixed integer, and is used to produce a delay of N frame intervals, i.e. a frame which is written into that memory during one frame interval is subsequently read out from the memory in the fourth frame interval to occur thereafter (for the case of N = 4, as in this embodiment). The output of the (N-1) frame memory 3 is supplied to an adaptive prediction section 4, while the output of the adaptive prediction section 4 is supplied to the "b" fixed contact of the changeover switch 5. The movable contact of the changeover switch 5 is connected to the input of an orthogonal transform section 6, whose output is supplied to a quantizer 7. The output from the quantizer 7 is supplied to a variable-length encoder section 8 and also to a dequantizer 10. The output from the variable-length encoder section 8 is applied to an output terminal 9. The output from the dequantizer 10 is supplied to an inverse orthogonal transform section 11, whose output is supplied to a (succeeding) frame memory 12. The output from the (succeeding) frame memory 12 is supplied to a (preceding) frame memory 13 and also to a second input of the adaptive prediction section 4. The output of the (preceding) frame memory 13 is applied to a third input of the adaptive prediction section 4. An output from the adaptive prediction section 4, consisting of the aforementioned prediction mode data, is supplied to a second input of the variable-length encoder section 8.
A synchronizing signal separator circuit 14 receives the input video signal and separates the sync signal components thereof to derive synchronizing signals which are supplied to a control signal generating circuit 15. The control signal generating circuit 15 thereby generates various control and timing signals for controlling switching operation of the changeover switches 2 and 5, and memory read and write operations of the frame memory 1a, (N-1) frame memory 3, (succeeding) frame memory 12 and (preceding) frame memory 13.
A weighting value generating circuit 16 receives a timing signal from the control signal generating circuit 15, and generates successive pairs of weighting values W and (1-W) which vary in value on successive frames as described hereinafter. These pairs of weighting values are supplied to the adaptive prediction section 4.
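The text leaves the exact weighting values unspecified, stating only that the pairs W and (1-W) vary on successive frames. One plausible schedule, shown here purely as an assumed illustration in Python, decreases W linearly with distance from the preceding independent frame, so that the nearer reference always receives the greater weight:

```python
def weight_pairs(n):
    """Assumed linear schedule of (W, 1 - W) pairs for the (n - 1)
    dependent frames lying between two independent frames spaced
    n frame intervals apart. W weights the preceding reference."""
    return [((n - k) / n, k / n) for k in range(1, n)]
```

For the four-frame repetition period used in the examples of FIGS. 1B and 2B, this schedule would give weights (0.75, 0.25), (0.5, 0.5) and (0.25, 0.75) for the three dependent frames.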
The switching operation of the changeover switch 5 is linked to that of the changeover switch 2, and when both of these are set to the respective "a" terminals, the signal of an independent frame is directly inputted to the orthogonal transform section 6, to be directly transformed and encoded.
The output signal from the changeover switch 5 thus consists of successive data values of successive blocks of an independent frame, during each interval in which data values of an independent frame are being read out from the frame memory 1a, with switches 2 and 5 set to their "a" positions. When the switches are set to their "b" positions, then the output signal from the changeover switch 5 consists of either successive prediction error values for a block of a dependent frame, or data values (which may have been modified by intra-frame processing) of a block of a dependent frame.
In order to maximize the efficiency of encoding, the Y (luminance) and (R-Y), (B-Y) (chrominance) values of the output signal from the changeover switch 5 are converted by the orthogonal transform section 6 to coefficient component values by an orthogonal transform operation, such as the discrete cosine transform (DCT), in units of blocks. The resultant output signal from the orthogonal transform section 6 is then quantized using steps of appropriate size, by the quantizer 7. Since the distribution of the resultant quantized signal is concentrated close to zero amplitude, encoding efficiency is further increased by encoding the quantized signal by a variable-length encoding technique, such as Huffman encoding. In addition, the aforementioned prediction mode data values supplied from the adaptive prediction section 4 to the variable-length encoder section 8 are also encoded by the variable-length encoding technique. The resultant variable-length data are supplied to an output terminal 9, to be transmitted to a corresponding decoding apparatus, or to be recorded and subsequently played back and supplied to a corresponding decoding apparatus.
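A minimal Python sketch of the transform-and-quantize path (orthogonal transform section 6 and quantizer 7) together with the local decoding loop (dequantizer 10 and inverse orthogonal transform section 11) is given below, using the DCT that the text names as an example; the quantizer step size and all function names are assumptions, and variable-length encoding is omitted:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (the example orthogonal
    transform named for transform section 6)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(
        np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def encode_block(block, step=16.0):
    """Transform an 8x8 block and quantize the coefficients
    (the uniform step size of 16.0 is an assumed value)."""
    c = dct_matrix(block.shape[0])
    return np.round((c @ block @ c.T) / step)

def decode_block(quantized, step=16.0):
    """Local decoding loop: dequantize and inverse-transform the
    coefficients to recover the block data."""
    c = dct_matrix(quantized.shape[0])
    return c.T @ (quantized * step) @ c
```

The output of `decode_block`, not the original input, is what is written into the frame memories 12 and 13, so that the encoder's reference data match what a decoder will actually reconstruct.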
FIG. 4 is a general block diagram of the adaptive prediction section 4 of FIG. 3. The data read out from the (preceding) frame memory 13 are applied, as a preceding frame signal, to an input terminal 41 and hence to one input of a subtractor 20. The output from the (succeeding) frame memory 12 is applied, as a succeeding frame signal, to an input terminal 40 and hence to one input of a subtractor 22. A coefficient multiplier 33 multiplies each data value from input terminal 41 by the aforementioned weighting value W and supplies the resultant values to one input of an adder 34, while a coefficient multiplier 35 multiplies each data value from input terminal 40 by the aforementioned weighting value (1-W) and supplies the resultant values to the other input of the adder 34. The output from the adder 34 is applied to one input of a prediction signal subtractor 21. The contents of the frame that is currently being read out from the (N-1) frame memory 3 (that frame being referred to in the following as the current frame) are applied as a current frame signal to an input terminal 42 and hence to one input of a subtractor 23. The current frame signal is also supplied to the respective other inputs of the subtractor 20, prediction signal subtractor 21 and subtractor 22. A fixed data value is applied to the other input of the subtractor 23.
The value of the DC component of the signal of the current frame is derived by a DC level detection circuit 38, and applied to one input of a subtractor 39. The current frame signal is applied to the other input of the subtractor 39, to have the DC component subtracted therefrom. This subtraction of the DC component is necessary in order to prevent excessively high output values from being produced from a squaring circuit 27, described hereinafter.
The respective outputs from the subtractors 21, 20, and 22 (these outputs being referred to in the following as the first, second, and third prediction signals), and the output from subtractor 23 (referred to in the following as a non-prediction signal) are applied to corresponding inputs of a 1-block delay circuit 43, which subjects each of these signals to a delay equal to the period of one block (i.e. corresponding to 64 pixels, in this example). The delayed outputs from the 1-block delay circuit 43 are applied to respective fixed contacts of a prediction mode selector switch 45, whose movable contact is coupled to an output terminal 46.
The first, second and third prediction signals from the subtractors 21, 20 and 22, and the DC-removed current frame signal from the subtractor 39 are also respectively applied to inputs of squaring circuits 25, 24, 26 and 27. Each of these thereby produces the square of each (prediction error) data value that is inputted thereto, and the squared error values produced from circuits 24 to 27 are respectively supplied to inputs of additive accumulator circuits 28 to 31, each of which functions to obtain the sum of the squared error values of the respective pixels of one block at a time. That is to say, when the total of the squared error values for one block has been computed by one of these accumulator circuits, the result is outputted therefrom, the contents are reset to zero, and computation of the squared error value total for the next block begins.
The output from the cumulative adder 28 is supplied directly to a first input terminal of a minimum value selector circuit 32. The output from the cumulative adder 29 is supplied via a subtractor 36, in which a predetermined fixed compensation value is subtracted therefrom, to a second input terminal of the minimum value selector circuit 32. The output from the cumulative adder 30 is supplied directly to a third input terminal of the minimum value selector circuit 32. The output from the cumulative adder 31 is supplied via an adder 37, in which a predetermined fixed compensation value is added thereto, to a fourth input terminal of the minimum value selector circuit 32.
Each time that the respective accumulated total error-squared values for one block have been derived by the cumulative adders 28 to 31 and supplied to the minimum value selector circuit 32, the minimum value selector circuit 32 judges which of these is lowest in value and produces an output data signal indicative of that value. That output data signal serves as prediction mode data, i.e. is used to determine which mode of operation will provide optimum encoding accuracy, to thereby determine which of Option 1 to Option 4 described hereinabove is applicable to the block for which judgement of the accumulated total error-squared values has been made. That prediction mode information is then applied to control the setting of the prediction mode selector switch 45, to determine which of the delayed outputs from the 1-block delay circuit 43 will be selected to be transferred to output terminal 46, and hence to the "b" terminal of the changeover switch 5 of FIG. 3.
More specifically, the setting of the prediction mode selector switch 45 is controlled by the prediction mode data output from the minimum value selector circuit 32 such that the delayed prediction error output from the prediction signal subtractor 21 is selected if that output has resulted in the smallest accumulated squared error value for the block in question (representing the case of Option 1 above being selected). This will be referred to as mode 1. Similarly, the delayed output from the subtractor 20 will be selected by the prediction mode selector switch 45 for the case of Option 2 above being selected (this being referred to in the following as mode 2), the delayed output from the subtractor 22 will be selected for the case of Option 3 above being selected (this being referred to in the following as mode 3), and the delayed output from the subtractor 23 will be selected for the case of Option 4 above being selected (this being the case in which no inter-frame prediction is executed for the block in question, and referred to in the following as mode 4).
The weighting values W and (1-W) vary for successive ones of the dependent frames in a linear manner, i.e. Option 1 represents a 2-dimensional linear prediction operation, with W being a maximum for the first dependent frame following an independent frame and reaching a minimum value for a dependent frame which immediately precedes an independent frame.
Specifically, the weighting value W is defined as:
W=(N-(mc-mp))/N
[where 0<W<1, mc denotes the number of the current frame in the sequence of frames, and mp denotes the number of the preceding independent frame of that current frame].
The value X of a data value (corresponding to one pixel) of the output signal from the adder 34, that signal being referred to in the following as a prediction signal, is obtained as:
X=W·Vmp+(1-W)·Vms
[where Vms is the corresponding value of the succeeding independent frame signal from input terminal 40 and Vmp is the corresponding value of the preceding independent frame signal from input terminal 41].
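The combination of the weighting multipliers and the prediction subtractor can be expressed directly (a small Python sketch; the function names are illustrative, not from the patent):

```python
def predict(vmp, vms, w):
    """Prediction value X = W*Vmp + (1 - W)*Vms, as formed by the
    coefficient multipliers 33 and 35 and the adder 34. Setting W = 1
    reduces this to preceding-frame-only prediction (mode 2) and W = 0
    to succeeding-frame-only prediction (mode 3)."""
    return w * vmp + (1 - w) * vms

def prediction_error(current, vmp, vms, w):
    """Error formed by the prediction signal subtractor 21: the
    current-frame value minus the weighted prediction X."""
    return current - predict(vmp, vms, w)
```

For example, with W = 0.25, a preceding-frame value of 100 and a succeeding-frame value of 200, the prediction is 175, so a current-frame value of 180 yields a prediction error of 5.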
Each value X of the prediction signal produced from the adder 34 is subtracted from a corresponding value of the current frame signal, in the prediction signal subtractor 21, and the result is supplied as a preceding/succeeding frame prediction error value to the squaring circuit 25.
Each value of the preceding frame signal is subtracted from a corresponding value of the current frame signal, in the subtractor 20, and the result is supplied as a preceding frame prediction error value to the squaring circuit 24.
Similarly, each value of the succeeding frame signal is subtracted from a corresponding value of the current frame signal, in the subtractor 22, and the result is supplied as a succeeding frame prediction error value to the squaring circuit 26.
The fixed value that is subtracted from the current frame signal by the subtractor 23 can be established in various ways, for example as being equal to 50% of the maximum white level of the video signal, when a luminance (Y) value is being processed, and equal to zero when a color difference (B-Y) or (R-Y) value is being processed. Alternatively, the DC component of a spatially adjacent block within the same frame could be utilized instead of that fixed value. Whichever type of value is utilized, inter-frame prediction is not executed for a block, in the case of mode 4 being selected, and only intra-frame processing is executed for the block.
FIG. 5 is a simple timing diagram for illustrating the basic timing relationships of this embodiment. F1 to F11 denote 11 successive frames of the input video signal, with corresponding frame intervals (specifically, intervals in which the respective frames are read out from the frame memory 1a) designated as T1 to T11. Each independent frame is designated by a # symbol, i.e. frames F1, F5 and F9. It is assumed that one out of every 4 frames is an independent frame, i.e. that periodic resetting of inter-frame prediction operation occurs with a period of 4 frames. The timings of processing operations for frames F2 to F5 will be described.
(a) In frame interval T1
The successive blocks of independent frame F1 are transferred through the switches 2 and 5, to be directly encoded, then are processed in the dequantizer 10 and inverse orthogonal transform section 11 to recover the original frame data, and then are written into the (succeeding) frame memory 12.
(b) In frame intervals T2, T3 and T4
Frames F2, F3, and F4 are successively written into the (N-1) frame memory 3.
(c) In frame interval T5
The successive blocks of independent frame F5 are transferred through the switches 2 and 5, to be directly encoded, then are processed in the dequantizer 10 and inverse orthogonal transform section 11 to recover the original frame data, and then are written into the (succeeding) frame memory 12 to replace the previous contents of that memory, after writing the contents of the (succeeding) frame memory 12 into the (preceding) frame memory 13 to replace the previous contents thereof.
(d) Frame Interval T6
During T6, frame F6 is written into the frame memory 3, at the same time frame F2 is read out from memory 3, and corresponding prediction signals for frame F2 are outputted from the adaptive prediction section 4 and inputted to delay unit 43 together with the output from subtractor 23. At the end of T6, the prediction mode output signal from the minimum value selector 32 sets switch 45 to an appropriate selection position, based on the minimum accumulated error-squared value that is inputted to the minimum value selector 32. The mode output signal is also transferred to the encoder 8 to be encoded and outputted.
(e) Frame Interval T7
Frame F7 is written into the frame memory 3, at the same time, frame F3 is read out from memory 3, and processed in the same way as for frame F2, and the prediction mode data for frame F3 is sent to the encoder 8.
The selected prediction signal for frame F3 (or the output from subtractor 23) is transferred from switch 45 to the orthogonal transform section 6, to be processed, encoded and outputted.
It can be understood that the circuit of FIG. 4 serves to execute adaptive selection, on a block-by-block basis, of the optimum mode for encoding each block of each dependent frame of the video signal. That is to say, the adaptive prediction section 4 adaptively selects one of the following modes to be used in encoding each block of a dependent frame:
(a) Mode 1, in which 2-dimensional linear inter-frame prediction is executed. This is selected when there is sufficient (linearly weighted) correlation between the block and the corresponding blocks of the preceding and succeeding independent frames. This would be selected for a block in frame 2 of FIG. 2D, for example.
(b) Mode 2, in which inter-frame prediction is executed using only the preceding independent frame. This is selected when there is insufficient correlation with the corresponding block of the succeeding independent frame. This would be selected for a block of frame 2 in FIG. 2B, for example.
(c) Mode 3, in which inter-frame prediction is executed using only the succeeding independent frame. This is selected when there is sufficient correlation with the corresponding block of the succeeding independent frame, but insufficient correlation with that of the preceding independent frame. This would be selected for a block of frame 3 or frame 4 in FIG. 2B, for example.
(d) Mode 4, in which inter-frame prediction is not executed. This is selected when there is insufficient correlation between the current block and the corresponding blocks of each of the preceding and succeeding frames. This would be selected, for example, for a block in frame 3 of FIG. 2D. It has been assumed, for simplicity of description, that this applies to all of the blocks of frame 3 of FIG. 2D, so that inter-frame prediction is not applied to any blocks of that frame.
Since the independent frame signal values that are used in deriving the prediction error values are obtained by recovering the original video signal by a decoding operation (in the dequantizer 10 and inverse orthogonal transform section 11), in the same way that decoding is executed in a corresponding decoder apparatus .[.(not shown in the drawings).]. .Iadd.shown in FIG. 6.Iaddend., the various quantization errors etc. that are present in the final decoded data will also be present in the data that are used in deriving the prediction error values. This ensures a greater accuracy of prediction than would be the case if the independent frames of the input video signal data were to be written directly into the memories 12 and then 13.
With this embodiment, evaluation for determining the prediction mode is based upon error-squared values of prediction error values that are obtained directly from the input video signal. Greater accuracy of evaluation would be obtained by using the video signal data of the dependent frames after all of the encoding processing (including transform processing, and quantization) has been executed. However this would require additional circuits for executing the inverse of such encoding, i.e. for the inverse transform processing etc., increasing the circuit scale substantially and making the apparatus more difficult to realize in practical form.
As stated above, the DC component of the current frame signal is subtracted from the current frame signal in the subtractor 39, to thereby prevent an excessively high output value being produced by the cumulative adder 31. However if not compensated for, this will tend to produce an excessively high probability that mode 4 will be selected by the minimum value selector circuit 32, i.e. the output from the cumulative adder 31 will tend to have too low a value. For that reason, a compensating offset value B is added to the output from the cumulative adder 31 in the adder 37.
On the other hand, in cases where there are only small differences between the respective values of prediction error that are being produced from the prediction signal subtractor 21, subtractor 20 and subtractor 22, it is preferable to prevent unnecessary switching between the modes 1, 2 and 3. For that reason, a slight amount of bias is given towards the selection of mode 1 (2-dimensional linear prediction) by the minimum value selector circuit 32. This is done by subtracting an offset value A from the output of the cumulative adder 29, in the subtractor 36. This has the advantage of increasing the rate of selection of mode 1, and so enabling a reduction in the amount of encoded data that are produced by encoding the prediction mode data from the minimum value selector circuit 32, if entropy encoding using for example the Huffman code is employed in the variable-length encoder section 8.
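The block-wise accumulation and biased minimum selection described in the last two paragraphs can be summarized as follows (a Python sketch; the magnitudes of the offset values A and B are illustrative assumptions, since the text states only that they are fixed values):

```python
def block_sse(errors):
    """Sum of squared error values over one 64-pixel block, as formed by
    the squaring circuits 24-27 and cumulative adders 28-31."""
    return sum(e * e for e in errors)

def select_mode(sse1, sse2, sse3, sse4, offset_a=50.0, offset_b=200.0):
    """Minimum value selector 32 with the two compensating offsets:
    offset A (subtractor 36) biases selection toward mode 1, and
    offset B (adder 37) biases it against mode 4."""
    scores = {1: sse1 - offset_a,  # 2-dimensional linear prediction
              2: sse2,             # preceding-frame prediction
              3: sse3,             # succeeding-frame prediction
              4: sse4 + offset_b}  # no inter-frame prediction
    return min(scores, key=scores.get)
```

When the four accumulated totals are nearly equal, the bias makes mode 1 win, which reduces unnecessary mode switching and (with entropy coding of the mode data) the amount of encoded mode information.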
.[.The.]. .Iadd.As shown in FIG. 6, the .Iaddend.decoding apparatus for decoding the encoded data .Iadd.on line 48 .Iaddend.that are transmitted from such an adaptive predictive encoding apparatus can be implemented very simply, by using the prediction mode data that are contained in the encoded output data. After the inverse of the variable-length encoding executed by the variable-length encoder section 8 has been performed .Iadd.in variable length decoder 50.Iaddend., followed by dequantizing .Iadd.in dequantizer 51 .Iaddend.and inverse transform processing .Iadd.in inverse orthogonal transform section 52.Iaddend., each independent frame is transferred successively to a first .Iadd.(succeeding) .Iaddend.frame memory .Iadd.54 .Iaddend.and then .Iadd.to .Iaddend.a second .Iadd.(preceding) .Iaddend.frame memory .Iadd.55 .Iaddend.for use in processing the dependent frames, corresponding to the memories 12 and 13 of FIG. .[.1.]. .Iadd.3.Iaddend., and the independent frames are outputted .Iadd.on line 70 .Iaddend.without further processing. .Iadd.The operation of switches 61 and 62 corresponds to the operation of switches 2 and 5 in the encoder of FIG. 3. During processing of the independent frames, their contacts are in the respective positions "b" shown in FIG. 6, and during processing of the dependent frames, their contacts are switched to their respective "a" positions. .Iaddend.Each block of a dependent frame is processed .Iadd.by performing the inverse of the operations performed by the adaptive prediction section 4 of FIG. 3 during the encoding process, in adaptive prediction section 53 of the decoder, shown schematically in FIG. 7, .Iaddend.depending upon the .Iadd.position of the movable contact of switch 64, which is controlled by the .Iaddend.associated decoded prediction mode data for that block, as follows:
(1) If the prediction mode data .Iadd.on line 57 .Iaddend.indicates that the block has been encoded in mode 1, then the pixel data values of the corresponding blocks of the corresponding preceding and succeeding independent frame (read out .Iadd.on lines 58 and 59 .Iaddend.from the aforementioned two frame memories .Iadd.54 and 55.Iaddend.) are respectively multiplied by the weighting values W and (1-W), the results added .Iadd.in adder 63.Iaddend., and the resultant value added .Iadd.in adder 65 .Iaddend.to the current frame signal .Iadd.supplied on line 56 to generate the output of the decoder on line 60.Iaddend..
(2) If the prediction mode data .Iadd.on line 57 .Iaddend.indicates that the block has been encoded in mode 2, then the pixel data values of the corresponding blocks of the corresponding preceding independent frame .Iadd.on line 58 .Iaddend.are added .Iadd.in adder 65 .Iaddend.to the current frame signal .Iadd.on line 56 to generate the output of the decoder on line 60.Iaddend..
(3) If the prediction mode data .Iadd.on line 57 .Iaddend.indicates that the block has been encoded in mode 3, then the pixel data values of the corresponding blocks of the corresponding succeeding independent frame .Iadd.supplied on line 59 .Iaddend.are added .Iadd.in adder 65 .Iaddend.to the current frame signal .Iadd.on line 56 to generate the output of the decoder on line 60.Iaddend..
(It will be apparent that a single circuit can be used to implement all of the functions (1), (2) and (3) above, by appropriately setting the weighting value W to either 1 or 0 for functions (2) and (3)).
(4) If the prediction mode data .Iadd.on line 57 .Iaddend.indicates that the block has been encoded in mode 4, then the fixed value .Iadd.on line 66 .Iaddend.(subtracted in the subtractor 23 of FIG. 4 of the encoder apparatus) is added .Iadd.in adder 65 .Iaddend.to the current frame signal .Iadd.on line 56 to generate the output of the decoder on line 60.Iaddend..
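The four decoder reconstruction cases (1) to (4) can be condensed into one routine (a Python sketch; the fixed value of 128, i.e. 50% of an 8-bit white level for a luminance value, is an illustrative assumption):

```python
def reconstruct(mode, decoded_value, prev_ref, next_ref, w, fixed=128.0):
    """Reconstruct one pixel of a dependent-frame block in the decoder
    (adder 65 of FIG. 7). decoded_value is the dequantized,
    inverse-transformed prediction error (or the intra-processed value
    for mode 4); prev_ref and next_ref are the corresponding pixel
    values from the preceding and succeeding reference frame memories."""
    if mode == 1:                       # 2-dimensional linear prediction
        prediction = w * prev_ref + (1 - w) * next_ref
    elif mode == 2:                     # preceding frame only (W = 1)
        prediction = prev_ref
    elif mode == 3:                     # succeeding frame only (W = 0)
        prediction = next_ref
    else:                               # mode 4: no inter-frame prediction
        prediction = fixed
    return decoded_value + prediction
```

As the text notes for cases (1) to (3), a single weighted-sum circuit suffices, since modes 2 and 3 are simply the W = 1 and W = 0 special cases of mode 1.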
It will be apparent that the decoder apparatus .Iadd.shown in FIGS. 6 and 7 .Iaddend.for receiving an encoded output signal produced by an adaptive predictive encoder apparatus according to the present invention can have a simple configuration, and can for example be implemented by slightly modifying an encoder apparatus that is described in the aforementioned related U.S. application by the assignee of the present invention.
Claims (10)
1. An adaptive encoding apparatus for encoding an input video signal, said video signal comprising a sequence of frames each comprising successive pixel data values, the apparatus comprising:
encoder means for encoding successive blocks of a frame of said video signal, each of said blocks comprising a fixed-size array of said pixel data values;
means for selecting one in every N of said frames to be transferred directly to said encoder means as a reference frame, to be encoded by intra-frame encoding, where N is a fixed integer of value greater than one; and
adaptive prediction means for executing adaptive prediction processing, as a dependent frame, of each frame occurring between a preceding one and a succeeding one of said reference frames in said frame sequence, by deriving for the data values of each block of a dependent frame respective prediction error values based upon an optimum prediction signal selected from a plurality of prediction signals derived using a plurality of combinations of said preceding and succeeding reference frames.
2. An adaptive predictive encoding apparatus according to claim 1, in which said adaptive prediction means comprises:
means for deriving a first prediction signal based on a combination of pixel data of said preceding and succeeding reference frames, a second prediction signal derived only from said preceding reference frame, a third prediction signal derived only from said succeeding reference frame, and a non-prediction signal derived only from said dependent frame; and
predictive mode selection means for selecting, for each of said blocks, one out of four prediction modes in which said first, second and third prediction signals and said non-prediction signal are respectively used in deriving predictive error values for respective pixel data of said block, to be sent to said encoder means and encoded thereby, said selection being based upon judgement of said errors, said predictive mode selection means further supplying to said encoding means, to be encoded thereby, predictive mode data indicating predictive modes which have been selected for respective ones of the blocks.
3. An adaptive predictive encoding apparatus according to claim 2, in which said adaptive prediction means further comprises means for varying, in accordance with respective time axis positions of said frames in said video signal, respective weighting values assigned to said preceding and succeeding reference frames for establishing said combination.
4. An adaptive predictive encoding apparatus according to claim 1, and further comprising decoding means for decoding said reference frames after encoding by said encoding means, and for supplying resultant decoded reference frames to said adaptive prediction means for use in producing said prediction signals.
5. An adaptive predictive encoding apparatus according to claim 1, and further comprising an (N-1) frame memory for temporarily storing each dependent frame of said video signal and outputting said each dependent frame to said adaptive prediction means after a fixed delay time, and first and second 1-frame memories for respectively holding pixel data of said preceding and succeeding reference frames and supplying pixel data of said preceding and succeeding reference frames to said adaptive prediction means during adaptive prediction processing of successive ones of said dependent frames. .Iadd.
6. An adaptive decoding apparatus for decoding a video signal encoded by the apparatus of claim 1, comprising:
decoding means for receiving and decoding the encoded reference frames, the prediction error values and a prediction mode signal that identifies which of said plurality of combinations of said preceding and succeeding reference frames were used during encoding to obtain said optimum prediction signal;
prediction signal generating means responsive to the decoded prediction mode signal for reconstructing said optimum prediction signal;
means for combining said prediction error values and said optimum prediction signal for each dependent frame to generate display information corresponding thereto; and
means for outputting the decoded reference frames and the display information for each dependent frame in proper sequence to produce a video signal. .Iaddend.

.Iadd.7. A decoding system for decoding video signals that have been encoded in an encoder by arranging said video signals into spaced-apart reference frames and dependent frames located therebetween; said reference frames being output from said encoder in encoded form and used therein to implement one of a plurality of prediction modes for adaptively predicting the display information in each of said dependent frames based upon the degree of correlation between each dependent frame and the reference frames which immediately precede and follow said dependent frame; said encoder thereby generating and outputting therefrom an encoded frame signal for each dependent frame and a prediction mode signal for identifying the prediction mode used to generate said frame signal, said decoding system comprising:
decoding means for receiving and decoding each of the encoded reference frames, and the frame signals and the prediction mode signals associated with each dependent frame;
processing means responsive to said prediction mode signal for reconstructing the display information for each dependent frame from its respective decoded frame signal and the reference frames which precede and follow said dependent frame; and
means for outputting the decoded reference frames and the reconstructed display information generated for each of said dependent frames in proper sequence to produce a video signal. .Iaddend.

.Iadd.8. The decoding system in accordance with claim 7, wherein said processing means includes
memory means for storing the two decoded reference frames which respectively precede and follow each dependent frame; and
means responsive to the prediction mode signal for generating and combining weighted values of said display information from said two reference frames to reconstruct the predicted display information for said dependent frame.
.Iaddend. .Iadd.9. The decoding system in accordance with claim 8, including means for combining said predicted display information with said decoded frame signal to produce an output representing the display information for said dependent frame. .Iaddend.

.Iadd.10. The decoding system in accordance with claim 8, wherein said weighted values are generated by multiplying said display information in the preceding reference frame by a first weighting coefficient and said display information in the following reference frame by a second weighting coefficient. .Iaddend.

.Iadd.11. The decoding system in accordance with claim 10, wherein, in response to a first prediction mode signal, the first and second weighting coefficients are non-zero. .Iaddend.

.Iadd.12. The decoding system in accordance with claim 11, wherein the weighting coefficients are selected such that the reference frame temporally closer to the dependent frame is given a greater weight than the other reference frame. .Iaddend.

.Iadd.13. The decoding system in accordance with claim 10, wherein, in response to a second prediction mode signal, the second weighting coefficient is effectively zero. .Iaddend.

.Iadd.14. The decoding system in accordance with claim 10, wherein, in response to a third prediction mode signal, the first weighting coefficient is effectively zero. .Iaddend.

.Iadd.15. The decoding system in accordance with claim 7, wherein, in response to a fourth prediction mode signal, the decoded frame signal is output to represent the display information of said dependent frame. .Iaddend.

.Iadd.16. A method for decoding and generating a video signal from video signals that have been encoded in an encoder by arranging said video signals into spaced-apart reference frames and dependent frames located therebetween, said reference frames being output from said encoder in encoded form and used therein to implement one of a plurality of prediction modes for adaptively predicting the display information in each of said dependent frames based upon the degree of correlation between each of said dependent frames and the reference frames which immediately precede and follow each of said dependent frames, said encoder generating and outputting therefrom an encoded frame signal for each of said dependent frames and a prediction mode signal for identifying the prediction mode used to generate said encoded frame signal, said method comprising the steps of:

(a) receiving and decoding each of said encoded reference frames, said encoded frame signals and said prediction mode signal associated with each dependent frame;

(b) reconstructing the display information for each dependent frame from the corresponding decoded frame signal and the decoded reference frames which precede and follow said dependent frame in accordance with said associated prediction mode signal; and

(c) outputting the decoded reference frames and the reconstructed display information generated for each of said dependent frames in proper sequence to generate a video signal. .Iaddend.

.Iadd.17. The method of claim 16 wherein step (b) includes the steps of storing the display information of the two decoded reference frames which respectively precede and follow said dependent frame and combining weighted values of said display information with said decoded frame signal to reconstruct the display information for said dependent frame. .Iaddend.

.Iadd.18. The method in accordance with claim 17, wherein said weighted values are generated by multiplying said display information of the preceding reference frame by a first weighting coefficient and said display information in the following reference frame by a second weighting coefficient. .Iaddend.

.Iadd.19. The method in accordance with claim 18, wherein, in response to a first prediction mode signal, the first and second weighting coefficients are non-zero. .Iaddend.

.Iadd.20. The method in accordance with claim 19, wherein the weighting coefficients are selected such that the reference frame temporally closer to the dependent frame is given a greater weight than the other reference frame. .Iaddend.

.Iadd.21. The method in accordance with claim 18, wherein, in response to a second prediction mode signal, the second weighting coefficient is effectively zero. .Iaddend.

.Iadd.22. The method in accordance with claim 18, wherein, in response to a third prediction mode signal, the first weighting coefficient is effectively zero. .Iaddend.

.Iadd.23. The method in accordance with claim 16, wherein, in response to a fourth prediction mode signal, the decoded frame signal is added to a fixed data value to reconstruct the display information of said dependent frame. .Iaddend.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US07/997,238 USRE35158E (en) | 1989-04-27 | 1992-12-28 | Apparatus for adaptive inter-frame predictive encoding of video signal |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP1-108419 | 1989-04-27 | ||
JP10841989A JPH07109990B2 (en) | 1989-04-27 | 1989-04-27 | Adaptive interframe predictive coding method and decoding method |
US07/514,015 US4982285A (en) | 1989-04-27 | 1990-04-26 | Apparatus for adaptive inter-frame predictive encoding of video signal |
US07/997,238 USRE35158E (en) | 1989-04-27 | 1992-12-28 | Apparatus for adaptive inter-frame predictive encoding of video signal |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US07/465,747 Continuation-In-Part US4985768A (en) | 1989-01-20 | 1990-01-18 | Inter-frame predictive encoding system with encoded and transmitted prediction error |
US07/514,015 Reissue US4982285A (en) | 1989-04-27 | 1990-04-26 | Apparatus for adaptive inter-frame predictive encoding of video signal |
Publications (1)
Publication Number | Publication Date |
---|---|
USRE35158E true USRE35158E (en) | 1996-02-20 |
Family
ID=14484288
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US07/514,015 Ceased US4982285A (en) | 1989-04-27 | 1990-04-26 | Apparatus for adaptive inter-frame predictive encoding of video signal |
US07/997,238 Expired - Lifetime USRE35158E (en) | 1989-04-27 | 1992-12-28 | Apparatus for adaptive inter-frame predictive encoding of video signal |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US07/514,015 Ceased US4982285A (en) | 1989-04-27 | 1990-04-26 | Apparatus for adaptive inter-frame predictive encoding of video signal |
Country Status (5)
Country | Link |
---|---|
US (2) | US4982285A (en) |
EP (2) | EP0395440B1 (en) |
JP (1) | JPH07109990B2 (en) |
DE (2) | DE69031045T2 (en) |
HK (2) | HK1000484A1 (en) |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6317459B1 (en) * | 1997-03-14 | 2001-11-13 | Microsoft Corporation | Digital video signal encoder and encoding method |
US20010053183A1 (en) * | 1998-04-02 | 2001-12-20 | Mcveigh Jeffrey S. | Method and apparatus for simplifying field prediction motion estimation |
US6351545B1 (en) | 1999-12-14 | 2002-02-26 | Dynapel Systems, Inc. | Motion picture enhancing system |
US6408029B1 (en) | 1998-04-02 | 2002-06-18 | Intel Corporation | Method and apparatus for simplifying real-time data encoding |
US6449352B1 (en) * | 1995-06-20 | 2002-09-10 | Matsushita Electric Industrial Co., Ltd. | Packet generating method, data multiplexing method using the same, and apparatus for coding and decoding of the transmission data |
US20030113026A1 (en) * | 2001-12-17 | 2003-06-19 | Microsoft Corporation | Skip macroblock coding |
US6584226B1 (en) | 1997-03-14 | 2003-06-24 | Microsoft Corporation | Method and apparatus for implementing motion estimation in video compression |
US6647425B1 (en) | 1997-07-03 | 2003-11-11 | Microsoft Corporation | System and method for selecting the transmission bandwidth of a data stream sent to a client based on personal attributes of the client's user |
US20040008899A1 (en) * | 2002-07-05 | 2004-01-15 | Alexandros Tourapis | Optimization techniques for data compression |
US20040126030A1 (en) * | 1998-11-30 | 2004-07-01 | Microsoft Corporation | Coded block pattern decoding with spatial prediction |
US20050013497A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Intraframe and interframe interlace coding and decoding |
US20050013498A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Coding of motion vector information |
US20050013365A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Advanced bi-directional predictive coding of video frames |
US20050013372A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Extended range motion vectors |
US20050041738A1 (en) * | 2003-07-18 | 2005-02-24 | Microsoft Corporation | DC coefficient signaling at small quantization step sizes |
US20050053298A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Four motion vector coding and decoding in bi-directionally predicted interlaced pictures |
US20050053294A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Chroma motion vector derivation |
US20050053143A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Motion vector block pattern coding and decoding |
US20050053137A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Predicting motion vectors for fields of forward-predicted interlaced video frames |
US20050053140A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Signaling macroblock mode information for macroblocks of interlaced forward-predicted fields |
US20050053296A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Bitplane coding for macroblock field/frame coding type information |
US20050053295A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Chroma motion vector derivation for interlaced forward-predicted fields |
US20050198364A1 (en) * | 1997-03-17 | 2005-09-08 | Microsoft Corporation | Methods and apparatus for communication media commands and data using the HTTP protocol |
US20060072662A1 (en) * | 2002-01-25 | 2006-04-06 | Microsoft Corporation | Improved Video Coding |
US7046734B2 (en) | 1998-04-02 | 2006-05-16 | Intel Corporation | Method and apparatus for performing real-time data encoding |
US20060159166A1 (en) * | 2005-01-14 | 2006-07-20 | Nader Mohsenian | Use of out of order encoding to improve video quality |
US20060280253A1 (en) * | 2002-07-19 | 2006-12-14 | Microsoft Corporation | Timestamp-Independent Motion Vector Prediction for Predictive (P) and Bidirectionally Predictive (B) Pictures |
US20070014358A1 (en) * | 2002-06-03 | 2007-01-18 | Microsoft Corporation | Spatiotemporal prediction for bidirectionally predictive(B) pictures and motion vector prediction for multi-picture reference motion compensation |
US20070116117A1 (en) * | 2005-11-18 | 2007-05-24 | Apple Computer, Inc. | Controlling buffer states in video compression coding to enable editing and distributed encoding |
US20070116115A1 (en) * | 2005-11-18 | 2007-05-24 | Xin Tong | Video bit rate control method |
US20070116437A1 (en) * | 2005-11-18 | 2007-05-24 | Apple Computer, Inc. | Region-based processing of predicted pixels |
US7224731B2 (en) | 2002-06-28 | 2007-05-29 | Microsoft Corporation | Motion estimation/compensation for screen capture video |
US7408990B2 (en) | 1998-11-30 | 2008-08-05 | Microsoft Corporation | Efficient motion vector coding for video compression |
US20090003446A1 (en) * | 2007-06-30 | 2009-01-01 | Microsoft Corporation | Computing collocated macroblock information for direct mode macroblocks |
WO2009073762A1 (en) * | 2007-12-07 | 2009-06-11 | The Hong Kong University Of Science And Technology | Intra frame encoding using programmable graphics hardware |
US7577200B2 (en) | 2003-09-07 | 2009-08-18 | Microsoft Corporation | Extended range variable length coding/decoding of differential motion vector information |
US7616692B2 (en) | 2003-09-07 | 2009-11-10 | Microsoft Corporation | Hybrid motion vector prediction for interlaced forward-predicted fields |
US7620106B2 (en) | 2003-09-07 | 2009-11-17 | Microsoft Corporation | Joint coding and decoding of a reference field selection and differential motion vector information |
US7623574B2 (en) | 2003-09-07 | 2009-11-24 | Microsoft Corporation | Selecting between dominant and non-dominant motion vector predictor polarities |
US20090300203A1 (en) * | 2008-05-30 | 2009-12-03 | Microsoft Corporation | Stream selection for enhanced media streaming |
US8031777B2 (en) | 2005-11-18 | 2011-10-04 | Apple Inc. | Multipass video encoding and rate control using subsampling of frames |
US8189666B2 (en) | 2009-02-02 | 2012-05-29 | Microsoft Corporation | Local picture identifier and computation of co-located information |
US8780997B2 (en) | 2005-11-18 | 2014-07-15 | Apple Inc. | Regulation of decode-side processing based on perceptual masking |
US9077960B2 (en) | 2005-08-12 | 2015-07-07 | Microsoft Corporation | Non-zero coefficient block pattern coding |
US10554985B2 (en) | 2003-07-18 | 2020-02-04 | Microsoft Technology Licensing, Llc | DC coefficient signaling at small quantization step sizes |
Families Citing this family (82)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5086439A (en) * | 1989-04-18 | 1992-02-04 | Mitsubishi Denki Kabushiki Kaisha | Encoding/decoding system utilizing local properties |
JPH07109990B2 (en) | 1989-04-27 | 1995-11-22 | 日本ビクター株式会社 | Adaptive interframe predictive coding method and decoding method |
USRE35910E (en) * | 1989-05-11 | 1998-09-29 | Matsushita Electric Industrial Co., Ltd. | Moving image signal encoding apparatus and decoding apparatus |
DE69025364T2 (en) * | 1989-08-05 | 1996-09-05 | Matsushita Electric Ind Co Ltd | Image coding method |
JP3159309B2 (en) * | 1989-09-27 | 2001-04-23 | ソニー株式会社 | Video signal encoding method and video signal encoding device |
JPH07112284B2 (en) * | 1990-01-20 | 1995-11-29 | 日本ビクター株式会社 | Predictive encoding device and decoding device |
JPH03256485A (en) * | 1990-03-06 | 1991-11-15 | Victor Co Of Japan Ltd | Motion vector detecting circuit |
JP2885322B2 (en) * | 1990-03-27 | 1999-04-19 | 日本ビクター株式会社 | Inter-field prediction encoding device and decoding device |
JP2969782B2 (en) * | 1990-05-09 | 1999-11-02 | ソニー株式会社 | Encoded data editing method and encoded data editing device |
DE69131808T2 (en) | 1990-07-31 | 2000-03-16 | Fujitsu Ltd | Process and device for image data processing |
US5875266A (en) * | 1990-07-31 | 1999-02-23 | Fujitsu Limited | Image data processing a method and apparatus |
US5134480A (en) * | 1990-08-31 | 1992-07-28 | The Trustees Of Columbia University In The City Of New York | Time-recursive deinterlace processing for television-type signals |
JP2514111B2 (en) * | 1990-12-28 | 1996-07-10 | 日本ビクター株式会社 | Interframe coded output data amount control method and image coded output data amount control method |
JP2855861B2 (en) * | 1991-01-16 | 1999-02-10 | 日本ビクター株式会社 | Inter-frame / inter-field predictive encoding apparatus and method |
JPH04322593A (en) * | 1991-04-22 | 1992-11-12 | Victor Co Of Japan Ltd | Picture coder and its decoder |
AU657510B2 (en) * | 1991-05-24 | 1995-03-16 | Apple Inc. | Improved image encoding/decoding method and apparatus |
CA2068751C (en) * | 1991-05-24 | 1998-05-19 | Tokumichi Murakami | Image coding system |
JP3002019B2 (en) * | 1991-07-04 | 2000-01-24 | 富士通株式会社 | Image coding transmission system with cell discard compensation function |
EP0522835B1 (en) * | 1991-07-12 | 1997-09-17 | Sony Corporation | Decoding apparatus for image signal |
JP2699703B2 (en) * | 1991-07-31 | 1998-01-19 | 松下電器産業株式会社 | Motion compensation prediction method and image signal encoding method using the same |
JP3115651B2 (en) * | 1991-08-29 | 2000-12-11 | シャープ株式会社 | Image coding device |
US5487086A (en) * | 1991-09-13 | 1996-01-23 | Comsat Corporation | Transform vector quantization for adaptive predictive coding |
US5475501A (en) * | 1991-09-30 | 1995-12-12 | Sony Corporation | Picture encoding and/or decoding method and apparatus |
JPH0595540A (en) * | 1991-09-30 | 1993-04-16 | Sony Corp | Dynamic picture encoder |
JPH0591328A (en) * | 1991-09-30 | 1993-04-09 | Ricoh Co Ltd | Picture processor |
JP2991833B2 (en) * | 1991-10-11 | 1999-12-20 | 松下電器産業株式会社 | Interlace scanning digital video signal encoding apparatus and method |
JP2830883B2 (en) * | 1991-10-31 | 1998-12-02 | 日本ビクター株式会社 | Video encoding device and decoding device therefor |
US5257324A (en) * | 1991-11-01 | 1993-10-26 | The United States Of America As Represented By The Secretary Of The Navy | Zero-time-delay video processor circuit |
JPH05130593A (en) * | 1991-11-05 | 1993-05-25 | Matsushita Electric Ind Co Ltd | Encoding device |
USRE39276E1 (en) | 1991-11-08 | 2006-09-12 | Matsushita Electric Industrial Co., Ltd. | Method for determining motion compensation |
JP2962012B2 (en) * | 1991-11-08 | 1999-10-12 | 日本ビクター株式会社 | Video encoding device and decoding device therefor |
US5369449A (en) * | 1991-11-08 | 1994-11-29 | Matsushita Electric Industrial Co., Ltd. | Method for predicting move compensation |
USRE39279E1 (en) | 1991-11-08 | 2006-09-12 | Matsushita Electric Industrial Co., Ltd. | Method for determining motion compensation |
DE69333896T2 (en) * | 1992-01-29 | 2006-07-27 | Mitsubishi Denki K.K. | Apparatus and method for recording / reproducing video information |
US6870884B1 (en) * | 1992-01-29 | 2005-03-22 | Mitsubishi Denki Kabushiki Kaisha | High-efficiency encoder and video information recording/reproducing apparatus |
JPH05236466A (en) * | 1992-02-25 | 1993-09-10 | Nec Corp | Device and method for inter-frame predictive image encoding for motion compensation |
FR2688369B1 (en) * | 1992-03-03 | 1996-02-09 | Thomson Csf | VERY LOW-RATE IMAGE CODING METHOD AND CODING-DECODING DEVICE USING THE SAME. |
US5461423A (en) * | 1992-05-29 | 1995-10-24 | Sony Corporation | Apparatus for generating a motion vector with half-pixel precision for use in compressing a digital motion picture signal |
DE4220750A1 (en) * | 1992-06-29 | 1994-01-05 | Daimler Benz Ag | Interpolative, predictive image data compression system - provides movement compensation for each image block using estimation block obtained by various different prediction methods |
US5298992A (en) * | 1992-10-08 | 1994-03-29 | International Business Machines Corporation | System and method for frame-differencing based video compression/decompression with forward and reverse playback capability |
US5353061A (en) * | 1992-10-08 | 1994-10-04 | International Business Machines Corporation | System and method for frame-differencing video compression/decompression using perceptually-constant information and image analysis |
JP3133517B2 (en) * | 1992-10-15 | 2001-02-13 | シャープ株式会社 | Image region detecting device, image encoding device using the image detecting device |
US5473366A (en) * | 1992-11-17 | 1995-12-05 | Canon Kabushiki Kaisha | Television-telephone apparatus having a message-keeping function and an automatic response transmission function |
JP3358835B2 (en) * | 1992-12-14 | 2002-12-24 | ソニー株式会社 | Image coding method and apparatus |
US5400075A (en) * | 1993-01-13 | 1995-03-21 | Thomson Consumer Electronics, Inc. | Adaptive variable length encoder/decoder |
JP3275423B2 (en) * | 1993-03-04 | 2002-04-15 | キヤノン株式会社 | Recording device |
US5915040A (en) * | 1993-03-29 | 1999-06-22 | Canon Kabushiki Kaisha | Image processing apparatus |
JP2947389B2 (en) * | 1993-07-12 | 1999-09-13 | 日本ビクター株式会社 | Image processing memory integrated circuit |
WO1995004432A1 (en) * | 1993-07-30 | 1995-02-09 | British Telecommunications Plc | Coding image data |
GB9315775D0 (en) * | 1993-07-30 | 1993-09-15 | British Telecomm | Processing image data |
US5559722A (en) * | 1993-11-24 | 1996-09-24 | Intel Corporation | Process, apparatus and system for transforming signals using pseudo-SIMD processing |
US5592226A (en) * | 1994-01-26 | 1997-01-07 | Btg Usa Inc. | Method and apparatus for video data compression using temporally adaptive motion interpolation |
US5706386A (en) * | 1994-05-24 | 1998-01-06 | Sony Corporation | Image information recording method and apparatus, image information reproducing method and apparatus and editing method and system |
NO942080D0 (en) * | 1994-06-03 | 1994-06-03 | Int Digital Tech Inc | Picture Codes |
JPH08223577A (en) * | 1994-12-12 | 1996-08-30 | Sony Corp | Moving image coding method and device therefor and moving image decoding method and device therefor |
DE69535007T2 (en) * | 1994-12-20 | 2006-12-21 | Matsushita Electric Industrial Co., Ltd., Kadoma | Method and device for object-based predictive coding and transmission of digital images and decoding device |
GB2301970B (en) * | 1995-06-06 | 2000-03-01 | Sony Uk Ltd | Motion compensated video processing |
JP3788823B2 (en) | 1995-10-27 | 2006-06-21 | 株式会社東芝 | Moving picture encoding apparatus and moving picture decoding apparatus |
US5768537A (en) * | 1996-02-22 | 1998-06-16 | International Business Machines Corporation | Scalable MPEG2 compliant video encoder |
GB9607645D0 (en) * | 1996-04-12 | 1996-06-12 | Snell & Wilcox Ltd | Processing of video signals prior to compression |
JP3676525B2 (en) * | 1996-10-30 | 2005-07-27 | 日本ビクター株式会社 | Moving picture coding / decoding apparatus and method |
KR100335608B1 (en) * | 1996-11-09 | 2002-10-12 | 삼성전자 주식회사 | Compression and/or restoration method of shape information and encoder and/or decoder using the same |
US6792154B1 (en) | 1999-10-07 | 2004-09-14 | World Multicast.com, Inc | Video compression system and method using time |
DE19961090B4 (en) * | 1999-12-17 | 2005-02-17 | Rohde & Schwarz Gmbh & Co. Kg | Method for increasing the signal-to-noise ratio of an unknown useful signal in the case of a radio-frequency signal |
US7266150B2 (en) | 2001-07-11 | 2007-09-04 | Dolby Laboratories, Inc. | Interpolation of video compression frames |
US8111754B1 (en) | 2001-07-11 | 2012-02-07 | Dolby Laboratories Licensing Corporation | Interpolation of video compression frames |
KR100508798B1 (en) | 2002-04-09 | 2005-08-19 | 엘지전자 주식회사 | Method for predicting bi-predictive block |
US7227998B2 (en) * | 2002-06-11 | 2007-06-05 | Canon Kabushiki Kaisha | Image processing apparatus, control method of the same, computer program, and computer-readable storage medium |
US7020200B2 (en) * | 2002-08-13 | 2006-03-28 | Lsi Logic Corporation | System and method for direct motion vector prediction in bi-predictive video frames and fields |
US7813429B2 (en) * | 2002-08-13 | 2010-10-12 | Lsi Corporation | System and method for segmentation of macroblocks |
US7542297B2 (en) | 2004-09-03 | 2009-06-02 | Entorian Technologies, Lp | Optimized mounting area circuit module system and method |
US8234577B1 (en) * | 2005-05-23 | 2012-07-31 | Glance Networks, Inc. | Method and apparatus for the transmission of changed host display information |
CN101815224A (en) * | 2005-07-22 | 2010-08-25 | 三菱电机株式会社 | Picture coding device and method and picture decoding apparatus and method |
US8488889B2 (en) | 2005-07-22 | 2013-07-16 | Mitsubishi Electric Corporation | Image encoder and image decoder, image encoding method and image decoding method, image encoding program and image decoding program, and computer readable recording medium recorded with image encoding program and computer readable recording medium recorded with image decoding program |
US8509551B2 (en) | 2005-07-22 | 2013-08-13 | Mitsubishi Electric Corporation | Image encoder and image decoder, image encoding method and image decoding method, image encoding program and image decoding program, and computer readable recording medium recording with image encoding program and computer readable recording medium recorded with image decoding program |
KR100846512B1 (en) | 2006-12-28 | 2008-07-17 | 삼성전자주식회사 | Method and apparatus for video encoding and decoding |
US7813922B2 (en) * | 2007-01-30 | 2010-10-12 | Nokia Corporation | Audio quantization |
US20090175547A1 (en) * | 2008-01-08 | 2009-07-09 | Kabushiki Kaisha Toshiba | Image processing apparatus and image processing method |
WO2010017166A2 (en) | 2008-08-04 | 2010-02-11 | Dolby Laboratories Licensing Corporation | Overlapped block disparity estimation and compensation architecture |
CN107077856B (en) | 2014-08-28 | 2020-07-14 | 诺基亚技术有限公司 | Audio parameter quantization |
CN105898343B (en) * | 2016-04-07 | 2019-03-12 | 广州盈可视电子科技有限公司 | A kind of net cast, terminal net cast method and apparatus |
CN106210745A (en) * | 2016-08-31 | 2016-12-07 | 成都市和平科技有限责任公司 | A kind of intelligent jpeg image coding/decoding system and method |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4383272A (en) * | 1981-04-13 | 1983-05-10 | Bell Telephone Laboratories, Incorporated | Video signal interpolation using motion estimation |
US4651207A (en) * | 1984-03-05 | 1987-03-17 | Ant Nachrichtentechnik Gmbh | Motion adaptive interpolation of television image sequences |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS54114920A (en) * | 1978-02-28 | 1979-09-07 | Kokusai Denshin Denwa Co Ltd | Television signal adaptive forecasting encoding system |
US4575756A (en) * | 1983-07-26 | 1986-03-11 | Nec Corporation | Decoder for a frame or field skipped TV signal with a representative movement vector used for individual vectors |
JPS62213494A (en) * | 1986-03-14 | 1987-09-19 | Kokusai Denshin Denwa Co Ltd <Kdd> | Motion compensation system for animation picture signal |
JPS62214792A (en) * | 1986-03-14 | 1987-09-21 | Fujitsu Ltd | Difference encoder |
JPH082106B2 (en) * | 1986-11-10 | 1996-01-10 | 国際電信電話株式会社 | Hybrid coding method for moving image signals |
JP2712298B2 (en) * | 1988-05-28 | 1998-02-10 | ソニー株式会社 | High-efficiency code decoding device |
JPH07109990B2 (en) | 1989-04-27 | 1995-11-22 | 日本ビクター株式会社 | Adaptive interframe predictive coding method and decoding method |
- 1989
- 1989-04-27 JP JP10841989A patent/JPH07109990B2/en not_active Expired - Lifetime
- 1990
- 1990-04-26 US US07/514,015 patent/US4982285A/en not_active Ceased
- 1990-04-27 EP EP19900304637 patent/EP0395440B1/en not_active Expired - Lifetime
- 1990-04-27 DE DE69031045T patent/DE69031045T2/en not_active Expired - Lifetime
- 1990-04-27 EP EP19930116909 patent/EP0584840B1/en not_active Expired - Lifetime
- 1990-04-27 DE DE69012405T patent/DE69012405T2/en not_active Expired - Lifetime
- 1992
- 1992-12-28 US US07/997,238 patent/USRE35158E/en not_active Expired - Lifetime
- 1997
- 1997-11-01 HK HK97102071A patent/HK1000484A1/en not_active IP Right Cessation
- 1997-11-08 HK HK97102140A patent/HK1000538A1/en not_active IP Right Cessation
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4383272A (en) * | 1981-04-13 | 1983-05-10 | Bell Telephone Laboratories, Incorporated | Video signal interpolation using motion estimation |
US4651207A (en) * | 1984-03-05 | 1987-03-17 | Ant Nachrichtentechnik Gmbh | Motion adaptive interpolation of television image sequences |
Non-Patent Citations (4)
Title |
---|
"15/30 Mb/s Motion-Compensated Interframe, Interfield and Intrafield Adaptive Prediction Coding" (Oct. '85); Bulletin of the Society of Television Engineers (Japan); vol. 39, No. 10. |
"Adaptive Hybrid Transform/Predictive Image Coding" (Mar. '87); Document D-1115 of the 70th Anniversary National Convention of the Society of Information and Communication Engineers (Japan). |
Cited By (140)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6449352B1 (en) * | 1995-06-20 | 2002-09-10 | Matsushita Electric Industrial Co., Ltd. | Packet generating method, data multiplexing method using the same, and apparatus for coding and decoding of the transmission data |
US7139313B2 (en) | 1997-03-14 | 2006-11-21 | Microsoft Corporation | Digital video signal encoder and encoding method |
US7154951B2 (en) | 1997-03-14 | 2006-12-26 | Microsoft Corporation | Motion video signal encoder and encoding method |
US20050220188A1 (en) * | 1997-03-14 | 2005-10-06 | Microsoft Corporation | Digital video signal encoder and encoding method |
US6584226B1 (en) | 1997-03-14 | 2003-06-24 | Microsoft Corporation | Method and apparatus for implementing motion estimation in video compression |
US6317459B1 (en) * | 1997-03-14 | 2001-11-13 | Microsoft Corporation | Digital video signal encoder and encoding method |
US6707852B1 (en) * | 1997-03-14 | 2004-03-16 | Microsoft Corporation | Digital video signal encoder and encoding method |
US7072396B2 (en) | 1997-03-14 | 2006-07-04 | Microsoft Corporation | Motion video signal encoder and encoding method |
US6937657B2 (en) | 1997-03-14 | 2005-08-30 | Microsoft Corporation | Motion video signal encoder and encoding method |
US20050198364A1 (en) * | 1997-03-17 | 2005-09-08 | Microsoft Corporation | Methods and apparatus for communication media commands and data using the HTTP protocol |
US20060041917A1 (en) * | 1997-03-17 | 2006-02-23 | Microsoft Corporation | Techniques for automatically detecting protocols in a computer network |
US7761585B2 (en) | 1997-03-17 | 2010-07-20 | Microsoft Corporation | Techniques for automatically detecting protocols in a computer network |
US7664871B2 (en) | 1997-03-17 | 2010-02-16 | Microsoft Corporation | Methods and apparatus for communication media commands and data using the HTTP protocol |
US6647425B1 (en) | 1997-07-03 | 2003-11-11 | Microsoft Corporation | System and method for selecting the transmission bandwidth of a data stream sent to a client based on personal attributes of the client's user |
US6574278B1 (en) | 1998-04-02 | 2003-06-03 | Intel Corporation | Method and apparatus for performing real-time data encoding |
US20010053183A1 (en) * | 1998-04-02 | 2001-12-20 | Mcveigh Jeffrey S. | Method and apparatus for simplifying field prediction motion estimation |
US6408029B1 (en) | 1998-04-02 | 2002-06-18 | Intel Corporation | Method and apparatus for simplifying real-time data encoding |
US7046734B2 (en) | 1998-04-02 | 2006-05-16 | Intel Corporation | Method and apparatus for performing real-time data encoding |
US7215384B2 (en) | 1998-04-02 | 2007-05-08 | Intel Corporation | Method and apparatus for simplifying field prediction motion estimation |
US7263127B1 (en) | 1998-04-02 | 2007-08-28 | Intel Corporation | Method and apparatus for simplifying frame-based motion estimation |
US7231091B2 (en) | 1998-09-21 | 2007-06-12 | Intel Corporation | Simplified predictive video encoder |
US20050265615A1 (en) * | 1998-09-21 | 2005-12-01 | Michael Keith | Simplified predictive video encoder |
US8582903B2 (en) | 1998-11-30 | 2013-11-12 | Microsoft Corporation | Efficient macroblock header coding for video compression |
US20060110059A1 (en) * | 1998-11-30 | 2006-05-25 | Microsoft Corporation | Efficient macroblock header coding for video compression |
US7054494B2 (en) | 1998-11-30 | 2006-05-30 | Microsoft Corporation | Coded block pattern decoding with spatial prediction |
US7127114B2 (en) | 1998-11-30 | 2006-10-24 | Microsoft Corporation | Coded block pattern encoding with spatial prediction |
US20040126030A1 (en) * | 1998-11-30 | 2004-07-01 | Microsoft Corporation | Coded block pattern decoding with spatial prediction |
US7289673B2 (en) | 1998-11-30 | 2007-10-30 | Microsoft Corporation | Decoding macroblock type and coded block pattern information |
US7408990B2 (en) | 1998-11-30 | 2008-08-05 | Microsoft Corporation | Efficient motion vector coding for video compression |
US8290288B2 (en) | 1998-11-30 | 2012-10-16 | Microsoft Corporation | Encoding macroblock type and coded block pattern information |
US6904174B1 (en) | 1998-12-11 | 2005-06-07 | Intel Corporation | Simplified predictive video encoder |
US6351545B1 (en) | 1999-12-14 | 2002-02-26 | Dynapel Systems, Inc. | Motion picture enhancing system |
US20030113026A1 (en) * | 2001-12-17 | 2003-06-19 | Microsoft Corporation | Skip macroblock coding |
US8428374B2 (en) | 2001-12-17 | 2013-04-23 | Microsoft Corporation | Skip macroblock coding |
US10368065B2 (en) | 2001-12-17 | 2019-07-30 | Microsoft Technology Licensing, Llc | Skip macroblock coding |
US7379607B2 (en) | 2001-12-17 | 2008-05-27 | Microsoft Corporation | Skip macroblock coding |
US7555167B2 (en) | 2001-12-17 | 2009-06-30 | Microsoft Corporation | Skip macroblock coding |
US9538189B2 (en) | 2001-12-17 | 2017-01-03 | Microsoft Technology Licensing, Llc | Skip macroblock coding |
US20090262835A1 (en) * | 2001-12-17 | 2009-10-22 | Microsoft Corporation | Skip macroblock coding |
US9774852B2 (en) | 2001-12-17 | 2017-09-26 | Microsoft Technology Licensing, Llc | Skip macroblock coding |
US8781240B2 (en) | 2001-12-17 | 2014-07-15 | Microsoft Corporation | Skip macroblock coding |
US9088785B2 (en) | 2001-12-17 | 2015-07-21 | Microsoft Technology Licensing, Llc | Skip macroblock coding |
US20070110326A1 (en) * | 2001-12-17 | 2007-05-17 | Microsoft Corporation | Skip macroblock coding |
US7200275B2 (en) | 2001-12-17 | 2007-04-03 | Microsoft Corporation | Skip macroblock coding |
US20060072662A1 (en) * | 2002-01-25 | 2006-04-06 | Microsoft Corporation | Improved Video Coding |
US8406300B2 (en) | 2002-01-25 | 2013-03-26 | Microsoft Corporation | Video coding |
US8638853B2 (en) | 2002-01-25 | 2014-01-28 | Microsoft Corporation | Video coding |
US7646810B2 (en) | 2002-01-25 | 2010-01-12 | Microsoft Corporation | Video coding |
US9888237B2 (en) | 2002-01-25 | 2018-02-06 | Microsoft Technology Licensing, Llc | Video coding |
US10284843B2 (en) | 2002-01-25 | 2019-05-07 | Microsoft Technology Licensing, Llc | Video coding |
US20100135390A1 (en) * | 2002-01-25 | 2010-06-03 | Microsoft Corporation | Video coding |
US20090245373A1 (en) * | 2002-01-25 | 2009-10-01 | Microsoft Corporation | Video coding |
US10116959B2 (en) | 2002-06-03 | 2018-10-30 | Microsoft Technology Licesning, LLC | Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation |
US8374245B2 (en) | 2002-06-03 | 2013-02-12 | Microsoft Corporation | Spatiotemporal prediction for bidirectionally predictive(B) pictures and motion vector prediction for multi-picture reference motion compensation |
US9185427B2 (en) | 2002-06-03 | 2015-11-10 | Microsoft Technology Licensing, Llc | Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation |
US8873630B2 (en) | 2002-06-03 | 2014-10-28 | Microsoft Corporation | Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation |
US20070014358A1 (en) * | 2002-06-03 | 2007-01-18 | Microsoft Corporation | Spatiotemporal prediction for bidirectionally predictive(B) pictures and motion vector prediction for multi-picture reference motion compensation |
US9571854B2 (en) | 2002-06-03 | 2017-02-14 | Microsoft Technology Licensing, Llc | Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation |
US7224731B2 (en) | 2002-06-28 | 2007-05-29 | Microsoft Corporation | Motion estimation/compensation for screen capture video |
US20040008899A1 (en) * | 2002-07-05 | 2004-01-15 | Alexandros Tourapis | Optimization techniques for data compression |
US7280700B2 (en) | 2002-07-05 | 2007-10-09 | Microsoft Corporation | Optimization techniques for data compression |
US8774280B2 (en) | 2002-07-19 | 2014-07-08 | Microsoft Corporation | Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures |
US8379722B2 (en) | 2002-07-19 | 2013-02-19 | Microsoft Corporation | Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures |
US20060280253A1 (en) * | 2002-07-19 | 2006-12-14 | Microsoft Corporation | Timestamp-Independent Motion Vector Prediction for Predictive (P) and Bidirectionally Predictive (B) Pictures |
US10063863B2 (en) | 2003-07-18 | 2018-08-28 | Microsoft Technology Licensing, Llc | DC coefficient signaling at small quantization step sizes |
US20050013372A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Extended range motion vectors |
US7499495B2 (en) | 2003-07-18 | 2009-03-03 | Microsoft Corporation | Extended range motion vectors |
US20050013497A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Intraframe and interframe interlace coding and decoding |
US7426308B2 (en) | 2003-07-18 | 2008-09-16 | Microsoft Corporation | Intraframe and interframe interlace coding and decoding |
US8687697B2 (en) | 2003-07-18 | 2014-04-01 | Microsoft Corporation | Coding of motion vector information |
US10554985B2 (en) | 2003-07-18 | 2020-02-04 | Microsoft Technology Licensing, Llc | DC coefficient signaling at small quantization step sizes |
US10659793B2 (en) | 2003-07-18 | 2020-05-19 | Microsoft Technology Licensing, Llc | DC coefficient signaling at small quantization step sizes |
US9148668B2 (en) | 2003-07-18 | 2015-09-29 | Microsoft Technology Licensing, Llc | Coding of motion vector information |
US20050041738A1 (en) * | 2003-07-18 | 2005-02-24 | Microsoft Corporation | DC coefficient signaling at small quantization step sizes |
US7738554B2 (en) | 2003-07-18 | 2010-06-15 | Microsoft Corporation | DC coefficient signaling at small quantization step sizes |
US8917768B2 (en) | 2003-07-18 | 2014-12-23 | Microsoft Corporation | Coding of motion vector information |
US20050013498A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Coding of motion vector information |
US9313509B2 (en) | 2003-07-18 | 2016-04-12 | Microsoft Technology Licensing, Llc | DC coefficient signaling at small quantization step sizes |
US20050013365A1 (en) * | 2003-07-18 | 2005-01-20 | Microsoft Corporation | Advanced bi-directional predictive coding of video frames |
US7609763B2 (en) | 2003-07-18 | 2009-10-27 | Microsoft Corporation | Advanced bi-directional predictive coding of video frames |
US20090168890A1 (en) * | 2003-09-07 | 2009-07-02 | Microsoft Corporation | Predicting motion vectors for fields of forward-predicted interlaced video frames |
US20050053143A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Motion vector block pattern coding and decoding |
US7620106B2 (en) | 2003-09-07 | 2009-11-17 | Microsoft Corporation | Joint coding and decoding of a reference field selection and differential motion vector information |
US7623574B2 (en) | 2003-09-07 | 2009-11-24 | Microsoft Corporation | Selecting between dominant and non-dominant motion vector predictor polarities |
US20050053298A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Four motion vector coding and decoding in bi-directionally predicted interlaced pictures |
US20050053300A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Bitplane coding of prediction mode information in bi-directionally predicted interlaced pictures |
US20050053292A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Advanced bi-directional predictive coding of interlaced video |
US7630438B2 (en) | 2003-09-07 | 2009-12-08 | Microsoft Corporation | Direct mode motion vectors for Bi-directionally predicted interlaced pictures |
US7606308B2 (en) | 2003-09-07 | 2009-10-20 | Microsoft Corporation | Signaling macroblock mode information for macroblocks of interlaced forward-predicted fields |
US7664177B2 (en) | 2003-09-07 | 2010-02-16 | Microsoft Corporation | Intra-coded fields for bi-directional frames |
US7606311B2 (en) | 2003-09-07 | 2009-10-20 | Microsoft Corporation | Macroblock information signaling for interlaced frames |
US7680185B2 (en) | 2003-09-07 | 2010-03-16 | Microsoft Corporation | Self-referencing bi-directionally predicted frames |
US7599438B2 (en) | 2003-09-07 | 2009-10-06 | Microsoft Corporation | Motion vector block pattern coding and decoding |
US7590179B2 (en) | 2003-09-07 | 2009-09-15 | Microsoft Corporation | Bitplane coding of prediction mode information in bi-directionally predicted interlaced pictures |
US7577200B2 (en) | 2003-09-07 | 2009-08-18 | Microsoft Corporation | Extended range variable length coding/decoding of differential motion vector information |
US7092576B2 (en) | 2003-09-07 | 2006-08-15 | Microsoft Corporation | Bitplane coding for macroblock field/frame coding type information |
US7852936B2 (en) | 2003-09-07 | 2010-12-14 | Microsoft Corporation | Motion vector prediction in bi-directionally predicted interlaced field-coded pictures |
US7924920B2 (en) | 2003-09-07 | 2011-04-12 | Microsoft Corporation | Motion vector coding and decoding in interlaced frame coded pictures |
US20050053294A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Chroma motion vector derivation |
US7099515B2 (en) | 2003-09-07 | 2006-08-29 | Microsoft Corporation | Bitplane coding and decoding for AC prediction status information |
US20050053137A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Predicting motion vectors for fields of forward-predicted interlaced video frames |
US8064520B2 (en) | 2003-09-07 | 2011-11-22 | Microsoft Corporation | Advanced bi-directional predictive coding of interlaced video |
US20050053140A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Signaling macroblock mode information for macroblocks of interlaced forward-predicted fields |
US20050053293A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Motion vector coding and decoding in interlaced frame coded pictures |
US20050053146A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Prediction mode switching in macroblocks of bi-directionally predicted interlaced frame-coded pictures |
US7567617B2 (en) | 2003-09-07 | 2009-07-28 | Microsoft Corporation | Predicting motion vectors for fields of forward-predicted interlaced video frames |
US20050053296A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Bitplane coding for macroblock field/frame coding type information |
US20050053149A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Direct mode motion vectors for Bi-directionally predicted interlaced pictures |
US20050053156A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Bitplane coding and decoding for AC prediction status information |
US20050053295A1 (en) * | 2003-09-07 | 2005-03-10 | Microsoft Corporation | Chroma motion vector derivation for interlaced forward-predicted fields |
US7529302B2 (en) | 2003-09-07 | 2009-05-05 | Microsoft Corporation | Four motion vector coding and decoding in bi-directionally predicted interlaced pictures |
US7616692B2 (en) | 2003-09-07 | 2009-11-10 | Microsoft Corporation | Hybrid motion vector prediction for interlaced forward-predicted fields |
US7352905B2 (en) | 2003-09-07 | 2008-04-01 | Microsoft Corporation | Chroma motion vector derivation |
US8625669B2 (en) | 2003-09-07 | 2014-01-07 | Microsoft Corporation | Predicting motion vectors for fields of forward-predicted interlaced video frames |
US7317839B2 (en) | 2003-09-07 | 2008-01-08 | Microsoft Corporation | Chroma motion vector derivation for interlaced forward-predicted fields |
US20060159166A1 (en) * | 2005-01-14 | 2006-07-20 | Nader Mohsenian | Use of out of order encoding to improve video quality |
US7826530B2 (en) * | 2005-01-14 | 2010-11-02 | Broadcom Corporation | Use of out of order encoding to improve video quality |
US9077960B2 (en) | 2005-08-12 | 2015-07-07 | Microsoft Corporation | Non-zero coefficient block pattern coding |
US8295343B2 (en) | 2005-11-18 | 2012-10-23 | Apple Inc. | Video bit rate control method |
US8780997B2 (en) | 2005-11-18 | 2014-07-15 | Apple Inc. | Regulation of decode-side processing based on perceptual masking |
US20070116117A1 (en) * | 2005-11-18 | 2007-05-24 | Apple Computer, Inc. | Controlling buffer states in video compression coding to enable editing and distributed encoding |
US20070116115A1 (en) * | 2005-11-18 | 2007-05-24 | Xin Tong | Video bit rate control method |
US9049451B2 (en) | 2005-11-18 | 2015-06-02 | Apple Inc. | Region-based processing of predicted pixels |
US20070116437A1 (en) * | 2005-11-18 | 2007-05-24 | Apple Computer, Inc. | Region-based processing of predicted pixels |
US10382750B2 (en) | 2005-11-18 | 2019-08-13 | Apple Inc. | Region-based processing of predicted pixels |
US8031777B2 (en) | 2005-11-18 | 2011-10-04 | Apple Inc. | Multipass video encoding and rate control using subsampling of frames |
US9706201B2 (en) | 2005-11-18 | 2017-07-11 | Apple Inc. | Region-based processing of predicted pixels |
US8233535B2 (en) | 2005-11-18 | 2012-07-31 | Apple Inc. | Region-based processing of predicted pixels |
US20090003446A1 (en) * | 2007-06-30 | 2009-01-01 | Microsoft Corporation | Computing collocated macroblock information for direct mode macroblocks |
US8254455B2 (en) | 2007-06-30 | 2012-08-28 | Microsoft Corporation | Computing collocated macroblock information for direct mode macroblocks |
US20090147849A1 (en) * | 2007-12-07 | 2009-06-11 | The Hong Kong University Of Science And Technology | Intra frame encoding using programmable graphics hardware |
WO2009073762A1 (en) * | 2007-12-07 | 2009-06-11 | The Hong Kong University Of Science And Technology | Intra frame encoding using programmable graphics hardware |
US20090300203A1 (en) * | 2008-05-30 | 2009-12-03 | Microsoft Corporation | Stream selection for enhanced media streaming |
US8370887B2 (en) | 2008-05-30 | 2013-02-05 | Microsoft Corporation | Media streaming with enhanced seek operation |
US7949775B2 (en) | 2008-05-30 | 2011-05-24 | Microsoft Corporation | Stream selection for enhanced media streaming |
US7925774B2 (en) | 2008-05-30 | 2011-04-12 | Microsoft Corporation | Media streaming using an index file |
US20090300204A1 (en) * | 2008-05-30 | 2009-12-03 | Microsoft Corporation | Media streaming using an index file |
US8819754B2 (en) | 2008-05-30 | 2014-08-26 | Microsoft Corporation | Media streaming with enhanced seek operation |
US20090297123A1 (en) * | 2008-05-30 | 2009-12-03 | Microsoft Corporation | Media streaming with enhanced seek operation |
US8189666B2 (en) | 2009-02-02 | 2012-05-29 | Microsoft Corporation | Local picture identifier and computation of co-located information |
Also Published As
Publication number | Publication date |
---|---|
US4982285A (en) | 1991-01-01 |
DE69031045T2 (en) | 1997-11-06 |
JPH02285816A (en) | 1990-11-26 |
EP0395440A3 (en) | 1991-02-06 |
EP0395440B1 (en) | 1994-09-14 |
DE69012405T2 (en) | 1995-02-23 |
JPH07109990B2 (en) | 1995-11-22 |
DE69031045D1 (en) | 1997-08-14 |
EP0584840B1 (en) | 1997-07-09 |
EP0395440A2 (en) | 1990-10-31 |
EP0584840A2 (en) | 1994-03-02 |
HK1000484A1 (en) | 1998-03-27 |
EP0584840A3 (en) | 1994-03-23 |
HK1000538A1 (en) | 1998-04-03 |
DE69012405D1 (en) | 1994-10-20 |
Similar Documents
Publication | Title |
---|---|
USRE35158E (en) | Apparatus for adaptive inter-frame predictive encoding of video signal |
US5089889A (en) | Apparatus for inter-frame predictive encoding of video signal |
KR950011199B1 (en) | Progressive coding system |
RU2123769C1 (en) | Method and device for encoding images and information medium for storing images |
EP0618731B1 (en) | High efficiency encoding of picture signals |
JP3314929B2 (en) | Video signal encoding circuit |
US5386234A (en) | Interframe motion predicting method and picture signal coding/decoding apparatus |
USRE37222E1 (en) | Video signal transmitting system |
US5438374A (en) | System and method for filtering video signals |
KR100225542B1 (en) | Method and apparatus for image signal encoding |
KR100345968B1 (en) | High efficient coder for picture signal, decoder and recording medium |
US5191414A (en) | Interfield predictive encoder and decoder for reproducing a signal subjected to predictive encoding by encoder into an image signal |
US6256349B1 (en) | Picture signal encoding method and apparatus, picture signal transmitting method, picture signal decoding method and apparatus and recording medium |
JPS61118085A (en) | Coding system and device for picture signal |
JPH07112284B2 (en) | Predictive encoding device and decoding device |
PL175445B1 (en) | Method of encoding moving images, method of decoding moving images, moving image recording medium and moving image encoding apparatus |
KR100256859B1 (en) | Apparatus and method of coding/decoding moving picture and storage medium storing moving picture |
JP3900534B2 (en) | Moving picture coding apparatus and coding method |
EP0541287B1 (en) | Video signal coding apparatus and decoding apparatus |
JPH0514876A (en) | Moving image encoding system |
JP2630022B2 (en) | Motion compensated interframe coding device |
JP3653745B2 (en) | Encoding apparatus and method, and encoding/decoding apparatus and method |
JP2921755B2 (en) | Predictive coding device for interlaced image signal |
JP3168723B2 (en) | Video signal encoding device |
KR100233419B1 (en) | Motion vector transmitting method, motion vector transmitting apparatus, motion vector decoding method and motion vector decoding apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FEPP | Fee payment procedure | Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
| FPAY | Fee payment | Year of fee payment: 8 |
| FPAY | Fee payment | Year of fee payment: 12 |