EP2466893A1 - Video decoding method - Google Patents

Video decoding method

Info

Publication number
EP2466893A1
EP2466893A1 (application EP20120157322 / EP12157322A)
Authority
EP
European Patent Office
Prior art keywords
encoding
image
decoding
interpolation
objective
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP20120157322
Other languages
German (de)
French (fr)
Inventor
Shohei Saito
Masashi Takahashi
Muneaki Yamaguchi
Hiroaki Ito
Koichi Hamada
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Consumer Electronics Co Ltd
Original Assignee
Hitachi Consumer Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Consumer Electronics Co Ltd filed Critical Hitachi Consumer Electronics Co Ltd
Publication of EP2466893A1
Status: Withdrawn

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/44 — Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/105 — Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/137 — Motion inside a coding unit, e.g. average field, frame or block difference
    • H04N19/139 — Analysis of motion vectors, e.g. their magnitude, direction, variance or reliability
    • H04N19/172 — Adaptive coding characterised by the coding unit, the unit being a picture, frame or field
    • H04N19/176 — Adaptive coding characterised by the coding unit, the unit being a block, e.g. a macroblock
    • H04N19/46 — Embedding additional information in the video signal during the compression process
    • H04N19/51 — Motion estimation or motion compensation
    • H04N19/61 — Transform coding in combination with predictive coding

Definitions

  • the present invention relates to techniques of encoding and decoding video data.
  • internationally standardized encoding schemes, as typified by the MPEG (Moving Picture Experts Group) standards, have hitherto been available.
  • the H.264/AVC (Advanced Video Coding) standard, for example, offers especially high encoding efficiency and has been widely utilized as a standard for moving picture compression in terrestrial digital broadcasting, digital video cameras, next-generation recording media, cellular phones and so on.
  • data compressed pursuant to such a standard is decoded in a television receiver, a DVD player or the like, and the decoded video data is displayed on a display.
  • JP-A-2003-333540 discloses frame rate conversion carried out by using a motion amount (motion vector) obtained by decoding an encoding stream, together with the decoded image, in order to eliminate blur in moving pictures and unnatural motion which occur when the decoded video data is displayed.
  • a frame rate conversion process is applied to the decoded video data.
  • the frame rate conversion process presupposes that a motion vector and a difference image are transmitted from the encoding side to the decoding side, and therefore does not contribute to reducing the amount of transmitted data, raising the problem that the improvement in data compression rate is insufficient.
  • the present invention has been made in the light of the above problem and its object is to improve the data compression rate.
  • the video encoding apparatus comprises, for example:
    • a video input unit 101 for inputting videos;
    • an area division unit 102 for dividing an input video into encoding objective areas;
    • an encoding unit 103 for encoding the input video data divided by the area division unit and locally decoding that data;
    • an interpolation image generation unit 104 (the term "interpolation" is used generally to signify both interpolation per se and extrapolation, except where the two are distinguished as described later) for thinning out, in the time direction, the images (encoded images) locally decoded by the encoding unit 103 and generating interpolation images adapted to interpolate the thinned-out images;
    • a mode selection unit 105 for selecting either an encoded image or an interpolated image;
    • an encoded data memory unit 106 for recording encoded image data and flag data; and
    • a variable-length encoding unit 107 for variable-length encoding the data recorded in the encoded data memory unit 106 and outputting an encoding stream.
  • the video input unit 101 rearranges input videos in order of their encoding.
  • order of display is rearranged to order of encoding in accordance with the picture type.
  • an encoding objective frame is divided into encoding objective areas.
  • the size of a divisional area may be a block unit such as a square or rectangular area, or alternatively an object unit extracted by using a method such as the watershed process. The videos divided by the area division unit 102 are transmitted to the encoding unit 103.
  • the encoding unit 103 includes, for example:
    • a subtracter 201 for calculating the difference between an image resulting from division by the area division unit 102 and a predictive image selected by an in-screen/inter-screen predictive image selector 208;
    • a frequency converter/quantizer 202 for frequency-converting and quantizing the difference data generated by the subtracter 201;
    • an inverse quantizer/inverse-frequency converter 203 for inverse-quantizing and inverse-frequency-converting data outputted from the frequency converter/quantizer 202;
    • an adder 204 for adding the data decoded by the inverse quantizer/inverse-frequency converter 203 to the predictive image selected by the in-screen/inter-screen predictive image selector 208;
    • a decoded image memory 205 for storing the sum image from the adder 204;
    • an in-screen predictor 206 for generating a predictive image on the basis of pixels peripheral to the encoding objective area;
    • an inter-screen predictor 207 for generating a predictive image through matching between the input image and decoded frames stored in the decoded image memory 205; and
    • the in-screen/inter-screen predictive image selector 208 for selecting either the in-screen or the inter-screen predictive image.
  • the difference image is frequency-converted by using a DCT (Discrete Cosine Transform), a wavelet transform or the like, and then a coefficient after the frequency conversion is quantized.
  • Data after quantization is transmitted to the mode selection unit 105 and the inverse quantizer/inverse-frequency converter 203.
  • in the inverse quantizer/inverse-frequency converter 203, a process inverse to that carried out in the frequency converter/quantizer 202 is conducted.
  • the adder 204 adds a predictive image selected by the in-screen/inter-screen predictive image selector 208 to a difference image generated through the inverse quantization/inverse-frequency conversion by means of the inverse quantizer/inverse-frequency converter 203, generating a decoded image.
  • the thus generated decoded image is stored in the decoded image memory 205.
  • in the in-screen predictor 206, a predictive image is generated by using pixels of peripheral areas, finished with decoding, which have been stored in the decoded image memory 205.
  • in the inter-screen predictor 207, a predictive image is generated through a matching process between data inside frames finished with decoding, which have been stored in the decoded image memory 205, and the input image.
  • the decoded image memory 205 then transmits the decoded image to the interpolation image generation unit 104.
  • the interpolation image generation unit 104 includes, for example, an interpolation frame decider 301, a motion searcher 302 and an interpolation pixel generator 303.
  • in the interpolation frame decider 301, a frame to be interpolated (interpolation frame) and a frame to be normally encoded without being subjected to interpolation (encoding frame) are determined in a unit of frames on the basis of, for example, the picture type.
  • Fig. 4 shows a specific example of interpolation frame determination by the interpolation frame decider 301 in the interpolation image generation unit 104.
  • the abscissa represents the order in which images are input during encoding and the order in which they are displayed during decoding.
  • the order of the encoding process during encoding and the order of the decoding process during decoding are as shown in Fig. 4. More particularly, a B picture undergoes the encoding process and the decoding process after a P picture whose display order is later than that of the B picture; a minimal sketch of this reordering follows.
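A minimal sketch of the display-order to encoding-order rearrangement performed by the video input unit 101, assuming an IBBP picture structure; the frame labels and GOP pattern are illustrative, not taken from the patent. All code sketches in this document use Python.

```python
def display_to_encoding_order(frames):
    """Reorder frames so that each I/P picture precedes the B pictures
    that reference it, mirroring the reordering described for Fig. 4."""
    encoded, pending_b = [], []
    for picture_type, frame in frames:
        if picture_type == "B":
            pending_b.append((picture_type, frame))  # B waits for its later reference
        else:  # I or P: emit it, then flush the waiting B pictures
            encoded.append((picture_type, frame))
            encoded.extend(pending_b)
            pending_b.clear()
    return encoded

# Display order I0 B1 B2 P3 B4 B5 P6 becomes encoding order I0 P3 B1 B2 P6 B4 B5.
print(display_to_encoding_order(
    [("I", 0), ("B", 1), ("B", 2), ("P", 3), ("B", 4), ("B", 5), ("P", 6)]))
```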
  • the interpolation pixel generator 303 in embodiment 1 generates, on the basis of a plurality of pictures subjected to the encoding process in advance (during decoding, subjected to the decoding process in advance), the pixels of frames representing pictures each of which intervenes between the plurality of pictures in order of display.
  • the interpolation pixel generation process by the interpolation pixel generator 303 according to embodiment 1 is a process suited for the B picture which is preceded and succeeded in order of display by respective pictures finished with encoding or decoding during encoding process or decoding process.
  • the interpolation frame decider 301 determines, for example, the B picture as an interpolation frame and the I picture and P picture as encoding objective frames, as shown in Fig. 4. Then, it is possible for the interpolation pixel generator 303 to generate a pixel positional value in terms of matrix element coordinates (hereinafter simply referred to as a pixel value) of the B picture on the basis of the I picture and P picture which are forwardly and backwardly closest to the B picture, respectively.
  • the number of B pictures to be inserted between I or P pictures may be increased when a difference in brightness or color between frames is calculated and found to be small, indicating a high correlation between the frames.
  • the B pictures may be interpolation frames and the I picture and P picture may be encoding objective frames.
  • the interpolation pixel generator 303 may generate a pixel value of each B picture through the interpolation process on the basis of the I picture and P picture which are forwardly and backwardly closest to the B picture.
  • the motion searcher 302 has a predictive error calculator 601 and a motion vector decider 602. After the interpolation frame decider 301 has determined an interpolation frame, the motion searcher 302 makes a search for a motion necessary to calculate the pixel value of the interpolation frame.
  • as a motion search method, an area matching method in wide general use may be utilized.
  • the predictive error calculator 601 first determines, in respect of an interpolation objective pixel 501 of the interpolation frame n, a predictive error absolute value sum SAD_n(x,y) indicated by equation (1), by using a pixel value f_{n-1}(x-dx, y-dy) of a pixel 500 inside the encoding objective frame n-1 which precedes the interpolation frame n in order of display and a pixel value f_{n+1}(x+dx, y+dy) of a pixel 502 inside the encoding objective frame n+1 which succeeds the interpolation frame n in order of display. A code sketch follows the symbol definitions below.
  • the pixels 500 and 502 are so determined as to lie on the same straight line in space-time as the interpolation objective pixel 501 at (x,y).
  • R represents the image area to which the interpolation objective pixel belongs;
  • n represents a frame number;
  • x, y represent pixel coordinates;
  • dx, dy and i, j represent inter-pixel displacements;
  • a, b represent the index of the image area to which the interpolation objective pixel belongs.
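Equation (1) itself is not reproduced in this text; from the symbol definitions above and the pattern of equations (4) and (5) later on, it is assumed to take the bidirectional form $\mathrm{SAD}_n(x,y) = \sum_{i,j \in R} \left| f_{n+1}(x+dx+i,\, y+dy+j) - f_{n-1}(x-dx+i,\, y-dy+j) \right|$. The following minimal sketch implements this assumed form; the function names, block size and search range are illustrative, and boundary checks are omitted for brevity.

```python
import numpy as np

def sad_bidirectional(f_prev, f_next, x, y, dx, dy, half=2):
    """Assumed form of equation (1): absolute-difference sum over area R
    between the block around (x+dx, y+dy) in frame n+1 and the block
    around (x-dx, y-dy) in frame n-1, both lying on one space-time line
    through the interpolation objective pixel (x, y)."""
    total = 0.0
    for j in range(-half, half + 1):
        for i in range(-half, half + 1):
            total += abs(float(f_next[y + dy + j, x + dx + i])
                         - float(f_prev[y - dy + j, x - dx + i]))
    return total

def decide_motion_vector(f_prev, f_next, x, y, search=4):
    """Sketch of the motion vector decider 602: find the (dx0, dy0)
    minimizing SAD_n and return it with the minimum error."""
    best = (float("inf"), 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = sad_bidirectional(f_prev, f_next, x, y, dx, dy)
            if err < best[0]:
                best = (err, dx, dy)
    return best  # (minimum SAD, dx0, dy0)

# Toy usage on flat frames; real inputs would be decoded luma planes.
f_prev = np.zeros((64, 64), dtype=np.uint8)
f_next = np.full((64, 64), 10, dtype=np.uint8)
print(decide_motion_vector(f_prev, f_next, x=32, y=32))
```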
  • the motion vector decider 602 determines a combination (dx0, dy0) of values by which the predictive error absolute value sum SAD_n(x,y) in equation (1) is minimized, and calculates a motion vector connecting the pixel at coordinates (x-dx0, y-dy0) inside the encoding objective frame n-1 which precedes the interpolation frame n in order of display and the pixel at (x+dx0, y+dy0) inside the encoding objective frame n+1 which succeeds the interpolation frame n in order of display.
  • the interpolation pixel generator 303 calculates an average of the pixel value f_{n-1}(x-dx0, y-dy0) of the pixel inside the encoding objective frame preceding the interpolation frame and the pixel value f_{n+1}(x+dx0, y+dy0) of the pixel inside the encoding objective frame succeeding the interpolation frame, to generate a pixel value f_n(x,y) of the interpolation objective pixel (x,y) by using equation (2).
  • a pixel of the interpolation frame can be generated from pixel values inside the encoding objective frames which are positioned before and after the interpolation objective frame in order of display, respectively.
  • in the above, the interpolation pixel value is calculated as a simple average, but the interpolation pixel calculation method according to the present invention is not limited to the simple average. For example, if the time distance between encoding objective frame n-1 and interpolation frame n is not equal to the time distance between interpolation frame n and encoding objective frame n+1, the respective pixel values may be multiplied by weight coefficients complying with the respective time distances and the resulting products then added together; a sketch of this appears after the next item.
  • any method may be employed provided that the pixel value can be calculated from a function having a variable represented by the pixel value f_{n-1}(x-dx0, y-dy0) on the encoding objective frame n-1 and a variable represented by the pixel value f_{n+1}(x+dx0, y+dy0) on the encoding objective frame n+1.
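A minimal sketch of the pixel generation of equation (2) and its weighted generalization; equation (2) is assumed to be the simple average f_n(x,y) = (f_{n-1}(x-dx0, y-dy0) + f_{n+1}(x+dx0, y+dy0)) / 2, and the weight parameters are illustrative.

```python
def interpolate_pixel(f_prev, f_next, x, y, dx0, dy0, w_prev=0.5, w_next=0.5):
    """Equation (2) when w_prev = w_next = 0.5 (simple average); unequal
    weights model the time-distance-weighted variant described above."""
    p = float(f_prev[y - dy0, x - dx0])  # pixel in preceding frame n-1
    q = float(f_next[y + dy0, x + dx0])  # pixel in succeeding frame n+1
    return w_prev * p + w_next * q
```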
  • the mode selection unit 105 decides which is to be selected: the encoding objective image generated by the encoding unit 103 or the interpolation image formed of the interpolation pixels generated by the interpolation image generation unit 104.
  • the mode selection unit 105 calculates, pursuant to, for example, equation (3), a difference f'(SAD_n(a,b)) between the predictive error calculated by the motion searcher 302 and the predictive errors of areas peripheral to the encoding objective area (S701).
  • n represents the frame number;
  • a, b represent the index of the image area to which the interpolation objective pixel belongs;
  • k, l represent the offsets between the index of a peripheral image area and the index of the image area to which the interpolation objective pixel belongs.
  • if the condition is met in step S702, the interpolation image is selected (S703). At that time, the process ends without outputting the header information indicating the kind of the prediction area, the motion vector or the predictive error data (S705). On the other hand, if the condition is not met in step S702, the encoding objective image is selected (S704). At that time, the header information indicating the kind of the prediction area, the motion vector and the predictive error data are outputted to the encoded data memory unit 106, and then the process ends; a sketch of this decision follows.
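A minimal sketch of the S701-S705 mode selection, assuming the condition of step S702 matches the decoder-side decision described later at S1206 (minimum SAD below a threshold S1, or the difference f' from the peripheral predictive errors above a threshold S2); the threshold values are illustrative, not from the patent.

```python
S1, S2 = 500.0, 2000.0  # illustrative threshold values

def select_mode(min_sad, f_prime):
    """Return 'interpolation' (S703: no header, motion vector or predictive
    error data is emitted) or 'encoding' (S704: header information, motion
    vector and predictive error data go to the encoded data memory unit 106)."""
    if min_sad < S1 or f_prime > S2:
        return "interpolation"
    return "encoding"
```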
  • the header information indicative of the kind of predictive area, motion vector and predictive error data are included in an encoding stream as in the case of the normal encoding technique.
  • a decoded image can be generated without resort to the above data through the interpolation process explained in connection with Fig. 15 and therefore, these pieces of data are not included in the encoding stream.
  • the encoding data amount can be reduced, realizing improvements in compression rate.
  • within one frame, a partial area may be selected as an encoding image and the other areas may be selected as interpolation images.
  • the area concerned may be in a unit of block, for example.
  • a shadowed area indicates an area in which an encoding objective image is selected and an unshadowed area indicates an area in which an interpolation image is selected.
  • Illustrated in Fig. 8A is a frame encoded in accordance with the conventional encoding technique. Since no interpolation image area exists in the conventional encoding technique, all areas provide encoding objective images. In the example of Fig. 8A , all of 24 areas are dedicated to encoding images. Then, in the conventional encoding technique, header information indicative of the kind of predictive area and information such as motion vector and predictive error data are stored, in respect of all of the frames in Fig. 8A as a rule, in an encoding stream.
  • the encoding stream for frames encoded with the conventional encoding technique is illustrated as shown in Fig. 8B .
  • the header information indicative of the kind of predictive area and the information such as motion vector and predictive error data are stored in the encoding stream.
  • Fig. 8C an example of a frame encoded in accordance with the video encoding apparatus and method according to embodiment 1 is as exemplified in Fig. 8C .
  • encoding objective images are selected in only 8 out of 24 areas and interpolation images are selected in the remaining 16 areas.
  • an encoding stream corresponding to the Fig. 8C example is formed as illustrated in Fig. 8D .
  • the header information indicative of the kind of predictive area and the information such as motion vector and predictive error data are included in an encoding stream every 8 areas representing encoding objective areas.
  • the amount of encoding data to be included in the encoding stream can be reduced as compared to that in the conventional encoding technique, thereby materializing improvements in the encoding compression rate.
  • referring to Figs. 9A, 9B and 9C and Figs. 8A, 8B, 8C and 8D, a process for encoding a motion vector executed in the variable-length encoding unit 107 of the video encoding apparatus according to embodiment 1 of the invention will be described.
  • a motion predictive vector is calculated from a median of motion vectors in areas peripheral of the encoding objective area and only a difference between the motion vector in the encoding objective area and the motion predictive vector is handled as encoding data, thus reducing the data amount.
  • a predictive motion vector (PMV) is calculated
  • a difference vector (DMV) between a motion vector (MV) in the encoding objective area and the predictive motion vector (PMV) is calculated and the difference vector (DMV) is treated as encoding data.
  • encoding objective image areas and interpolation image areas coexist as shown in Fig. 8C and therefore, for calculation of the predictive motion vector (PMV), a method different from the conventional encoding technique under the H.264 standard is adopted.
  • a predictive motion vector (PMV) for an encoding objective area X is calculated by using a median of motion vectors used for encoding processes in areas A, B and C which are close to the encoding objective area X and which have been encoded in advance of the encoding objective area X.
  • a process in common to the encoding and decoding processes needs to be executed.
  • the motion vector encoding process in embodiment 1 of the invention is a process to be applied to only an encoding objective image area out of encoding objective image area and interpolation image area.
  • the interpolation image area a motion search is carried out for interpolation image on the decoding side and therefore the motion vector encoding process is unnecessary.
  • a predictive motion vector is calculated by using a median of the motion vectors (MV_A, MV_B, MV_C) used for the encoding process in the peripheral areas A, B and C, as in the case of the conventional H.264 standard.
  • interpolation image areas are included in areas peripheral of the encoding objective area X as shown in Figs. 9B and 9C .
  • the motion vector is not encoded for the interpolation image area, that is, the motion vector used in the encoding process is not transmitted to the decoding side.
  • This accounts for the fact that with the motion vector used in the encoding process utilized for calculation of a predictive motion vector (PMV), calculation of the predictive motion vector (PMV) cannot be carried out in decoding. Therefore, calculation of a predictive motion vector (PMV) is executed in embodiment 1 as below.
  • for an interpolation image area, a motion vector used in the interpolation image generation process, that is, one of the motion vectors (MVC_A, MVC_B, MVC_C) calculated by the motion searcher 302 of the interpolation image generation unit 104, is used. If the motion search in the motion searcher 302 is carried out in a unit of pixels, a plurality of motion vectors exist in each area, and so the vector contributed by that area is calculated as a mean value of the plural motion vectors. Then, a median of the motion vectors (MVC_A, MVC_B, MVC_C) is calculated as the predictive motion vector (PMV).
  • a motion vector MV used in the encoding process is used for the encoding image area and a motion vector MVC used in the interpolation image generation process is used for the interpolation image area and a median of these motion vectors is calculated as a predictive motion vector (PMV).
  • the peripheral areas A and C are encoding objective image areas and the peripheral area B is an interpolation image area.
  • a median of the motion vectors (MV_A, MVC_B, MV_C) is calculated as a predictive motion vector (PMV).
  • a motion vector of the encoding image area may be selected preferentially and used.
  • the MVC_B of the peripheral area B representing the interpolation image area is not used, but a motion vector MV_D used in the encoding process of peripheral area D is used.
  • a median of the motion vectors (MV_A, MV_C, MV_D) is calculated as a predictive motion vector (PMV).
  • if only two of the peripheral areas are encoding objective image areas, an average value of the motion vectors MV of the two areas may be used as the predictive motion vector (PMV). If only one of the peripheral areas A, B, C and D is an encoding objective image area, that one motion vector MV may be used by itself as the predictive motion vector (PMV); a sketch of the PMV calculation follows.
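A minimal sketch of the PMV calculation of Figs. 9A to 9C. Vectors are (x, y) tuples; each peripheral area carries its encoding-process MV or, for interpolation image areas, the MVC from the interpolation image generation process. The dictionary encoding of an area is an illustrative assumption; the component-wise median matches H.264-style median prediction.

```python
def median_vector(vectors):
    """Component-wise median of three motion vectors."""
    xs = sorted(v[0] for v in vectors)
    ys = sorted(v[1] for v in vectors)
    return (xs[1], ys[1])

def predictive_motion_vector(areas):
    """areas: list of three dicts for peripheral areas A, B, C, e.g.
    {"mode": "encoding", "mv": (2, -1)} or {"mode": "interp", "mvc": (1, 0)}.
    Uses MV for encoding areas and MVC for interpolation areas (Fig. 9B)."""
    candidates = [a["mv"] if a["mode"] == "encoding" else a["mvc"]
                  for a in areas]
    return median_vector(candidates)

# Example for Fig. 9C: A and C are encoding areas, B is an interpolation area.
pmv = predictive_motion_vector([
    {"mode": "encoding", "mv": (2, -1)},   # area A
    {"mode": "interp",   "mvc": (1, 0)},   # area B
    {"mode": "encoding", "mv": (3, -2)},   # area C
])
print(pmv)  # (2, -1)
```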
  • the data compression rate can be improved.
  • the video decoding apparatus comprises, for example:
    • a variable-length decoding unit 1001 for decoding encoded data transmitted from the encoding side;
    • a parsing unit 1002 for parsing the data subjected to variable-length decoding;
    • a mode deciding unit 1009 for deciding, on the basis of the result of the parsing by the parsing unit 1002 and the result of the predictive error calculation by an interpolation image generation unit 1007, whether a decoding process or an interpolation image generation process is to be carried out;
    • an inverse quantizing/inverse-frequency converting unit 1003 for applying inverse quantization/inverse-frequency conversion to data transmitted from the parsing unit 1002;
    • an adder 1004 for adding data outputted from the inverse quantizing/inverse-frequency converting unit 1003 to a predictive image generated by a motion compensation unit 1006;
    • a decoded image memory unit 1005 for storing decoded image data;
    • the motion compensation unit 1006 for generating a predictive image by using header information and a motion vector;
    • the interpolation image generation unit 1007 for generating interpolation images; and
    • an output unit 1008 for outputting the decoded or interpolation images.
  • the interpolation image generation unit 1007 includes a motion searcher 1101 and an interpolation pixel generator 1102.
  • the motion searcher 1101 performs a process similar to that by the motion searcher 302 in Fig. 3 and the interpolation pixel generator 1102 performs a process similar to that by the interpolation pixel generator 303 in Fig. 3 .
  • the motion searcher 1101 has a predictive error calculator 601 and a motion vector decider 602 and as in the course of encoding process, executes a predictive error calculation process and a motion vector calculation process.
  • the predictive error calculation process and motion vector calculation process and the interpolation image generation process by means of the motion searcher 302 and interpolation pixel generator 303 are the same as those already described previously in connection with Fig. 5 and will not be described herein.
  • referring to Fig. 12, the flow of the process in the video decoding method conducted with the video decoding apparatus according to embodiment 1 will be described.
  • the process proceeds in respect of, for example, each area.
  • the encoding stream is decoded by means of the variable-length decoding unit 1001 and is then sent to the parsing unit 1002 (S1201).
  • in the parsing unit 1002, the decoded stream data is sorted by parsing, and the encoded data is transmitted to the inverse quantizing/inverse-frequency converting unit 1003 and the interpolation image generation unit 1007 (S1202).
  • the picture type of the encoding objective frame is examined to decide whether the encoding objective frame is an encoding frame or an interpolation frame (S1203). If the encoding objective frame is an interpolation frame, the interpolation image generation unit 1007 performs a motion search process in respect of a decoding objective area by using plural decoded image areas which precede and succeed the objective frame in order of display time (S1204). Through a process similar to that effected by the motion searcher 302 in Fig. 3, the motion searcher 1101 calculates a minimum predictive error absolute value sum SAD_n(a,b) and determines a motion vector.
  • the mode decider 1009 calculates a difference f'(SAD_n(a,b)) between the predictive error absolute value sum calculated by the motion searcher 1101 and the predictive error absolute value sums peripheral to the decoding objective area (S1205). Subsequently, the mode decider 1009 decides whether the minimum predictive error absolute value sum SAD_n(a,b) calculated by the motion searcher 1101 is less than a threshold value S1 or whether the difference f'(SAD_n(a,b)) from the peripheral predictive error absolute value sums is greater than a threshold value S2 (S1206).
  • if the condition of S1206 is met, the decoding objective area is determined to be an interpolation image area. Otherwise, the decoding objective area is determined to be an area which has been encoded as an encoding objective image area.
  • the interpolation pixel generator 1102 of the interpolation image generation unit 1007 generates interpolation pixels, and an image is generated through the interpolation image generation process and stored in the decoded image memory unit 1005 (S1207).
  • the inverse quantizing/inverse-frequency converting unit 1003 applies an inverse quantization/inverse-frequency conversion process to the encoded data obtained from the parsing unit 1002 and decodes difference data (S1208). Thereafter, the motion compensation unit 1006 conducts a motion compensation process by using the header information obtained from the parsing unit 1002 and the motion vector, generating a predictive image (S1209).
  • the adder 1004 adds the predictive image generated by the motion compensation unit 1006 and the difference data outputted from the inverse quantizing/inverse-frequency converting unit 1003 to generate a decoded image which in turn is stored in the decoded image memory unit 1005 (S1210).
  • the output unit 1008 outputs the interpolation image generated in step S1207 or the decoded image generated in step S1210 (S1211), ending the process.
  • the motion compensation unit 1006 calculates a predictive motion vector (PMV) on the basis of motion vectors of areas peripheral to the decoding objective area, adds it to the difference vector (DMV) stored in the encoded data to generate a motion vector (MV) of the decoding objective area, and performs a motion compensation process on the basis of the motion vector (MV).
  • the calculation process for the predictive motion vector (PMV) can be executed through a process similar to the calculation process for the predictive motion vector (PMV) on the encoding side, as has been explained in connection with Figs. 9A to 9C, and will not be described herein.
  • data encoded through the encoding method capable of improving the data compression rate as compared to the conventional encoding apparatus and method can be decoded suitably.
  • encoded data improved in data compression rate can be generated and the encoded data can be decoded preferably.
  • Embodiment 2 of the invention differs from embodiment 1 in that flag data indicating whether an encoding objective image or an interpolation image is selected in respect of each encoding objective area on the encoding side is included in the encoding stream. This enables the decoding side to easily make a decision as to whether an encoding image or an interpolation image is selected in respect of the decoding objective area. As a result, the process during decoding can be simplified, reducing the amount of processing. Embodiment 2 will be described in greater detail hereinafter.
  • the mode selection unit 105 in Fig. 1 in the video encoding apparatus of embodiment 1 is replaced with a mode selection unit 1304 in Fig. 13 .
  • the construction and operation of the remaining components are the same as those in embodiment 1 and will not be described herein.
  • a difference absolute value calculator 1301 calculates a difference between an input video divided by the area division unit 102 and an interpolation image generated by the interpolation image generation unit 104.
  • a difference absolute value calculator 1302 calculates a difference between the input video divided by the area division unit 102 and an encoding objective image generated by the encoding unit 103.
  • in a decider 1303, the smaller of the difference absolute values calculated by the difference absolute value calculators 1301 and 1302 is selected, and a decision flag (mode decision flag) is outputted accordingly.
  • the mode decision flag may be "0" when the encoding objective image is selected and "1" when the interpolation image is selected.
  • Illustrated in Figs. 14A and 14B is an example of data stored in the encoded data memory unit 106 of the video encoding apparatus in embodiment 2.
  • flag data of one bit indicating which of the encoding objective image and the interpolation image is selected in respect of each encoding objective area is added. More particularly, in the encoding stream outputted from the video encoding apparatus of embodiment 2, flag data indicating which of the encoding objective image and the interpolation image is selected in respect of each encoding objective area is included.
  • the flag data indicating which of the encoding image and the interpolation image is selected in respect of each encoding objective area is included in the output encoding stream. This enables the decoding side to easily decide, in respect of the decoding objective area, whether the encoding objective image area or the interpolation image area is selected. Accordingly, the process during decoding can be simplified and the processing amount can be reduced.
  • an encoding stream in which flag data indicating whether an encoding objective image or an interpolation image is selected in respect of each encoding objective area is included, as shown in Figs. 14A and 14B, is inputted to the video decoding apparatus according to embodiment 2.
  • the encoding stream is decoded by means of the variable-length decoding unit 1001 and sent to the parsing unit 1002 (S1501).
  • in the parsing unit 1002, the decoded stream data is sorted by parsing; the header information and a mode decision flag are transmitted to the mode decision unit 1009 whereas the encoded data is transmitted to the inverse quantizing/inverse-frequency converting unit 1003 (S1502).
  • in accordance with the picture type of the encoding objective frame, it is decided whether the frame is an encoding frame or an interpolation frame (S1503).
  • the mode decision unit 1009 decides in respect of a decoding objective area whether the mode decision flag transmitted from the parsing unit 1002 is 1 or 0 (S1504). With the mode decision flag being 1 (indicative of an area for which an interpolation image is selected), the decoding objective area is determined to correspond to an interpolation image area. When the mode decision flag is 0 (indicating an area for which an encoding image is selected), the decoding objective area is determined to correspond to an area which has been encoded as an encoding objective image area.
  • the motion searcher 1101 of interpolation image generation unit 1007 makes a motion search (S1505).
  • the interpolation pixel generator 1102 generates interpolation pixels, and an image is generated through the interpolation image generation process and stored in the decoded image memory unit 1005 (S1506).
  • the inverse quantizing/inverse-frequency converting unit 1003 applies an inverse quantization/inverse-frequency conversion process to the encoded data acquired from the parsing unit 1002 and decodes difference data (S1507).
  • the motion compensation unit 1006 executes a motion compensation process by using the header information captured from the parsing unit 1002 and the motion vector and creates a predictive image (S1508).
  • the adder 1004 adds the predictive image generated by the motion compensation unit 1006 and the difference data delivered out of the inverse quantizing/inverse-frequency converting unit 1003, generating a decoded image which in turn is stored in the decoded image memory unit 1005 (S1509).
  • the output unit 1008 outputs the interpolation image generated in step S1506 or the decoded image generated in step S1509 (S1510), ending the process.
  • by using the mode decision flag, it can be decided whether the decoding objective area corresponds to an area for which the encoding image is selected or an area for which the interpolation image is selected. Accordingly, the process during decoding can be simplified and the processing amount can be reduced.
  • encoded data improved in data compression rate can be generated and the encoded data can be decoded preferably.
  • in embodiment 1, on the basis of a plurality of pictures which undergo the encoding process in advance (during decoding, the decoding process in advance), the interpolation image generation unit 104 generates, through the interpolation process (signifying interpolation per se), pixels of a frame representing a picture which is preceded and succeeded in order of display by the plurality of pictures.
  • in embodiment 3, a process distinguished from interpolation per se (hereinafter referred to as extrapolation) is added, through which, on the basis of a plurality of pictures which undergo the encoding process in advance (during decoding, the decoding process in advance), pixels of a frame representing a picture preceding or succeeding the plurality of pictures in order of display are generated.
  • the video encoding apparatus is constructed by adding, to the interpolation image generation unit 104 of the video encoding apparatus of embodiment 1, the operation of the interpolation image generation process based on forward and backward extrapolation, and an extrapolation direction decision unit 1805 (see Fig. 18) is added so as to follow the interpolation image generation unit 104.
  • the construction and operation of the remaining components are similar to those in embodiment 1 and will not be described herein.
  • the extrapolation process to be added herein is sorted into two types, namely, a forward extrapolation process and a backward extrapolation process. With respect to the respective types, operation in the interpolation image generation unit 104 of video encoding apparatus will be described.
  • a motion search to be described below is carried out in the motion searcher 302.
  • a predictive error absolute value sum SAD_n(a,b) indicated in equation (4) is determined.
  • a pixel value f_{n-2}(x-2dx, y-2dy) of a pixel 1700 on the encoding frame 1601 and a pixel value f_{n-1}(x-dx, y-dy) of a pixel 1701 on the encoding frame 1602 are used.
  • R represents the objective area to which the extrapolation objective pixel belongs. The pixel 1700 on encoding frame 1601 and the pixel 1701 on encoding frame 1602 are so determined as to lie on the same straight line in space-time as the extrapolation objective pixel 1702 on the extrapolation objective frame 1603.
  • SAD n a ⁇ b ⁇ i , j ⁇ R f n - 1 ⁇ x - dx + i , y - dy + j - f n - 2 ⁇ x - 2 ⁇ dx + i , y - 2 ⁇ dy + j
  • the above-described forward extrapolation process is applicable provided that the two preceding encoding frames in order of display have been encoded/decoded in advance; therefore, it can also be applied to the case of an extrapolation objective frame 1603 (P picture) as shown at (b) in Fig. 16.
  • an extrapolation image of the extrapolation objective frame 1603 is generated by using the two encoding frames 1604 and 1605 which succeed the extrapolation objective frame 1603 in order of display.
  • a motion search to be described below is carried out in the motion searcher 302.
  • a predictive error absolute value sum SAD_n(a,b) indicated in equation (5) is determined.
  • a pixel value f_{n+1}(x+dx, y+dy) of a pixel 1711 on the encoding frame 1604 and a pixel value f_{n+2}(x+2dx, y+2dy) of a pixel 1712 on the encoding frame 1605 are used.
  • R represents the objective area to which the extrapolation objective pixel belongs.
  • the pixel 1711 on encoding frame 1604 and the pixel 1712 on encoding frame 1605 are so determined as to lie on the same straight line in space-time as the extrapolation objective pixel 1710 on the extrapolation objective frame 1603.
  • SAD n a ⁇ b ⁇ i , j ⁇ R f n - 1 ⁇ x + dx + i , y + dy + j - f n + 1 ⁇ x + 2 ⁇ dx + i , y + 2 ⁇ dy + j
  • in the interpolation image generation unit 104, the aforementioned two kinds of extrapolation process and the interpolation process similar to that in embodiment 1 are carried out, generating three kinds of interpolation images.
  • then, a motion search method (search direction) is decided as described below.
  • the process in the interpolation direction decision unit 1805 will be described. Firstly, a difference absolute value between an interpolation image generated by performing a bi-directional motion search described in embodiment 1 and an input image is calculated by means of a difference absolute value calculator 1801. Subsequently, a difference absolute value between an interpolation image generated by performing a forward motion search described in the present embodiment and the input image is calculated by means of a difference absolute value calculator 1802. Also, a difference absolute value between an interpolation image generated by performing a backward motion search and the input image is calculated by means of a difference absolute value calculator 1803.
  • a motion search direction decider 1804 selects the interpolation image for which the difference between the input image and the interpolation image is smallest and outputs the selected result as a motion search direction decision flag.
  • the motion search direction decision flag may provide data of 2 bits including 00 indicative of bi-direction, 01 indicative of forward direction and 10 indicative of backward direction. The thus generated motion search direction decision flag is transmitted to the encoded data memory unit 106.
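A minimal sketch of the 2-bit motion search direction decision flag; the numeric values follow the text, while the type and names are illustrative.

```python
from enum import IntEnum

class SearchDirection(IntEnum):
    """2-bit motion search direction decision flag values from the text."""
    BIDIRECTIONAL = 0b00  # interpolation from preceding and succeeding frames
    FORWARD       = 0b01  # extrapolation from two preceding frames (eq. (4))
    BACKWARD      = 0b10  # extrapolation from two succeeding frames (eq. (5))
```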
  • Illustrated in Figs. 19A and 19B is an example of data to be stored in the encoded data memory unit 106.
  • flag data for deciding from which direction the interpolation image is generated is added for each interpolation pixel area.
  • the flag data indicating the interpolation direction in which the interpolation image is generated is included for each area for which the interpolation image is selected.
  • a P picture can also be made an interpolation objective frame, thus further decreasing the data amount.
  • the forward extrapolation for generating an interpolation image from two forward encoding objective frames and the backward extrapolation for generating an interpolation image from two backward encoding objective frames as well can be executed and improvements in picture quality can therefore be expected.
  • when the interpolation image is generated only bi-directionally, picture quality degrades considerably in an area where the background is concealed by the foreground and cannot be seen (an occlusion area); through the forward or backward extrapolation, this quality degradation problem can be solved.
  • the video encoding apparatus and method according to embodiment 3 include the flag data indicating the interpolation direction used for generating an interpolation image in the output encoding stream. This ensures that the kinds of interpolation process executed on the decoding side can be increased; in addition to the B picture, the P picture can also be an interpolation objective frame, making it possible to further reduce the data. Further, high picture quality of the B picture interpolation image can be achieved.
  • the motion search unit 2005 in the decoding apparatus of embodiment 3 includes a motion search method decider 2001, a motion searcher 2002, a predictive error calculator 2003 and a motion vector decider 2004.
  • the motion search method decider 2001 determines a search method of bi-directional, forward direction or backward direction motion in accordance with information of a motion search direction decision flag sent from the parsing unit 1002. After a motion search method has been determined, motion search, predictive error calculation and motion vector decision are carried out in the motion searcher 2002, predictive error calculator 2003 and motion vector decider 2004, respectively.
  • the bi-directional search can be conducted similarly to that in embodiment 1 and the forward direction search and backward direction search can be processed similarly to those by the video encoding apparatus of the present embodiment.
  • variable-length decoding unit 1001 decodes an encoding stream in a variable-length fashion and sends it to the parsing unit 1002 (S2101).
  • the parsing unit 1002 sorts the decoded stream data by parsing and transmits the encoded data to the inverse quantizing/inverse-frequency converting unit 1003 and the interpolation image generation unit 1007 (S2102). Subsequently, the parsing unit 1002 decides the picture type of the encoding objective frame (S2103).
  • the motion search method decider 2001 decides a motion search method using one of motion search directions of bi-direction, forward direction and backward direction, on the basis of a motion search direction decision flag transmitted from the parsing unit 1002 (S2104).
  • a motion search is carried out in the motion searcher 2005 (S2105).
  • the motion searcher 2005 calculates a predictive error absolute value sum and a motion vector and, in addition, through a process similar to that executed by the motion searcher 1101 of embodiment 1, calculates a predictive error difference absolute value sum (S2106).
  • the interpolation image generator 1102 generates an interpolation pixel through a process similar to that in embodiment 1 (S2108).
  • the inverse quantizing/inverse-frequency converting unit 1003 carries out inverse quantization/inverse-frequency conversion, the result is added with data from the motion compensation unit 1006 and the resulting sum data is stored in the decoded image memory unit 1005.
  • the motion compensation unit 1006 carries out motion compensation (S2109).
  • the motion compensation unit 1006 performs motion compensation, generates a decoded image and stores it in the decoded image memory unit 1005 (S2110).
  • the decoded image or the interpolation image generated through the above method is outputted to the video display unit 1008 (S2111), thus ending the process.
  • a plurality of kinds of interpolation process can be employed adaptively by performing the process using the motion search direction decision flag included in the encoding stream. Further, it is sufficient to execute the motion search process on the decoding side only once for the plural kinds of interpolation process, and therefore the processing amount can be decreased to a great extent.
  • encoding data improved in data compression rate can be generated and the encoding data can be decoded suitably.
  • the video encoding apparatus of embodiment 4 adds to the video encoding apparatus of embodiment 1 the mode selection unit 1304 of embodiment 2 and the motion searcher 302 and interpolation direction decision unit 1805 of embodiment 3. Namely, the video encoding apparatus of embodiment 4 outputs an encoding stream including a mode decision flag and a motion search direction flag.
  • an example of data to be stored in the encoded data memory unit 106 in embodiment 4 is illustrated in Figs. 22A and 22B.
  • a mode decision flag for deciding whether the area is an encoding image area or an interpolation image area is added and further, in the interpolation image area, a motion search direction decision flag is added which indicates whether the bi-directional, forward or backward motion search method is to be executed.
  • a video encoding apparatus and method can be realized which attain the effects of embodiment 2, namely simplifying the process and reducing the processing amount during decoding, and the effects of embodiment 3, namely making the P picture as well as the B picture an interpolation objective frame to further reduce the data amount and improving the picture quality of the B picture.
  • the encoding stream is decoded by means of the variable-length decoding unit 1001 and is then sent to the parsing unit 1002 (S2301). Subsequently, in the parsing unit 1002, the decoded stream data is sorted in parsing and a mode decision flag, a motion search direction decision flag and encoded data are transmitted to the inverse quantizing/inverse-frequency converting unit 1003 and interpolation image generation unit 1007 (S2302).
  • the encoding objective frame is decided, on the basis of the picture type of the encoding objective frame, as to whether to be an encoding frame or an interpolation frame (S2303). If the encoding objective frame is an interpolation frame, it is decided in respect of a decoding objective area whether the mode decision flag transmitted from the parsing unit 1002 is 1 (indicative of the decoding objective area being an interpolation image) or not (S2304).
  • the motion search method decider 2001 decides a motion search direction for the interpolation process on the basis of the motion search direction decision flag transmitted from the parsing unit 1002 (S2305); the motion searcher 2002, predictive error calculator 2003 and motion vector decider 2004 carry out the motion search, the predictive error calculation and the motion vector decision, respectively (S2306); and the interpolation pixel generator 1102 generates interpolation pixels by using the determined motion vector, thus generating an interpolation image (S2307).
  • the inverse quantizing/inverse-frequency converting unit 1003 carries out inverse quantization/inverse-frequency conversion, adds data from the motion compensation unit 1006 and stores the resulting data in the decoded image memory unit 1005. Subsequently, by using the data stored in the decoded image memory unit 1005, the motion compensation unit 1006 carries out motion compensation (S2309). By using the decoded image stored in the decoded image memory unit 1005 and the motion vector transmitted from the parsing unit 1002, the motion compensation unit 1006 carries out motion compensation, generates a decoded image and stores it in the decoded image memory unit 1005 (S2310). The decoded image or the interpolation image generated through the above method is outputted to the video display unit 1008 (S2311), thus ending the process.
  • a video decoding apparatus and method can be realized which attain the effects of embodiment 2, namely simplifying the process during decoding and reducing the processing amount, and which, as represented by the effects of embodiment 3, can deal with a plurality of kinds of interpolation process by performing the process using the motion search direction decision flag included in the encoding stream, so that it is sufficient to execute the motion search process only once on the decoding side for the plural kinds of interpolation process, decreasing the processing amount to a great extent.
  • encoded data improved in data compression rate can be generated and the encoded data can be decoded suitably.
  • the video encoding apparatus according to embodiment 5 is constructed similarly to that of embodiment 2, but while in embodiment 2 the mode selection unit 1304 generates a mode decision flag for each image block, the mode selection unit 1304 in embodiment 5 generates, when a plurality of blocks whose decoding objective areas are interpolation images (interpolation image mode blocks) occur in succession, a flag indicating the number of successive interpolation image mode blocks (interpolation image mode succession block number flag), and outputs an encoding stream including a single such flag for the plural successive interpolation image mode blocks.
  • Individual constituents and contents of individual processes in the video encoding apparatus according to embodiment 5 are similar to those described in connection with embodiments 1 and 2 and will not be described herein.
  • for a block other than an interpolation image mode block, an interpolation image mode exceptive mode flag indicating that the block is of a mode other than the interpolation image mode is generated and outputted.
  • the interpolation image mode exceptive mode flag may simply indicate a mode other than the interpolation image mode but alternatively, may indicate the kind per se of encoding mode (macro-block type and the like).
  • an example of data to be stored in the encoded data memory unit 106 is illustrated in Figs. 24A and 24B. Illustrated at (a) in Fig. 24B is data generated by the video encoding apparatus in embodiment 2, and illustrated at (b) in Fig. 24B is data generated by the video encoding apparatus in embodiment 5.
  • an interpolation image mode succession block number flag 2401 indicates the numerical value "4", which demonstrates that the four successive blocks a, b, c and d constitute interpolation image mode blocks.
  • an interpolation image mode succession block number flag 2402 indicates the numerical value "1", which demonstrates that the block "e" alone constitutes an interpolation image mode block.
  • an interpolation image mode succession block number flag 2403 indicating "5" demonstrates that the five successive blocks f, g, h, i and j constitute interpolation image mode blocks.
  • the mode of plural blocks can be indicated by a single flag by using the interpolation image mode succession block number flag and the encoded data amount can be reduced.
  • a flag is detected in S1504 in Fig. 15 and it is decided whether the flag is an interpolation image mode succession block number flag or an interpolation image mode exceptive mode flag. If the detected flag is the interpolation image mode succession block number flag, the interpolation image generation process following S1505 is carried out for as many consecutive blocks as the flag indicates. If the flag is the interpolation image mode exceptive mode flag, the video decoding process following S1507 is carried out for the block to which the flag corresponds. Thus, when the flag is the interpolation image mode succession block number flag and indicates a numerical value of 2 or more, the image generation process can be determined for the plural blocks through one decision process, as sketched below.
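  • The following is a minimal sketch of this run-length style flag handling. The flag representation and the function name are hypothetical illustrations, not taken from the patent: the stream is modeled as a list in which ("interp", n) stands for an interpolation image mode succession block number flag covering n blocks and ("other", mode) for an interpolation image mode exceptive mode flag of one normally encoded block.

```python
def assign_block_modes(flags):
    """Expand per-run flags into a per-block mode list (one decision per run)."""
    modes = []
    for kind, value in flags:
        if kind == "interp":
            # one succession flag covers `value` consecutive blocks (S1505 path)
            modes.extend(["interpolation"] * value)
        else:
            # an exceptive mode flag covers exactly one block (S1507 path)
            modes.append(value)
    return modes

# Example loosely following Fig. 24B(b): a run of four interpolation blocks,
# one normally encoded block, then a run of one.
print(assign_block_modes([("interp", 4), ("other", "inter"), ("interp", 1)]))
```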
  • the process during decoding can be simplified further than in embodiment 2, reducing the processing amount.
  • the image generation process can be determined for a plurality of blocks through a single decision process by making correspondence with the interpolation image mode succession block number flag included in the encoding stream.
  • this can ensure that simplification of the process during decoding and reduction in the processing amount can be attained more extensively than in embodiment 2.
  • A video encoding apparatus according to embodiment 6 of the invention will be described.
  • the construction of the video encoding apparatus according to embodiment 6 is similar to that of the video encoding apparatus of embodiment 4. Whereas in embodiment 4 the mode selection unit 1304 generates a mode decision flag and a motion search direction decision flag for each image block, the mode selection unit 1304 in embodiment 6 generates, like embodiment 5, an interpolation image mode succession block number flag or an interpolation image mode exceptive mode flag, and generates a motion search direction decision flag for each interpolation image mode succession block number flag.
  • a detailed description of the motion search direction decision flag is the same as that in embodiments 3 and 4 and will not be given herein.
  • a detailed description of the interpolation image mode succession block number flag and the interpolation image mode exceptive mode flag is the same as that in embodiment 5 and will not be given herein.
  • Individual constituents and contents of individual processes in the video encoding apparatus according to embodiment 6 are similar to those described in connection with embodiments 1 to 5 and will not be described herein.
  • An example of data to be stored in the encoded data memory unit 106 of the video encoding apparatus in embodiment 6 is illustrated in Figs. 25A and 25B. Illustrated at (a) in Fig. 25B is data generated by the video encoding apparatus in embodiment 4 and illustrated at (b) in Fig. 25B is data generated by the video encoding apparatus in embodiment 6. Similarly to the illustration at (b) in Fig. 24B, a numerical value indicated at the arrow shown at (b) in Fig. 25B in correspondence with an interpolation image mode succession block number flag depicts an example of the number of successive interpolation image mode blocks indicated by that flag. More specifically, in the example at (b) in Fig. 25B, an interpolation image mode succession block number flag 2501 indicates a numerical value "4", which demonstrates that the four blocks a, b, c and d in succession constitute interpolation image mode blocks. This resembles embodiment 5.
  • the motion search direction decision flag is generated for every interpolation image mode succession block number flag and therefore, following the interpolation image mode succession block number flag 2501, a motion search direction decision flag 2502 is inserted.
  • an interpolation image is generated through a motion search method determined by a motion search direction indicated by a motion search direction decision flag 2502 accompanying the interpolation image mode succession block number flag 2501.
  • the motion search direction decision flag is inserted in the encoded data for each interpolation image mode succession block number flag.
  • the data amount can be reduced more than in the data of embodiment 4 shown at (a) in Fig. 25B, in which the mode decision flag and the motion search direction decision flag are added to every block.
  • Embodiment 6 is similar to embodiment 5 in that the interpolation image mode exceptive mode flag is inserted for respective ones of blocks in which the decoding objective area corresponds to an encoded image.
  • the mode and motion search direction of plural blocks can each be indicated by a single flag by using the interpolation image mode succession block number flag and the encoding data amount can be reduced.
  • in embodiment 4, when the mode decision flag is "1" in S2304, the interpolation image generation process following S2305 is carried out; at that time, in S2305, a motion search method is determined for each block on the basis of the motion search direction decision flag, and the motion search in S2306 is carried out. With the mode decision flag being "0" in S2304, the video decoding process following S2308 is carried out.
  • a flag is detected in S2304 in Fig. 23 and it is decided whether the flag is an interpolation image mode succession block number flag or an interpolation image mode exceptive mode flag. If the detected flag is the interpolation image mode succession block number flag, the interpolation image generation process following S2305 is carried out for as many consecutive blocks as the flag indicates. At that time, in S2305, on the basis of the motion search direction decision flag accompanying the interpolation image mode succession block number flag, the motion search method in interpolation image generation is determined for the plural consecutive blocks. In S2306, the motion search is conducted through the motion search method so determined.
  • an interpolation image is generated on the basis of the search result. If, in S2304, the flag is the interpolation image mode exceptive mode flag, the video decoding process following S2308 is carried out for the block to which the flag corresponds.
  • the image generation process can be determined for the plural blocks through a single decision process when the flag is the interpolation image mode succession block number flag and indicates a numerical value of 2 or more.
  • plural image generation processes can be determined for plural blocks through a single decision process, while dealing with plural kinds of interpolation processes, by making correspondence with the interpolation image mode succession block number flag included in the encoding stream.
  • this can ensure that simplification of the process and reduction in the processing amount during decoding can be attained more extensively than in embodiment 4. A sketch of this per-run handling follows.
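  • As in the sketch for embodiment 5, the flag layout below is a hypothetical model, not the patent's actual syntax: ("interp", n, direction) stands for an interpolation image mode succession block number flag accompanied by its motion search direction decision flag, and ("other", mode) for an exceptive mode flag.

```python
def assign_modes_and_directions(flags):
    """One succession flag plus one direction flag serve a whole run of blocks."""
    modes = []
    for flag in flags:
        if flag[0] == "interp":
            _, run_length, direction = flag
            # the single direction decision fixes the motion search method
            # for all blocks of the run (S2305/S2306 executed per run)
            modes.extend([("interpolation", direction)] * run_length)
        else:
            modes.append((flag[1], None))  # exceptive mode: no direction flag
    return modes

# Example following Fig. 25B(b): four blocks interpolated with one shared
# search direction, then one normally encoded block.
print(assign_modes_and_directions([("interp", 4, "bidirectional"),
                                   ("other", "inter")]))
```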
  • the first modification will be described by making reference to Fig. 26 .
  • Illustrated in Fig. 26 is an interpolation image generation method in the first modification.
  • the encoding/decoding objective frame is a single B picture existing between reference frames.
  • f n represents an encoding/decoding objective frame
  • f n-1 a reference frame finished with encoding/decoding which precedes in order of display and is positioned most closely to the encoding/decoding objective frame
  • f n+1 a reference frame finished with encoding/decoding which succeeds in order of display and is positioned most closely to the encoding/decoding objective frame.
  • searching a motion vector MV(u,v) and calculating an interpolation pixel value f n (x,y) are materialized through the following methods.
  • Motion search in the first modification is carried out in a unit of block.
  • the motion search is started from the upper-left end in the frame f n-1 and from the lower-right end in the frame f n+1 so as to make the search symmetrical in the right/left and above/below directions.
  • a total of the absolute error sums (SAD) of the two blocks is calculated, and a combination of blocks for which the SAD is minimal, and for which the MV is also minimal, is selected.
  • the motion search is carried out on, for example, a plane of 1/4 pixel accuracy. On the 1/4 pixel accuracy plane, the block size for motion search is set to 64 × 64 pixels and, by sampling every fourth pixel, 16 pixels per line are used as sampling points.
  • the motion search range is referenced to the center of the encoding objective block.
  • a motion vector MV(u,v) between the frame f n-1 and the frame f n+1 is used and calculation is executed pursuant to equation (6).
  • $$f_n(x,y) = \left\{ f_{n-1}\!\left(x - \tfrac{1}{2}u,\; y - \tfrac{1}{2}v\right) + f_{n+1}\!\left(x + \tfrac{1}{2}u,\; y + \tfrac{1}{2}v\right) \right\} / 2 \tag{6}$$
  • f n (x,y) is thus calculated as the average of the pixel values on the reference frames f n-1 and f n+1 at the start and end points of MV(u,v), respectively.
  • the encoding/decoding objective frame is a single B picture positioned centrally of the plural reference frames and is temporally equidistant from the two reference frames.
  • the coefficient 1/2 by which u and v are multiplied in equation (6) may be changed in accordance with the bias.
  • the pixel values on individual reference frames f n-1 and f n+1 may be multiplied by coefficients complying with respective temporal distance biases. Then, the closer the temporal distance to the reference frame, the larger the coefficient becomes.
  • the motion vector MV(u,v) and the interpolation pixel value f n (x,y) in the first modification can be obtained through the search method and calculation method described as above, respectively.
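  • The following toy implementation illustrates the symmetric search and the averaging of equation (6) under simplifying assumptions that are not in the text: integer-pel frames instead of 1/4-pel planes, even-valued motion vectors so that u/2 and v/2 stay on the pixel grid, an interior block, and no tie-breaking toward the minimal MV.

```python
import numpy as np

def interpolate_block(f_prev, f_next, x, y, bs=8, search=4):
    """Block at (x, y) of the interpolation frame from two reference frames."""
    best = (float("inf"), 0, 0)
    for v in range(-search, search + 1, 2):      # even steps keep v/2 integral
        for u in range(-search, search + 1, 2):
            # symmetric candidates: (x-u/2, y-v/2) in f_prev and
            # (x+u/2, y+v/2) in f_next lie on one line through (x, y)
            a = f_prev[y - v//2 : y - v//2 + bs, x - u//2 : x - u//2 + bs]
            b = f_next[y + v//2 : y + v//2 + bs, x + u//2 : x + u//2 + bs]
            sad = np.abs(a.astype(int) - b.astype(int)).sum()
            if sad < best[0]:
                best = (sad, u, v)
    _, u, v = best
    a = f_prev[y - v//2 : y - v//2 + bs, x - u//2 : x - u//2 + bs]
    b = f_next[y + v//2 : y + v//2 + bs, x + u//2 : x + u//2 + bs]
    return (a.astype(int) + b.astype(int)) // 2  # equation (6): the average
```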
  • the encoding/decoding objective frame is either of two B pictures existing between reference frames.
  • the motion search is carried out only once for the two existing B pictures.
  • f n represents a first encoding/decoding objective frame
  • f n+1 represents a second encoding/decoding objective frame
  • f n-1 represents a reference frame finished with encoding/decoding which precedes the encoding/decoding objective frame in order of display and is positioned most closely thereto
  • f n+2 represents a reference frame finished with encoding/decoding which succeeds the encoding/decoding objective frame and is positioned most closely thereto
  • f c represents a virtual central picture.
  • searching a motion vector MV(u,v) and calculating an interpolation pixel value f n (x,y) of the first encoding/decoding objective frame and an interpolation pixel value f n+1 (x,y) of the second encoding/decoding objective frame are materialized through the following methods.
  • the center of the motion search range is so defined as to be centered on an encoding/decoding objective block position (x,y) of the virtual central picture f c .
  • the remaining details of calculation of the motion vector MV(u,v) are similar to those in the first modification and will not be described herein.
  • An interpolation pixel value f n (x,y) of the first encoding/decoding objective frame and an interpolation pixel value f n+1 (x,y) of the second encoding/decoding objective frame can be calculated by using the motion vector MV(u,v) between the frames f n-1 and f n+2 from equations (7) and (8), respectively.
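  • For reference, forms of equations (7) and (8) consistent with the weighting described below and with the generalized equation (9) (taking m = 2 with k = 1 and k = 2) are the following reconstruction:

$$f_n(x,y) = \left\{ 2\, f_{n-1}\!\left(x - \tfrac{1}{3}u,\; y - \tfrac{1}{3}v\right) + f_{n+2}\!\left(x + \tfrac{2}{3}u,\; y + \tfrac{2}{3}v\right) \right\} / 3 \tag{7}$$

$$f_{n+1}(x,y) = \left\{ f_{n-1}\!\left(x - \tfrac{2}{3}u,\; y - \tfrac{2}{3}v\right) + 2\, f_{n+2}\!\left(x + \tfrac{1}{3}u,\; y + \tfrac{1}{3}v\right) \right\} / 3 \tag{8}$$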
  • The calculation method pursuant to equation (7) will be described with reference to Fig. 27B.
  • the illustration in Fig. 27A is shown in plane form.
  • calculation of a pixel value at position (x, y) of the first encoding/decoding frame f n is carried out through motion search referenced to the position (x,y) of the virtual center picture.
  • the first encoding/decoding objective frame f n is distant from the reference frame f n-1 by 1/3 of temporal distance between the reference frame f n-1 and the reference frame f n+2 and is distant from the reference frame f n+2 by 2/3 thereof.
  • the pixel value of the first encoding/decoding objective frame f n (x,y) is calculated by referencing the position (x,y) of the first encoding/decoding objective frame f n, taking a pixel value on the reference frame f n-1 indicated by 1/3MV (the motion vector MV multiplied by 1/3) and a pixel value on the reference frame f n+2 indicated by 2/3MV (the motion vector MV multiplied by 2/3), multiplying each by a weight coefficient complying with the temporal distance to the respective reference frame, and summing the resulting products.
  • the shorter the temporal distance to a reference frame, the larger the weight coefficient becomes; in the example of Fig. 27B, the pixel value of the pixel on the reference frame f n-1 is multiplied by 2/3 and the pixel value of the pixel on the reference frame f n+2 is multiplied by 1/3.
  • the calculation method pursuant to equation (8) will be described with reference to Fig. 27C .
  • the calculation method pursuant to equation (8) is similar to that pursuant to equation (7) in that the motion vector MV(u,v) is used, that the position (x,y) of the second encoding/decoding objective frame f n+1 is referenced, that a pixel on each reference frame is selected by using a motion vector resulting from multiplication of MV(u,v) by a coefficient in accordance with the temporal distance from the encoding/decoding objective frame to that reference frame, and that the selected pixel values are multiplied by weight coefficients complying with the temporal distances to the reference frames and added together.
  • Fig. 27C differs from Fig. 27B only in that the relation of temporal distance from the encoding/decoding objective frame to the respective reference frames differs and the coefficient by which the motion vector MV(u,v) is multiplied is different and so a detailed description will be omitted.
  • the coefficient may be changed in accordance with the temporal distance to the reference frame.
  • interpolation pixel values can be calculated through one motion search in respect of the individual pixels at the same position on the two encoding/decoding objective frames, respectively, which are positioned between the reference frames.
  • a third modification will be described with reference to Fig. 28 .
  • the first and second modifications are generalized, indicating an instance where m B pictures exist between two reference frames.
  • In Fig. 28, m B pictures from f 1 (the first B picture) to f m (the m-th B picture) are inserted between reference frames f A and f B.
  • f C represents a virtual central picture which provides the reference as in the case of the second modification when calculating a motion vector MV(u,v).
  • an interpolation pixel value f k (x,y) can be calculated from equation (9).
  • $$f_k(x,y) = \left\{ (m+1-k)\, f_A\!\left(x - \tfrac{k}{m+1}u,\; y - \tfrac{k}{m+1}v\right) + k\, f_B\!\left(x + \tfrac{m+1-k}{m+1}u,\; y + \tfrac{m+1-k}{m+1}v\right) \right\} / (m+1) \tag{9}$$
  • the calculation method pursuant to equation (9) is also similar to that pursuant to equation (7) or (8) in that the motion vector MV(u,v) is used, that the position (x,y) of the encoding/decoding objective frame f k is referenced, that a pixel on each reference frame is selected by using a motion vector resulting from multiplication of MV(u,v) by a coefficient in accordance with the temporal distance from the encoding/decoding objective frame to that reference frame, and that the selected pixel values are multiplied by weight coefficients complying with the temporal distances to the reference frames and added together.
  • the method for calculating the interpolation pixel value f k (x,y) of the encoding/decoding objective frames as above is employed in the third modification.
  • an interpolation pixel value can be calculated through one motion search in respect of individual pixels at the same position on m encoding/decoding objective frames positioned between the reference frames.
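  • A per-pixel sketch of equation (9) follows. It assumes, for illustration only, integer coordinates (the text works on fractional-pel planes) and interior pixels; the function name is hypothetical.

```python
def interpolate_pixel(f_A, f_B, x, y, u, v, k, m):
    """Pixel (x, y) of the k-th of m B pictures lying between f_A and f_B."""
    xa, ya = x - k * u // (m + 1), y - k * v // (m + 1)
    xb, yb = x + (m + 1 - k) * u // (m + 1), y + (m + 1 - k) * v // (m + 1)
    # the temporally closer reference frame receives the larger weight
    return ((m + 1 - k) * int(f_A[ya][xa]) + k * int(f_B[yb][xb])) // (m + 1)

# With m = 1, k = 1 this reduces to equation (6); with m = 2 it reproduces
# the 1/3 and 2/3 factors of equations (7) and (8).
```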
  • since an interpolation image is generated through the interpolation process based on the motion prediction between the reference images, the interpolation frame, interpolation area, interpolation image mode and interpolation image mode block described above may also be expressed as an inter-reference image predictive frame, an inter-reference image motion predictive area, an inter-reference image motion predictive mode and an inter-reference image motion predictive mode block, respectively.
  • the video encoding/decoding technique using the interpolation image, namely, the image encoding/decoding technique based on the inter-reference image motion prediction described in connection with the foregoing embodiments, is advantageous over the conventional technique as will be described below.
  • in the conventional technique such as the H.264/AVC standard, the skip mode and the direct mode, which predictively generate motion information from the motion information of encoded blocks, are adopted.
  • the skip mode and the direct mode do not need transmission of motion information and are therefore effective techniques for reducing the encoding amount. In the skip mode and the direct mode, however, the accuracy of prediction of the motion information is sometimes degraded.
  • for example, when a motion vector of a block (anchor block) at the same position as the encoding objective block inside a reference image immediately succeeding the encoding objective image in order of display is used, then in the case of an image in which the anchor block is encoded inside the screen, no motion information can be acquired, thus degrading the prediction accuracy.
  • likewise, when a motion vector of a block peripheral of the encoding objective block is used, then in the case of images in which the individual peripheral blocks move differently, the spatial correlation of the motion information decreases, thus degrading the prediction accuracy.
  • in the technique of the foregoing embodiments, in contrast, a block having a high correlation between a forward reference image and a backward reference image is detected and its detected motion vector is used. Accordingly, even in an image liable to degraded prediction accuracy in the skip mode and direct mode, that is, an image in which the encoding objective block is a moving image and the anchor block is encoded inside the screen, degradation in prediction accuracy can be suppressed.
  • furthermore, a motion vector is predicted without using motion vectors of blocks peripheral of the encoding objective block. Therefore, even in an image liable to degraded prediction accuracy in the skip mode and direct mode, that is, an image whose peripheral blocks move differently, degradation in prediction accuracy can be suppressed.

Abstract

The invention provides a method of decoding videos comprising the steps of: deciding (S1504), on the basis of a mode decision flag included in an encoding stream, whether an image of a decoding objective area is to be generated through a motion search process using an image finished with decoding or through a motion compensation process using data included in the encoding stream; and generating a decoded image by switching, in accordance with the result of decision in said deciding step (S1504), between the motion search process using an image finished with decoding and the motion compensation process using data included in the encoding stream.

Description

    BACKGROUND OF THE INVENTION
  • The present invention relates to techniques of encoding and decoding video data.
  • In connection with encoding and decoding techniques for compressing and transmitting video data, internationally standardized encoding standards as typified by the MPEG (Moving Picture Experts Group) standards have hitherto been available. Among the internationally standardized encoding standards, the H.264/AVC (Advanced Video Coding) standard, for example, has especially high encoding efficiency and has been utilized widely as a standard for moving picture compression in terrestrial digital broadcasting, digital video cameras, next-generation recording media, cellular phones and so on. Data compressed pursuant to such a standard is decoded in a television receiver, a DVD player and the like, and the decoded video data is displayed on a display.
  • JP-A-2003-333540 discloses frame rate conversion carried out by using a motion amount (motion vector) obtained by decoding an encoding stream, together with the decoded image, in order to eliminate blur in moving pictures and unnatural motion which occur when displaying the decoded video data.
  • In the technique described in the aforementioned patent document, a frame rate conversion process is applied to the decoded video data. The frame rate conversion process, however, presupposes that a motion vector and a difference image are transmitted from the encoding side to the decoding side, and hence fails to contribute to reduction in the amount of transmission data, raising the problem that improvements in the data compression rate are insufficient.
  • SUMMARY OF THE INVENTION
  • The present invention has been made in the light of the above problem and its object is to improve the data compression rate.
  • To accomplish the above object, a video decoding method according to the independent claim is proposed. Dependent claims relate to preferred embodiments. Embodiments of the present invention can be constructed as recited in, for example, the attached claims or combinations thereof.
  • Thus, according to the present invention, it is possible to improve the data compression rate.
  • Other objects, features and advantages of the invention will become apparent from the following description of the embodiments of the invention taken in conjunction with the accompanying drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
    • Fig. 1 is a schematic diagram showing the configuration of a video encoding apparatus according to embodiment 1 of the invention.
    • Fig. 2 is a block diagram showing the construction of an encoding unit in Fig. 1.
    • Fig. 3 is a block diagram showing the construction of an interpolation image generation unit in Fig. 1.
    • Fig. 4 is a diagram showing an example where interpolation frames and encoding frames are determined in accordance with the picture type in embodiment 1.
    • Fig. 5 is a diagram showing an example of a method for motion search by means of a motion searcher in embodiment 1.
    • Fig. 6 is a diagram showing the construction of the motion searcher.
    • Fig. 7 is a flowchart of operation in the Fig. 1 interpolation image generation unit.
    • Figs. 8A, 8B, 8C and 8D show an example of data to be stored in an encoded data memory unit.
    • Figs. 9A, 9B and 9C show examples of motion predictive vectors.
    • Fig. 10 is a schematic block diagram showing the configuration of a video decoding apparatus in embodiment 1.
    • Fig. 11 is a block diagram showing the construction of an interpolation image generation unit in Fig. 10.
    • Fig. 12 is a flowchart showing operation in the video decoding apparatus in embodiment 1.
    • Fig. 13 is a block diagram showing the construction of a mode selecting unit according to embodiment 2 of the invention.
    • Figs. 14A and 14B are diagrams showing an example of data to be stored in the encoded data memory unit in embodiment 2.
    • Fig. 15 is a flowchart showing operation in a video decoding apparatus in embodiment 2.
    • Fig. 16 is a diagram showing an example where interpolation frames and encoding frames are determined in accordance with the picture type in embodiment 3.
    • Fig. 17 is a diagram showing an example of a motion search method in a motion searcher in embodiment 3.
    • Fig. 18 is a block diagram showing the construction of an interpolation direction decision unit in embodiment 3.
    • Figs. 19A and 19B are diagrams showing an example of data to be stored in the encoded data memory unit in embodiment 3.
    • Fig. 20 is a block diagram showing the construction of a motion search unit in the video decoding apparatus in embodiment 3.
    • Fig. 21 is a flowchart of operation in the video decoding apparatus in embodiment 3.
    • Figs. 22A and 22B are diagrams showing an example of data to be stored in the decoded data memory unit in embodiment 4.
    • Fig. 23 is a flowchart showing operation in the video decoding apparatus in embodiment 4.
    • Figs. 24A and 24B are diagrams showing an example of data to be stored in the encoded data memory unit in embodiment 5.
    • Figs. 25A and 25B are diagrams showing an example of data to be stored in the encoded data memory unit in embodiment 6.
    • Fig. 26 is a diagram showing a first modification of interpolation image generation method.
    • Figs. 27A, 27B and 27C are diagrams showing an example of a second modification of the interpolation image generation method.
    • Fig. 28 is a diagram showing an example of a third modification of the interpolation image generation method.
    DETAILED DESCRIPTION OF THE EMBODIMENTS
  • Embodiments of the present invention will now be described with reference to the accompanying drawings.
  • [Embodiment 1]
  • Referring first to Fig. 1, there is illustrated an example of a video encoding apparatus according to embodiment 1 of the invention. The video encoding apparatus according to embodiment 1 comprises, for example, a video input unit 101 for inputting videos, an area division unit 102 for dividing an input video into encoding objective areas, an encoding unit 103 for encoding input video data divided by the area division unit and locally decoding the data, an interpolation (the term "interpolation" being used generally to signify both interpolation per se and extrapolation, except where the two are used distinctively as will be described later) image generation unit 104 for thinning out images (encoded images) locally decoded by the encoding unit 103 in the time direction and generating interpolation images adapted to interpolate the thinned-out images, a mode selection unit 105 for selecting either an encoded image or an interpolation image, an encoded data memory unit 106 for recording encoded image data and flag data, and a variable-length encoding unit 107 for encoding data stored in the encoded data memory unit 106 in a variable-length fashion and outputting an encoding stream. Details of operation in the individual processing units of the video encoding apparatus according to embodiment 1 will be described hereunder.
  • Firstly, the video input unit 101 rearranges input videos in order of their encoding. In the rearrangement, the order of display is rearranged to the order of encoding in accordance with the picture type. Next, in the area division unit 102, an encoding objective frame is divided into encoding objective areas. The divisional area may be a unit of block such as a square or rectangular area, or alternatively a unit of object extracted by using a method such as the watershed process. The videos divided by the area division unit 102 are transmitted to the encoding unit 103.
  • Then, the construction of encoding unit 103 is illustrated in detail in Fig. 2. The encoding unit 103 includes, for example, a subtracter 201 for calculating the difference between an image as a result of division by the area division unit 102 and a predictive image selected by an in-screen/inter-screen predictive image selector 208, a frequency converter/quantizer 202 for frequency-converting and quantizing the difference data generated by the subtracter 201, an inverse quantizer/inverse-frequency converter 203 for inverse quantizing and inverse-frequency converting data outputted from the frequency converter/quantizer 202, an adder 204 for adding the data decoded by the inverse quantizer/inverse-frequency converter 203 to the predictive image selected by the in-screen/inter-screen predictive image selector 208, a decoded image memory 205 for storing the sum image from the adder 204, an in-screen predictor 206 for generating a predictive image on the basis of pixels peripheral of the encoding objective area, an inter-screen predictor 207 for detecting an image (reference image) close to the encoding objective area from an area belonging to a frame different from the encoding objective area so that the detected image may be generated as a predictive image, and the in-screen/inter-screen predictive image selector 208 for selecting one of the in-screen predictive and inter-screen predictive images which has a higher encoding efficiency.
  • Details of operation of the individual processors in the encoding unit 103 will now be described. In the frequency converter/quantizer 202, the difference image is frequency-converted by using a DCT (Discrete Cosine Transform) or a wavelet transform, and then a coefficient after the frequency conversion is quantized. Data after quantization is transmitted to the mode selection unit 105 and the inverse quantizer/inverse-frequency converter 203. In the inverse quantizer/inverse-frequency converter 203, a process inverse to that carried out in the frequency converter/quantizer 202 is conducted. Next, the adder 204 adds a predictive image selected by the in-screen/inter-screen predictive image selector 208 to a difference image generated through the inverse quantization/inverse-frequency conversion by the inverse quantizer/inverse-frequency converter 203, generating a decoded image. The thus generated decoded image is stored in the decoded image memory 205. In the in-screen predictor 206, a predictive image is generated by using pixels of peripheral areas finished with decoding which have been stored in the decoded image memory 205. Further, in the inter-screen predictor 207, a predictive image is generated through a process for matching between data inside a frame finished with decoding which has been stored in the decoded image memory 205 and the input image. The decoded image memory 205 then transmits the decoded image to the interpolation image generation unit 104.
  • Turning now to Fig. 3, details of the construction of the interpolation image generation unit 104 are illustrated. The interpolation image generation unit 104 includes, for example, an interpolation frame decider 301, a motion searcher 302 and an interpolation pixel generator 303. In the interpolation frame decider 301, a frame to be interpolated (interpolation frame) and a frame to be normally encoded without being subjected to interpolation (encoding frame) are determined in a unit of frame on the basis of, for example, the picture type.
  • Reference will now be made to Fig. 4 showing a specific example of interpolation frame determination by the interpolation frame decider 301 in the interpolation image generation unit 104. In Fig. 4, the abscissa represents the order of inputting images during encoding and the order of displaying images during decoding. In contrast, the order of the encoding process during encoding and of the decoding process during decoding is as shown in Fig. 4. More particularly, a B picture undergoes the encoding process and decoding process after a P picture whose display order is later than that of the B picture.
  • As will be described later with reference to Fig. 5, the interpolation pixel generator 303 in embodiment 1 generates, on the basis of a plurality of pictures to be subjected to an encoding process precedently (during decoding, subjected to a decoding process precedently), pixels of frames representing pictures each of which intervenes between the plurality of pictures in order of display. Namely, the interpolation pixel generation process by the interpolation pixel generator 303 according to embodiment 1 is a process suited for the B picture which is preceded and succeeded in order of display by respective pictures finished with encoding or decoding during encoding process or decoding process. In the example of Fig. 4, at the time of encoding process or decoding process of B picture 402, I picture 401 of preceding order of display and P picture 403 of succeeding order of display have already been finished with encoding or decoding. Further, at the time of encoding process or decoding process of B picture 404, P picture 403 of preceding order of display and P picture 405 of succeeding order of display have already been finished with encoding or decoding.
  • Accordingly, the interpolation frame decider 301 determines, for example, the B pictures as interpolation frames and the I picture and P pictures as encoding objective frames, as shown in Fig. 4. Then, it is possible for the interpolation pixel generator 303 to generate the value of a pixel at each position (hereinafter simply referred to as a pixel value) of the B picture on the basis of the I picture and P picture which are forwardly and backwardly closest to the B picture, respectively.
  • While in the Fig. 4 example a picture structure is set up in which a single B picture is inserted between the I picture and P picture and between the P picture and P picture, the number of B pictures to be inserted between I or P pictures may be increased when a difference in brightness or color between frames is calculated and the difference is small, exhibiting a high correlation between the frames. In this case, too, the B pictures may be interpolation frames and the I picture and P picture may be encoding objective frames. Then, the interpolation pixel generator 303 may generate a pixel value of each B picture through the interpolation process on the basis of the I picture and P picture which are forwardly and backwardly closest to the B picture.
  • Next, by making reference to Fig. 6, the construction of motion searcher 302 will be described in detail. As shown in Fig. 6, the motion searcher 302 has a predictive error calculator 601 and a motion vector decider 602. After the interpolation frame decider 301 has determined an interpolation frame, the motion searcher 302 makes a search for a motion necessary to calculate the pixel value of the interpolation frame. As a motion search method, an area matching method widely used in general may be utilized.
  • Next, by making reference to Fig. 5, details of the process for generating a pixel of an interpolation frame by means of the predictive error calculator 601 and motion vector decider 602 included in the motion searcher 302, as well as the interpolation pixel generator 303, will be described.
  • In connection with Fig. 5, the predictive error calculator 601 first determines, in respect of an interpolation objective pixel 501 of interpolation frame n, a predictive error absolute value sum SADn(a,b) indicated by equation (1) by using a pixel value fn-1(x-dx,y-dy) of a pixel 500 inside an encoding objective frame n-1 which precedes the interpolation frame n in order of display and a pixel value fn+1(x+dx,y+dy) of a pixel 502 inside an encoding objective frame n+1 which succeeds the interpolation frame n in order of display. Here, the pixels 500 and 502 are so determined as to lie, in space and time, on the same straight line as the interpolation objective pixel 501 at (x,y). In equation (1), R represents the size of the image area to which the interpolation objective pixel belongs, n represents a frame number, x, y represent pixel coordinates, dx, dy, i, j represent inter-pixel differences and a, b represent the number of the image area the interpolation objective pixel belongs to.
$$\mathrm{SAD}_n(a,b) = \sum_{(i,j)\in R} \left| f_{n-1}(x - dx + i,\; y - dy + j) - f_{n+1}(x + dx + i,\; y + dy + j) \right| \tag{1}$$
  • Next, the motion vector decider 602 determines a combination (dx0,dy0) of values by which the predictive error absolute value sum SADn(a,b) in equation (1) is minimized, and calculates a motion vector connecting the pixel of coordinates (x-dx0,y-dy0) inside the encoding objective frame n-1 which precedes the interpolation frame n in order of display and the pixel (x+dx0,y+dy0) inside the encoding objective frame n+1 which succeeds the interpolation frame n in order of display.
  • After completion of the motion vector calculation, the interpolation pixel generator 303 calculates an average of the pixel value fn-1(x-dx0,y-dy0) of the pixel inside the encoding objective frame preceding the interpolation frame and the pixel value fn+1(x+dx0,y+dy0) of the pixel inside the encoding objective frame succeeding the interpolation frame, generating a pixel value fn(x,y) of the interpolation objective pixel (x,y) by using equation (2).
$$f_n(x,y) = \frac{f_{n-1}(x - dx_0,\; y - dy_0) + f_{n+1}(x + dx_0,\; y + dy_0)}{2} \tag{2}$$
  • According to the pixel generation process for interpolation frame described above with reference to Fig. 5, a pixel of the interpolation frame can be generated from pixel values inside the encoding objective frames which are positioned before and after the interpolation objective frame in order of display, respectively.
  • In the example pursuant to equation (2), the interpolation pixel value is calculated from the simple average value but the interpolation pixel calculation method according to the present invention is not limited to that based on the simple average value. For example, if the time distance between encoding objective frame n-1 and interpolation frame n is not equal to the time distance between interpolation frame n and encoding objective frame n+1, the respective pixel values may be multiplied by weight coefficients complying with the respective time distances and thereafter, the resulting products may be added together. In other words, any method may be employed provided that the pixel value can be calculated from a function having a variable represented by pixel value fn-1(x-dx0,y-dy0) on the encoding objective frame n-1 and a variable represented by pixel value fn+1(x+dx0,y+dy0) on the encoding objective frame n+1.
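  • The following toy code illustrates equations (1) and (2) together, under assumptions made here for brevity: an integer-pel search, a square area R centered on the co-linear pixels, and interior coordinates. It is a sketch, not the apparatus's implementation.

```python
import numpy as np

def interpolate(f_prev, f_next, x, y, half=4, search=3):
    """Interpolation pixel value at (x, y) per equations (1) and (2)."""
    best = (float("inf"), 0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # equation (1): SAD over area R around the two co-linear pixels
            a = f_prev[y-dy-half : y-dy+half, x-dx-half : x-dx+half]
            b = f_next[y+dy-half : y+dy+half, x+dx-half : x+dx+half]
            sad = np.abs(a.astype(int) - b.astype(int)).sum()
            if sad < best[0]:
                best = (sad, dx, dy)
    _, dx0, dy0 = best
    # equation (2): average of the two pixels joined by the motion vector
    return (int(f_prev[y-dy0, x-dx0]) + int(f_next[y+dy0, x+dx0])) // 2
```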
  • Turning now to Fig. 7, details of the mode selection process by the mode selection unit 105 will be described. For each of the plural divisional areas of the interpolation frame, the mode selection unit 105 decides which of the encoding objective image generated by the encoding unit 103 and the interpolation image formed of the interpolation pixels generated by the interpolation image generation unit 104 is to be selected.
  • Firstly, in respect of the encoding objective area, the mode selection unit 105 calculates, pursuant to, for example, equation (3), a difference f'(SADn(a,b)) between the predictive error calculated by the motion searcher 302 and the predictive errors of areas peripheral of the encoding objective area (S701). In equation (3), n represents the frame number, a, b represent the number of the image area to which the interpolation objective pixel belongs, and k, l represent offsets between the numbers of the peripheral image areas and the image area the interpolation objective pixel belongs to.
$$f'(\mathrm{SAD}_n(a,b)) = \sum_{k=-1}^{1} \sum_{l=-1}^{1} \left| \mathrm{SAD}_n(a+k,\; b+l) - \mathrm{SAD}_n(a,b) \right| \tag{3}$$
  • Subsequently, it is decided whether the minimum predictive error absolute value sum SADn(a,b) determined pursuant to equation (1) by the motion searcher 302 is less than a threshold value S1, or whether the predictive error difference absolute value sum f'(SADn(a,b)) indicated by equation (3) is greater than a threshold value S2 (S702). This decision is used because, when the predictive error absolute value sum SADn(a,b) is small, the reliability of the result of motion detection during interpolation image generation is considered high; and when the predictive error difference absolute value sum f'(SADn(a,b)) is large, a large amount of encoded data would be generated for a normal encoding objective image, while a slight degradation in picture quality in an area of complicated pattern is hardly perceived visually, so that selection of the interpolation image is considered advantageous.
  • If the condition is met in step S702, the interpolation image is selected (S703). In that case, the process ends without outputting the header information indicative of the kind of the prediction area, the motion vector or the predictive error data (S705). On the other hand, if the condition is not met in step S702, the encoding objective image is selected (S704). In that case, the header information indicative of the kind of the prediction area, the motion vector and the predictive error data are outputted to the encoded data memory unit 106 and then the process ends.
  • In other words, with the encoding objective image selected, the header information indicative of the kind of predictive area, motion vector and predictive error data are included in an encoding stream as in the case of the normal encoding technique. Contrary thereto, with the interpolation image selected, a decoded image can be generated without resort to the above data through the interpolation process explained in connection with Fig. 15 and therefore, these pieces of data are not included in the encoding stream. For the above reason, when the interpolation image is selected, the encoding data amount can be reduced, realizing improvements in compression rate.
  • While in the foregoing the mode selection between the encoding objective image and the interpolation image has been described by way of an example of selection in a unit of frame, a partial area of the B picture selected as the interpolation frame may be selected as an encoding image with the other areas selected as an interpolation image. The area concerned may be in a unit of block, for example. A sketch of the S702 decision follows.
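  • A compact rendering of the S702 test; the threshold values given here are arbitrary placeholders, since the text does not specify S1 or S2.

```python
def select_mode(sad_min, f_prime, S1=100, S2=5000):
    """Mode decision of S702: interpolation image or encoding objective image."""
    # small SAD -> reliable motion; large f' -> complex area whose slight
    # quality loss is visually masked, so interpolation is advantageous
    if sad_min < S1 or f_prime > S2:
        return "interpolation"  # S703/S705: no header, MV or residual output
    return "encoding"           # S704: header, MV and residual are output
```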
  • Next, with reference to Figs. 8A, 8B, 8C and 8D, an example of comparison of the encoded data amount of a frame encoded on the basis of the prior art encoding technique with that of a frame encoded by means of the video encoding apparatus and method according to embodiment 1 will be described. In Figs. 8A, 8B, 8C and 8D, a shadowed area indicates an area in which an encoding objective image is selected and an unshadowed area indicates an area in which an interpolation image is selected.
  • Illustrated in Fig. 8A is a frame encoded in accordance with the conventional encoding technique. Since no interpolation image area exists in the conventional encoding technique, all areas provide encoding objective images. In the example of Fig. 8A, all of the 24 areas are dedicated to encoding images. Then, in the conventional encoding technique, the header information indicative of the kind of predictive area and information such as the motion vector and predictive error data are stored in an encoding stream, as a rule, in respect of all areas of the frame in Fig. 8A. The encoding stream for frames encoded with the conventional encoding technique is illustrated in Fig. 8B. In the example of Fig. 8B, in respect of all 24 areas of encoding objective images, the header information indicative of the kind of predictive area and the information such as the motion vector and predictive error data are stored in the encoding stream.
  • In contrast, an example of a frame encoded in accordance with the video encoding apparatus and method according to embodiment 1 is exemplified in Fig. 8C. In the example of Fig. 8C, encoding objective images are selected in only 8 out of 24 areas and interpolation images are selected in the remaining 16 areas. An encoding stream corresponding to the Fig. 8C example is formed as illustrated in Fig. 8D. Namely, in the encoding process based on the video encoding apparatus and method according to embodiment 1, there is no need to provide the header information indicative of the kind of predictive area and the information such as the motion vector and predictive error data in respect of areas for which the interpolation image is selected, and these pieces of data are not included in the encoding stream. In the example of Fig. 8D, the header information indicative of the kind of predictive area and the information such as the motion vector and predictive error data are included in the encoding stream only for the 8 areas representing encoding objective areas.
  • Thus, in the video encoding apparatus and method according to embodiment 1, the amount of encoding data to be included in the encoding stream can be reduced as compared to that in the conventional encoding technique, thereby materializing improvements in the encoding compression rate.
  • Referring now to Figs. 9A, 9B and 9C and Figs. 8A, 8B, 8C and 8D, a process for encoding a motion vector executed in the variable-length encoding unit 107 of video encoding apparatus according to embodiment 1 of the invention will be described.
  • Firstly, in a process for encoding a motion vector in an encoding objective area pursuant to the H.264 standard, representative of the conventional encoding technique, a motion predictive vector is calculated from a median of motion vectors in areas peripheral of the encoding objective area, and only the difference between the motion vector in the encoding objective area and the motion predictive vector is handled as encoding data, thus reducing the data amount.
  • In the variable-length encoding unit 107 according to embodiment 1, too, a predictive motion vector (PMV) is calculated, a difference vector (DMV) between a motion vector (MV) in the encoding objective area and the predictive motion vector (PMV) is calculated and the difference vector (DMV) is treated as encoding data. But, in a frame to be encoded in accordance with the video encoding apparatus and method according to embodiment 1, encoding objective image areas and interpolation image areas coexist as shown in Fig. 8C and therefore, for calculation of the predictive motion vector (PMV), a method different from the conventional encoding technique under H.264 standard is adopted.
  • A specified example of the technique based on the conventional H.264 standard will first be described with reference to Fig. 9A. Under the H.264 standard, a predictive motion vector (PMV) for an encoding objective area X is calculated by using a median of motion vectors used for encoding processes in areas A, B and C which are close to the encoding objective area X and which have been encoded in advance of the encoding objective area X. For calculation of the predictive motion vector, a process in common to the encoding and decoding processes needs to be executed.
  • Here, the process for encoding the motion vector in embodiment 1 of the invention will be described. The motion vector encoding process in embodiment 1 of the invention is applied only to encoding objective image areas, out of the encoding objective image areas and interpolation image areas. For an interpolation image area, a motion search for the interpolation image is carried out on the decoding side and therefore the motion vector encoding process is unnecessary.
  • In the motion vector encoding process in embodiment 1 of the invention, the process for calculating the predictive vector used for the motion vector encoding is changed depending on whether each of the blocks A, B, C and D close to the encoding objective area X shown in Figs. 9A, 9B and 9C is an encoding objective image area or an interpolation image area. The detailed process for the respective cases will be described hereunder.
  • Firstly, when all of the peripheral areas A, B and C are encoding objective image areas, a predictive motion vector is calculated by using a median of the motion vectors (MVA, MVB, MVC) used for the encoding process in the peripheral areas A, B and C, as in the case of the conventional H.264 standard.
  • Next, an instance will be described in which interpolation image areas are included in the areas peripheral of the encoding objective area X as shown in Figs. 9B and 9C. As described previously, no motion vector is encoded for an interpolation image area, that is, the motion vector used in the encoding process is not transmitted to the decoding side. Consequently, if the motion vector used in the encoding process were utilized for calculation of the predictive motion vector (PMV), the calculation could not be carried out during decoding. Therefore, the predictive motion vector (PMV) is calculated in embodiment 1 as follows.
  • Firstly, in an instance where the areas peripheral of the encoding objective area X are all occupied by interpolation image areas as shown in Fig. 9B, the motion vectors used in the interpolation image generation process, that is, the motion vectors (MVCA, MVCB, MVCC) calculated by the motion searcher 302 of the interpolation image generation unit 104, are used. If the motion search in the motion searcher 302 is carried out in a unit of pixel, a plurality of motion vectors exist in each area, and so each of these vectors is calculated as a mean value of the plural motion vectors. Then, a median of the motion vectors (MVCA, MVCB, MVCC) is calculated as the predictive motion vector (PMV).
  • Next, in an instance where the areas A, B and C peripheral of the encoding objective area X are partly encoding objective image areas and partly interpolation image areas as shown in Fig. 9C, a motion vector MV used in the encoding process is used for each encoding image area, a motion vector MVC used in the interpolation image generation process is used for each interpolation image area, and a median of these motion vectors is calculated as the predictive motion vector (PMV).
  • Namely, in the Fig. 9C example, the peripheral areas A and C are encoding objective image areas and the peripheral area B is an interpolation image area. In this case, as shown at (1) in Fig. 9C, a median of the motion vectors (MVA,MVCB,MVC) is calculated as a predictive motion vector (PMV).
  • As a modified example of calculation of the predictive motion vector (PMV) in the case where the areas A, B and C peripheral of the encoding objective area X are partly encoding objective image areas and partly interpolation image areas as shown in Fig. 9C, a motion vector of an encoding image area may be selected preferentially and used. For example, when a peripheral area D positioned left above the encoding objective area X is an encoding image area in the Fig. 9C example, the MVCB of the peripheral area B representing the interpolation image area is not used, but a motion vector MVD used in the encoding process of the peripheral area D is used. Then, a median of the motion vectors (MVA, MVC, MVD) is calculated as the predictive motion vector (PMV).
  • If two of the peripheral areas A,B,C and D are encoding objective image areas, an average value of motion vectors MV of the two areas may be used as a predictive motion vector (PMV). If one of the peripheral areas A,B,C and D is an encoding objective image area, one motion vector MV may be used by itself as a predictive motion vector (PMV).
  • By preferentially selecting a motion vector of an encoding objective image area in this manner, the influence of a mismatch between the motion search in the interpolation image generation process on the encoding side and that on the decoding side can be reduced.
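  • The median selection can be sketched as below; the tuple representation and helper name are illustrative only, and the preferential substitution of area D is omitted.

```python
import statistics

def predictive_mv(neighbour_mvs):
    """PMV from the per-area vectors of A, B and C (MV or MVC as available)."""
    # component-wise median, as in the H.264-style prediction
    px = statistics.median(mv[0] for mv in neighbour_mvs)
    py = statistics.median(mv[1] for mv in neighbour_mvs)
    return (px, py)

# Fig. 9C(1) example: encoding-area MVs for A and C, interpolation MVC for B.
print(predictive_mv([(4, -2), (0, 1), (6, -1)]))  # -> (4, -1)
```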
  • As described above, according to the video encoding apparatus and method according to embodiment 1, the data compression rate can be improved.
  • Reference will now be made to Fig. 10 to describe a video decoding apparatus according to embodiment 1. The video decoding apparatus according to embodiment 1 comprises, for example, a variable-length decoding unit 1001 for decoding encoded data transmitted from the encoding side, a parsing unit 1002 for parsing the data subjected to variable-length decoding, a mode deciding unit 1009 for deciding, on the basis of the result of the parsing by the parsing unit 1002 and the result of the predictive error calculation by an interpolation image generation unit 1007, whether a decoding process or an interpolation image generation process is to be carried out, an inverse quantizing/inverse-frequency converting unit 1003 for applying inverse quantization/inverse-frequency conversion to data transmitted from the parsing unit 1002, an adder 1004 for adding data outputted from the inverse quantizing/inverse-frequency converting unit 1003 to a predictive image generated by a motion compensation unit 1006, a decoded image memory unit 1005 for storing data outputted from the adder 1004, the motion compensation unit 1006 being operative to perform motion compensation by using the data stored in the decoded image memory unit 1005, the interpolation image generation unit 1007 being operative to perform a motion search process and an interpolation pixel generation process by using the data obtained from the parsing unit 1002 and the decoded image memory unit 1005 to thereby generate an interpolation image, and an output unit 1008 for outputting to a video display unit either the interpolation image generated by the interpolation image generation unit 1007 or the decoded image delivered out of the adder 1004.
  • Details of operation in the individual processing units in the video decoding apparatus according to embodiment 1 will be described hereunder.
  • Firstly, by making reference to Fig. 11, details of the interpolation image generation unit 1007 will be described. The interpolation image generation unit 1007 includes a motion searcher 1101 and an interpolation pixel generator 1102. The motion searcher 1101 performs a process similar to that by the motion searcher 302 in Fig. 3 and the interpolation pixel generator 1102 performs a process similar to that by the interpolation pixel generator 303 in Fig. 3. Like the motion searcher 302, the motion searcher 1101 has a predictive error calculator 601 and a motion vector decider 602 and as in the course of encoding process, executes a predictive error calculation process and a motion vector calculation process. The predictive error calculation process and motion vector calculation process and the interpolation image generation process by means of the motion searcher 302 and interpolation pixel generator 303 are the same as those already described previously in connection with Fig. 5 and will not be described herein.
  • Turning now to Fig. 12, the flow of the video decoding method conducted with the video decoding apparatus according to embodiment 1 will be described. The process proceeds, for example, for each area. Firstly, the encoding stream is decoded by the variable-length decoding unit 1001 and is then sent to the parsing unit 1002 (S1201). Subsequently, in the parsing unit 1002, the decoded stream data is sorted by parsing and the encoded data is transmitted to the inverse quantizing/inverse-frequency converting unit 1003 and the interpolation image generation unit 1007 (S1202). Thereafter, in the parsing unit 1002, the picture type of the encoding objective frame is examined to decide whether the encoding objective frame is an encoding frame or an interpolation frame (S1203). If the encoding objective frame is an interpolation frame, the interpolation image generation unit 1007 performs a motion search process in respect of a decoding objective area by using plural decoded image areas which precede and succeed the objective frame in order of display time (S1204). Through a process similar to that effected by the motion searcher 302 in Fig. 3, the motion searcher 1101 calculates a minimum predictive error absolute value sum SADn(a,b) and determines a motion vector. Next, the mode deciding unit 1009 calculates the difference f'(SADn(a,b)) between the predictive error absolute value sum calculated by the motion searcher 1101 and the predictive error absolute value sums peripheral of the decoding objective area (S1205). Subsequently, the mode deciding unit 1009 decides whether the minimum predictive error absolute value sum SADn(a,b) calculated by the motion searcher 1101 is less than the threshold value S1, or whether the difference f'(SADn(a,b)) from the peripheral predictive error absolute value sums is greater than the threshold value S2 (S1206). With the predictive error absolute value sum SADn(a,b) determined as being less than the threshold value S1, or with the predictive error difference absolute value sum f'(SADn(a,b)) determined as being greater than the threshold value S2, the decoding objective area is determined to be an interpolation image area. Otherwise, the decoding objective area is determined to be an area which has been encoded as an encoding objective image area.
• Now, when the decoding objective area is determined to be an interpolation image area by the mode decision unit 1009, the interpolation pixel generator 1102 of the interpolation image generation unit 1007 generates interpolation pixels, whereby an interpolation image is generated and stored in the decoded image memory unit 1005 (S1207).
• On the other hand, if the encoding objective frame is not an interpolation frame (i.e., it is an encoding frame), or in case the mode decision unit 1009 determines that the decoding objective area is an area encoded as an encoding objective image area, the inverse quantizing/inverse-frequency converting unit 1003 applies an inverse quantization/inverse-frequency conversion process to the encoded data obtained from the parsing unit 1002 and decodes difference data (S1208). Thereafter, the motion compensation unit 1006 conducts a motion compensation process by using the header information obtained from the parsing unit 1002 and the motion vector, generating a predictive image (S1209). Subsequently, the adder 1004 adds the predictive image generated by the motion compensation unit 1006 and the difference data outputted from the inverse quantizing/inverse-frequency converting unit 1003 to generate a decoded image which in turn is stored in the decoded image memory unit 1005 (S1210). Finally, the output unit 1008 outputs the interpolation image generated in step S1207 or the decoded image generated in step S1210 (S1211), ending the process.
• To add, if the encoding objective area is based on inter-screen prediction in step S1209, the motion compensation unit 1006 calculates a predictive motion vector (PMV) on the basis of the motion vectors of areas peripheral to the decoding objective area, adds it to a difference vector (DMV) stored in the encoded data to thereby generate a motion vector (MV) of the decoding objective area, and performs a motion compensation process on the basis of the motion vector (MV). It is noted that the calculation process for the predictive motion vector (PMV) can be executed through a process similar to the calculation process for the predictive motion vector (PMV) on the encoding side as explained in connection with Figs. 9A to 9C and will not be described herein.
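• The MV reconstruction of step S1209 amounts to MV = PMV + DMV. The following sketch assumes a component-wise median of the left, above and above-right neighbour vectors for the PMV, which is a common block-codec convention; the patent's actual PMV rule is the one of Figs. 9A to 9C and is not reproduced here.

    def median_pmv(mv_left, mv_above, mv_above_right):
        # Assumed PMV rule: component-wise median of three neighbouring MVs.
        pmv_x = sorted([mv_left[0], mv_above[0], mv_above_right[0]])[1]
        pmv_y = sorted([mv_left[1], mv_above[1], mv_above_right[1]])[1]
        return (pmv_x, pmv_y)

    def reconstruct_mv(dmv, neighbours):
        # MV of the decoding objective area = PMV + DMV (step S1209).
        pmv = median_pmv(*neighbours)
        return (pmv[0] + dmv[0], pmv[1] + dmv[1])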
• According to the video decoding apparatus and method of embodiment 1 described above, data encoded by the encoding method, which improves the data compression rate as compared to the conventional encoding apparatus and method, can be decoded suitably.
• According to the video encoding apparatus and method and the video decoding apparatus and method of embodiment 1 described in the foregoing, encoded data improved in data compression rate can be generated, and the encoded data can be decoded properly.
  • [Embodiment 2]
• Next, embodiment 2 of the present invention will be described. Embodiment 2 of the invention differs from embodiment 1 in that flag data indicating whether an encoding objective image or an interpolation image is selected for each encoding objective area on the encoding side is included in the encoding stream. This enables the decoding side to easily decide whether an encoding image or an interpolation image is selected for the decoding objective area. As a result, the process during decoding can be simplified, reducing the amount of processing. Embodiment 2 will be described in greater detail hereinafter.
  • In a video encoding apparatus according to embodiment 2, the mode selection unit 105 in Fig. 1 in the video encoding apparatus of embodiment 1 is replaced with a mode selection unit 1304 in Fig. 13. The construction and operation of the remaining components are the same as those in embodiment 1 and will not be described herein.
• Firstly, in the mode selection unit 1304, a difference absolute value calculator 1301 calculates a difference between the input video divided by the area division unit 102 and an interpolation image generated by the interpolation image generation unit 104. Similarly, a difference absolute value calculator 1302 calculates a difference between the input video divided by the area division unit 102 and an encoding objective image generated by the encoding unit 103. Next, a decider 1303 selects the smaller of the difference absolute values calculated by the difference absolute value calculators 1301 and 1302 and outputs a decision flag (mode decision flag). For example, the mode decision flag may be "0" when the encoding objective image is selected and "1" when the interpolation image is selected.
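• A minimal sketch of this mode selection, assuming 8-bit sample arrays and a tie broken toward the encoding objective image (the tie rule is an assumption of this sketch, not stated in the text):

    import numpy as np

    def mode_decision_flag(source_area, encoded_area, interpolated_area):
        # Units 1301 and 1302: difference absolute values of the two
        # candidate reconstructions against the original input area.
        err_enc = np.abs(source_area.astype(np.int64)
                         - encoded_area.astype(np.int64)).sum()
        err_itp = np.abs(source_area.astype(np.int64)
                         - interpolated_area.astype(np.int64)).sum()
        # Decider 1303: "0" selects the encoding objective image,
        # "1" selects the interpolation image.
        return 1 if err_itp < err_enc else 0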
• Illustrated in Figs. 14A and 14B is an example of data stored in the encoded data memory unit 106 of the video encoding apparatus in embodiment 2. As will be seen from Figs. 14A and 14B, one-bit flag data indicating which of the encoding objective image and the interpolation image is selected for each encoding objective area is added. More particularly, the encoding stream outputted from the video encoding apparatus of embodiment 2 includes, for each encoding objective area, the flag data indicating whether the encoding objective image or the interpolation image is selected. Through this, it is possible to decide whether the decoding objective area is an area for which an encoding objective image is selected or one for which an interpolation image is selected, without resort to the calculation and comparison processes performed on the decoding side for the predictive error absolute value sum SADn(a,b) and the predictive error difference f'(SADn(a,b)) as in embodiment 1. Consequently, the process during decoding can be simplified and the processing amount can be reduced.
• According to the video encoding apparatus and method in embodiment 2 described above, differently from embodiment 1, the flag data indicating whether the encoding image or the interpolation image is selected for each encoding objective area is included in the output encoding stream. This enables the decoding side to easily decide whether the encoding objective image area or the interpolation image area is selected for the decoding objective area. Accordingly, the process during decoding can be simplified and the processing amount can be reduced.
• Next, a video decoding apparatus according to embodiment 2 will be described. The decoding apparatus of embodiment 2 is constructed similarly to that shown in Fig. 10 in connection with embodiment 1 and will not therefore be described herein.
  • Flow of process in the video decoding apparatus in embodiment 2 will be described below with reference to Fig. 15.
• The encoding stream inputted to the video decoding apparatus according to embodiment 2 includes, as shown in Figs. 14A and 14B, flag data indicating whether an encoding objective image or an interpolation image is selected for each encoding objective area. Firstly, the encoding stream is decoded by the variable-length decoding unit 1001 and sent to the parsing unit 1002 (S1501). Subsequently, in the parsing unit 1002, the decoded stream data is sorted by parsing; header information and a mode decision flag are transmitted to the mode decision unit 1009, whereas the encoded data is transmitted to the inverse quantizing/inverse-frequency converting unit 1003 (S1502). Thereafter, in the parsing unit 1002 or the mode decision unit 1009, it is decided, in accordance with the picture type of the encoding objective frame, whether the frame is an encoding frame or an interpolation frame (S1503).
  • Here, if the encoding objective frame is an interpolation frame, the mode decision unit 1009 decides in respect of a decoding objective area whether the mode decision flag transmitted from the parsing unit 1002 is 1 or 0 (S1504). With the mode decision flag being 1 (indicative of an area for which an interpolation image is selected), the decoding objective area is determined to correspond to an interpolation image area. When the mode decision flag is 0 (indicating an area for which an encoding image is selected), the decoding objective area is determined to correspond to an area which has been encoded as an encoding objective image area.
• Then, as the mode decision unit 1009 determines that the decoding objective area is an interpolation image area, the motion searcher 1101 of the interpolation image generation unit 1007 makes a motion search (S1505). Subsequently, on the basis of the result of the motion search by the motion searcher 1101, the interpolation pixel generator 1102 generates interpolation pixels, whereby an interpolation image is generated and stored in the decoded image memory unit 1005 (S1506).
• On the other hand, in case the encoding objective frame is not an interpolation frame (i.e., it is an encoding frame), or the mode decision unit 1009 determines that the decoding objective area corresponds to an area encoded as an encoding objective image area, the inverse quantizing/inverse-frequency converting unit 1003 applies an inverse quantization/inverse-frequency conversion process to the encoded data acquired from the parsing unit 1002 and decodes difference data (S1507). Next, the motion compensation unit 1006 executes a motion compensation process by using the header information captured from the parsing unit 1002 and the motion vector, and creates a predictive image (S1508). Next, the adder 1004 adds the predictive image generated by the motion compensation unit 1006 and the difference data delivered out of the inverse quantizing/inverse-frequency converting unit 1003, generating a decoded image which in turn is stored in the decoded image memory unit 1005 (S1509). Finally, the output unit 1008 outputs the interpolation image generated in step S1506 or the decoded image generated in step S1509 (S1510), ending the process.
• As set forth so far, according to the video decoding apparatus and method in embodiment 2, in addition to the effects attributable to embodiment 1, the decoding objective area can be decided to correspond to an area for which the encoding image is selected or an area for which the interpolation image is selected, without resort to the calculation and comparison processes for the predictive error absolute value sum SADn(a,b) and the predictive error difference absolute value sum f'(SADn(a,b)) as performed in embodiment 1. Accordingly, the process during decoding can be simplified and the processing amount can be reduced.
• As set forth so far, according to the video encoding apparatus and method and the video decoding apparatus and method in embodiment 2, encoded data improved in data compression rate can be generated, and the encoded data can be decoded properly.
  • [Embodiment 3]
• Next, embodiment 3 of the present invention will be described. In embodiment 1 of the invention, on the basis of a plurality of pictures which undergo the encoding process in advance (during decoding, the decoding process is carried out in advance), the interpolation image generation unit 104 generates, through the interpolation process (interpolation in the narrow sense), a pixel of a frame lying between the plurality of pictures in order of display.
• Contrarily, in embodiment 3 of the invention, a process distinct from interpolation proper (hereinafter referred to as extrapolation) is added, through which, on the basis of a plurality of pictures which undergo the encoding process in advance (during decoding, the decoding process is carried out in advance), a pixel of a frame preceding or succeeding the plurality of pictures in order of display is generated.
• The detailed construction and operation of the video encoding apparatus in embodiment 3 will be described hereinafter.
• Structurally, the video encoding apparatus according to embodiment 3 is constructed by adding, to the interpolation image generation unit 104 of the video encoding apparatus of embodiment 1, the operation of an interpolation image generation process based on extrapolation, and by adding an interpolation direction decision unit 1805 (see Fig. 18) so as to follow the interpolation image generation unit 104. The construction and operation of the remaining components are similar to those in embodiment 1 and will not be described herein.
• The extrapolation process to be added herein is sorted into two types, namely, a forward extrapolation process and a backward extrapolation process. For each type, the operation of the interpolation image generation unit 104 of the video encoding apparatus will be described.
• Firstly, the forward extrapolation process will be described. Here, an example will be described in which, in an input video as shown at (a) in Fig. 16, an extrapolation image of an extrapolation objective frame 1603 (B picture) is generated by using two encoding frames 1601 and 1602 which precede the extrapolation objective frame 1603 in order of display.
• In this case, for the purpose of determining a pixel of the extrapolation objective frame, a motion search described below is carried out in the motion searcher 302. As shown at (a) in Fig. 17, by using pixel values of the two encoding objective frames (1601, 1602) displayed precedently of the extrapolation objective frame 1603, a predictive error absolute value sum SADn(a,b) indicated in equation (4) is determined. Specifically, a pixel value fn-2(x-2dx, y-2dy) of a pixel 1700 on the encoding frame 1601 and a pixel value fn-1(x-dx, y-dy) of a pixel 1701 on the encoding frame 1602 are used. Here, R represents the objective area to which the extrapolation objective pixel belongs. The pixel 1700 on the encoding frame 1601 and the pixel 1701 on the encoding frame 1602 are so determined as to lie, in space-time, on the same straight line as the extrapolation objective pixel 1702 on the extrapolation objective frame 1603.

SAD_n(a,b) = \sum_{(i,j)\in R} | f_{n-1}(x - dx + i, y - dy + j) - f_{n-2}(x - 2dx + i, y - 2dy + j) |    (4)
• Next, a position (dx,dy) at which the predictive error absolute value sum indicated by equation (4) is minimized is determined, and through a process similar to that in the interpolation pixel generator 303 described in connection with embodiment 1, an extrapolation objective pixel is generated.
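• A sketch of the forward-extrapolation search of equation (4), assuming integer-pel displacements and block positions that keep every slice inside the frame (both assumptions of this sketch):

    import numpy as np

    def forward_extrapolation_search(f_nm1, f_nm2, x, y, bs, search):
        # Equation (4): pair f_{n-1}(x-dx, y-dy) with f_{n-2}(x-2dx, y-2dy)
        # so that both samples lie on one straight line through the
        # extrapolation objective pixel (x, y) in space-time.
        best = None
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                a = f_nm1[y - dy:y - dy + bs, x - dx:x - dx + bs]
                b = f_nm2[y - 2 * dy:y - 2 * dy + bs, x - 2 * dx:x - 2 * dx + bs]
                s = np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()
                if best is None or s < best[0]:
                    best = (s, dx, dy)
        return best  # (minimum SADn(a,b), dx, dy)

• The backward extrapolation of equation (5) below follows by mirroring the signs, pairing f_{n+1}(x+dx, y+dy) with f_{n+2}(x+2dx, y+2dy).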
  • As described above, generation of an extrapolation objective pixel based on the forward extrapolation process can be realized.
• The above-described forward extrapolation process is applicable provided that the two preceding encoding frames in order of display are encoded/decoded in advance; therefore, it can also be applied to the case of an extrapolation objective frame 1603 (P picture) as shown at (b) in Fig. 16.
  • Next, a backward extrapolation process will be described.
• Here, an example will be described in which, in the input video shown at (a) in Fig. 16, an extrapolation image of an extrapolation objective frame 1603 is generated by using two encoding frames 1604 and 1605 which succeed the extrapolation objective frame 1603 in order of display.
• In this case, for the purpose of determining a pixel of the extrapolation objective frame, a motion search described below is carried out in the motion searcher 302. As shown at (b) in Fig. 17, by using pixels inside the two encoding frames (1604, 1605) displayed backwardly of the extrapolation objective frame 1603, a predictive error absolute value sum SADn(a,b) indicated in equation (5) is determined. Specifically, a pixel value fn+1(x+dx, y+dy) of a pixel 1711 on the encoding frame 1604 and a pixel value fn+2(x+2dx, y+2dy) of a pixel 1712 on the encoding frame 1605 are used. Here, R represents the objective area to which the extrapolation objective pixel belongs.

• The pixel 1711 on the encoding frame 1604 and the pixel 1712 on the encoding frame 1605 are so determined as to lie, in space-time, on the same straight line as the extrapolation objective pixel 1710 on the extrapolation objective frame 1603.

SAD_n(a,b) = \sum_{(i,j)\in R} | f_{n+1}(x + dx + i, y + dy + j) - f_{n+2}(x + 2dx + i, y + 2dy + j) |    (5)
• Next, a position (dx,dy) at which the predictive error absolute value sum indicated by equation (5) is minimized is determined, and through a process similar to that in the interpolation pixel generator 303 described in connection with embodiment 1, an extrapolation objective pixel is generated.
  • As described above, generation of an extrapolation objective pixel based on the backward extrapolation process can be realized.
• In the interpolation image generation unit 104, the two kinds of extrapolation processes described above and the interpolation process similar to that in embodiment 1 are carried out, generating three kinds of interpolation images.
• Next, in the interpolation direction decision unit 1805 shown in Fig. 18, a motion search method is decided. The process in the interpolation direction decision unit 1805 will be described. Firstly, a difference absolute value between the interpolation image generated by performing the bi-directional motion search described in embodiment 1 and the input image is calculated by a difference absolute value calculator 1801. Subsequently, a difference absolute value between the interpolation image generated by performing the forward motion search described in the present embodiment and the input image is calculated by a difference absolute value calculator 1802. Also, a difference absolute value between the interpolation image generated by performing the backward motion search and the input image is calculated by a difference absolute value calculator 1803. Thereafter, a motion search direction decider 1804 selects the interpolation image for which the difference from the input image is smallest and outputs the selected result as a motion search direction decision flag. For example, the motion search direction decision flag may be 2-bit data: 00 indicating bi-directional, 01 indicating forward, and 10 indicating backward. The motion search direction decision flag thus generated is transmitted to the encoded data memory unit 106.
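• A minimal sketch of the decider 1804, assuming the three candidate interpolation images have already been generated (the array handling is an assumption of this sketch):

    import numpy as np

    def motion_search_direction_flag(source, itp_bi, itp_fwd, itp_bwd):
        # Units 1801-1803: difference absolute value of each candidate
        # interpolation image against the input image.
        err = lambda img: np.abs(source.astype(np.int64)
                                 - img.astype(np.int64)).sum()
        errors = {
            0b00: err(itp_bi),   # bi-directional interpolation
            0b01: err(itp_fwd),  # forward extrapolation
            0b10: err(itp_bwd),  # backward extrapolation
        }
        # Decider 1804: pick the direction with the smallest difference.
        return min(errors, key=errors.get)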
• Illustrated in Figs. 19A and 19B is an example of data to be stored in the encoded data memory unit 106. As shown in Figs. 19A and 19B, flag data indicating the direction from which the interpolation image is generated is added to each interpolation pixel area. In other words, the encoding stream outputted from the video encoding apparatus of embodiment 3 includes, for each area for which the interpolation image is selected, the flag data indicative of the interpolation direction in which the interpolation image is generated.
• In this manner, the number of kinds of interpolation image generation methods can be increased and, in addition to a B picture, a P picture can also be made an interpolation objective frame, thus decreasing the data amount.
• Further, in the case of a B picture, in addition to the bi-directional interpolation based on frames respectively preceding and succeeding the interpolation objective frame, the forward extrapolation for generating an interpolation image from two preceding encoding objective frames and the backward extrapolation for generating an interpolation image from two succeeding encoding objective frames can also be executed, so that improvements in picture quality can be expected.
• Especially in the case of an image in which the background and the foreground move differently, picture quality degrades considerably in an area where the background is concealed by the foreground and cannot be seen (an occlusion area) when the interpolation image is generated only bi-directionally; through the forward or backward extrapolation, this problem of quality degradation can be solved.
• As described above, differing from embodiment 1, the video encoding apparatus and method according to embodiment 3 include in the output encoding stream the flag data indicative of the interpolation direction for generation of an interpolation image. This ensures that the kinds of interpolation process executed on the decoding side can be increased and that, in addition to the B picture, the P picture can also be an interpolation objective frame, making it possible to further reduce the data amount. Further, a high picture quality of the B picture interpolation image can be achieved.
• Next, a video decoding apparatus according to embodiment 3 will be described. Structurally, in the decoding apparatus of embodiment 3, the motion searcher 1101 shown in Fig. 11 in embodiment 1 is replaced with a motion searcher 2005 in Fig. 20; the remaining components are similar to those in embodiment 1 and a description of them will not be given.
• The motion searcher 2005 in the decoding apparatus of embodiment 3 includes a motion search method decider 2001, a motion searcher 2002, a predictive error calculator 2003 and a motion vector decider 2004. The motion search method decider 2001 determines a bi-directional, forward or backward motion search method in accordance with the information of a motion search direction decision flag sent from the parsing unit 1002. After a motion search method has been determined, motion search, predictive error calculation and motion vector decision are carried out in the motion searcher 2002, the predictive error calculator 2003 and the motion vector decider 2004, respectively. The bi-directional search can be conducted similarly to that in embodiment 1, and the forward and backward searches can be processed similarly to those by the video encoding apparatus of the present embodiment.
  • Next, flow of the process in the video decoding apparatus of embodiment 3 will be described with reference to Fig. 21.
• Firstly, the variable-length decoding unit 1001 decodes an encoding stream in a variable-length fashion and sends it to the parsing unit 1002 (S2101). Next, the parsing unit 1002 sorts the decoded stream data by parsing and transmits the encoded data to the inverse quantizing/inverse-frequency converting unit 1003 and the interpolation image generation unit 1007 (S2102). Subsequently, the parsing unit 1002 decides the picture type of the encoding objective frame (S2103). If the encoding objective frame is an interpolation frame, the motion search method decider 2001 decides a motion search method using one of the motion search directions of bi-direction, forward direction and backward direction, on the basis of the motion search direction decision flag transmitted from the parsing unit 1002 (S2104). After the motion search method has been determined, a motion search is carried out in the motion searcher 2005 (S2105). The motion searcher 2005 calculates a predictive error absolute value sum and a motion vector and, through a process similar to that executed by the motion searcher 1101 of embodiment 1, also calculates a predictive error difference absolute value sum (S2106). Thereafter, when the predictive error absolute value sum is less than a threshold value S1 or the predictive error difference absolute value sum is greater than a threshold value S2 (S2107), the interpolation pixel generator 1102 generates an interpolation pixel through a process similar to that in embodiment 1 (S2108). On the other hand, when the encoding objective frame is not an interpolation frame, or the condition in S2107 is not met, the inverse quantizing/inverse-frequency converting unit 1003 carries out inverse quantization/inverse-frequency conversion; the result is added to data from the motion compensation unit 1006, and the resulting sum data is stored in the decoded image memory unit 1005 (S2109). Subsequently, by using the decoded image stored in the decoded image memory unit 1005 and the motion vector transmitted from the parsing unit 1002, the motion compensation unit 1006 carries out motion compensation, generates a decoded image and stores it in the decoded image memory unit 1005 (S2110). The decoded image or the interpolation image generated through the above method is outputted to the video display unit 1008 (S2111), thus ending the process.
• As described above, according to the video decoding apparatus and method in embodiment 3, a plurality of kinds of interpolation processes can be employed adaptively by performing the process using the motion search direction decision flag included in the encoding stream. Further, it is sufficient to execute the motion search process on the decoding side only once for the plural kinds of interpolation processes, and therefore the processing amount can be decreased to a great extent.
• According to the video encoding apparatus and method and the video decoding apparatus and method of embodiment 3 described so far, encoded data improved in data compression rate can be generated, and the encoded data can be decoded suitably.
  • [Embodiment 4]
• Next, a video encoding apparatus according to embodiment 4 of the invention will be described. The video encoding apparatus of embodiment 4 adds, to the video encoding apparatus of embodiment 1, the mode selection unit 1304 of embodiment 2 and the motion searcher 302 and interpolation direction decision unit 1805 of embodiment 3. Namely, the video encoding apparatus of embodiment 4 outputs an encoding stream including a mode decision flag and a motion search direction decision flag.
  • Individual constituents and contents of individual processes are similar to those described in connection with embodiments 1, 2 and 3 and will not be described herein.
• An example of data to be stored in the encoded data memory unit 106 in embodiment 4 is illustrated in Figs. 22A and 22B. As shown in Figs. 22A and 22B, a mode decision flag indicating whether the area is an encoding image area or an interpolation image area is added to each divisional area; further, in each interpolation image area, a motion search direction decision flag is added which indicates whether the bi-directional, forward or backward motion search method is to be executed.
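• For illustration, the per-area flag layout of Figs. 22A and 22B might be parsed as sketched below. The bit-reader helpers are assumptions of this sketch; the actual stream syntax is defined by the figures and is not reproduced here.

    def parse_area_flags(read_bit, read_two_bits):
        # One mode decision flag per divisional area; a 2-bit motion
        # search direction decision flag only for interpolation areas.
        mode = read_bit()                # 0: encoding image area, 1: interpolation image area
        direction = None
        if mode == 1:
            direction = read_two_bits()  # 00 bi-directional, 01 forward, 10 backward
        return mode, direction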
• In this manner, a video encoding apparatus and method can be realized which attain the effects of embodiment 2, namely simplifying the process and reducing the processing amount during decoding, and the effects of embodiment 3, namely making the P picture as well as the B picture an interpolation objective frame to further reduce the data amount and improving the picture quality of the B picture.
  • Next, a video decoding apparatus of embodiment 4 will be described. The construction of the video decoding apparatus of embodiment 4 is similar to that of embodiment 3 and will not be described herein.
• Turning now to Fig. 23, the flow of the process on a decoding objective area image in the decoding apparatus of embodiment 4 will be described. Firstly, the encoding stream is decoded by the variable-length decoding unit 1001 and is then sent to the parsing unit 1002 (S2301). Subsequently, in the parsing unit 1002, the decoded stream data is sorted by parsing, and a mode decision flag, a motion search direction decision flag and encoded data are transmitted to the inverse quantizing/inverse-frequency converting unit 1003 and the interpolation image generation unit 1007 (S2302). Thereafter, in the parsing unit 1002, it is decided, on the basis of the picture type of the encoding objective frame, whether the frame is an encoding frame or an interpolation frame (S2303). If the encoding objective frame is an interpolation frame, it is decided for a decoding objective area whether the mode decision flag transmitted from the parsing unit 1002 is 1 (indicating that the decoding objective area is an interpolation image) or not (S2304). With the mode decision flag being 1, the motion search method decider 2001 decides a motion search direction for the interpolation process on the basis of the motion search direction decision flag transmitted from the parsing unit 1002 (S2305); the motion searcher 2002, the predictive error calculator 2003 and the motion vector decider 2004 carry out the motion search, the predictive error calculation and the motion vector decision, respectively (S2306); and the interpolation pixel generator 1102 generates interpolation pixels by using the determined motion vector, thus generating an interpolation image (S2307).
• On the other hand, when the encoding objective frame is not an interpolation frame, or the mode decision flag is 0 in S2304, the inverse quantizing/inverse-frequency converting unit 1003 carries out inverse quantization/inverse-frequency conversion, adds data from the motion compensation unit 1006 and stores the resulting data in the decoded image memory unit 1005 (S2308). Subsequently, by using the decoded image stored in the decoded image memory unit 1005 and the motion vector transmitted from the parsing unit 1002, the motion compensation unit 1006 carries out motion compensation (S2309), generates a decoded image and stores it in the decoded image memory unit 1005 (S2310). The decoded image or the interpolation image generated through the above method is outputted to the video display unit 1008 (S2311), thus ending the process.
• As described above, according to embodiment 4, a video decoding apparatus and method can be realized which attain the effects of embodiment 2, namely simplifying the process during decoding and reducing the processing amount, and which can deal with a plurality of kinds of interpolation processes by performing the process using the motion search direction decision flag included in the encoding stream; as in embodiment 3, it is sufficient to execute the motion search process only once on the decoding side for the plural kinds of interpolation processes, and therefore the processing amount can be decreased to a great extent.
  • According to the video encoding apparatus and method and video decoding apparatus and method of embodiment 4 described so far, encoded data improved in data compression rate can be generated and the encoded data can be decoded suitably.
  • [Embodiment 5]
• Next, a video encoding apparatus according to embodiment 5 of the invention will be described. The video encoding apparatus according to embodiment 5 is constructed similarly to the video encoding apparatus of embodiment 2. However, while in embodiment 2 the mode selection unit 1304 generates a mode decision flag for each image block, the mode selection unit 1304 in embodiment 5 generates, when a plurality of blocks in which the decoding objective area is of an interpolation image (interpolation image mode blocks) occur in succession, a flag indicating the number of successive interpolation image mode blocks (interpolation image mode succession block number flag), and outputs an encoding stream including a single interpolation image mode succession block number flag for the plural successive interpolation image mode blocks. Individual constituents and the contents of individual processes in the video encoding apparatus according to embodiment 5 are similar to those described in connection with embodiments 1 and 2 and will not be described herein.
• For a block in which the decoding objective area corresponds to an encoding objective image, an interpolation image mode exceptive mode flag, indicating that the block is of a mode other than the interpolation image mode, is generated and outputted. The interpolation image mode exceptive mode flag may simply indicate a mode other than the interpolation image mode or, alternatively, may indicate the kind of encoding mode itself (macro-block type and the like).
• Now, an example of data in the encoded data memory unit 106 in the video encoding apparatus of embodiment 5 is illustrated in Figs. 24A and 24B. Illustrated at (a) in Fig. 24B is data generated by the video encoding apparatus in embodiment 2, and illustrated at (b) in Fig. 24B is data generated by the video encoding apparatus in embodiment 5.
• In the data of embodiment 2 at (a) in Fig. 24B, there are many successive mode decision flags. Contrary thereto, in the data of embodiment 5 at (b) in Fig. 24B, only one interpolation image mode succession block number flag is inserted at a portion where the interpolation image mode blocks are in succession. At (b) in Fig. 24B, the numeral designated by the arrow in correspondence with an interpolation image mode succession block number flag shows an example of the number of successive interpolation image mode blocks indicated by that flag. More specifically, in the example at (b) in Fig. 24B, an interpolation image mode succession block number flag 2401 indicates the numeral "4", which demonstrates that the four successive blocks a, b, c and d constitute interpolation image mode blocks. Similarly, an interpolation image mode succession block number flag 2402 indicates the numeral "1", which demonstrates that the block "e" alone constitutes an interpolation image mode block. Again similarly, an interpolation image mode succession block number flag 2403 indicating "5" demonstrates that the five successive blocks f, g, h, i and j constitute interpolation image mode blocks. By using the interpolation image mode succession block number flag in this manner, the data of embodiment 5 shown at (b) in Fig. 24B can be reduced in data amount as compared to the data of embodiment 2 shown at (a) in Fig. 24B, in which the mode decision flag is added for every block. For each block in which the decoding objective area corresponds to an encoding objective image, the interpolation image mode exceptive mode flag is inserted.
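• The succession flag is in effect a run-length code over the per-block mode decision flags. A minimal sketch, with an illustrative token format of this sketch's own devising:

    def encode_mode_flags(mode_flags):
        # Replace each run of interpolation image mode blocks (flag 1)
        # with one succession-count token; emit one exceptive-mode token
        # per encoded block (flag 0).
        out, i = [], 0
        while i < len(mode_flags):
            if mode_flags[i] == 1:
                run = 1
                while i + run < len(mode_flags) and mode_flags[i + run] == 1:
                    run += 1
                out.append(("ITP_RUN", run))   # cf. flags 2401-2403
                i += run
            else:
                out.append(("EXCEPTIVE",))     # per-block exceptive mode flag
                i += 1
        return out

    # One block pattern consistent with the example of Fig. 24B(b):
    # [1,1,1,1, 0, 1, 0, 1,1,1,1,1] ->
    # [("ITP_RUN", 4), ("EXCEPTIVE",), ("ITP_RUN", 1), ("EXCEPTIVE",), ("ITP_RUN", 5)]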
• As described above, according to the video encoding apparatus and method in embodiment 5, in addition to the effects of embodiment 2 (simplifying the process during decoding and reducing the processing amount), the mode of plural blocks can be indicated by a single flag by using the interpolation image mode succession block number flag, and the encoded data amount can be reduced.
• Next, a video decoding apparatus according to embodiment 5 of the invention will be described. The construction of the video decoding apparatus according to embodiment 5 is similar to that of the video decoding apparatus of embodiment 2 and will not be described herein. The flow of the process in the video decoding apparatus according to embodiment 5, however, differs from that shown in Fig. 15 for the video decoding apparatus of embodiment 2 in the following points; the other points are similar to the flow shown in Fig. 15 and will not be described again. More particularly, in embodiment 2, when the mode decision flag is "1" in S1504 in Fig. 15, the interpolation image generation process following S1505 is carried out, whereas with the mode decision flag being "0", the video decoding process following S1507 is carried out.
• Contrarily, in embodiment 5, a flag is detected in S1504 in Fig. 15 and it is decided whether the flag is an interpolation image mode succession block number flag or an interpolation image mode exceptive mode flag. If the detected flag is the interpolation image mode succession block number flag, the interpolation image generation process following S1505 is carried out for the consecutive blocks of the number indicated by the flag. If the flag is the interpolation image mode exceptive mode flag, the video decoding process following S1507 is carried out for the block to which the flag corresponds. Thus, when the flag is the interpolation image mode succession block number flag and it indicates a numerical value of 2 or more, the image generation process can be determined for the plural blocks through one decision process.
• In this manner, in the video decoding apparatus according to embodiment 5, the process during decoding can be simplified further than in embodiment 2, reducing the processing amount.
• According to the video decoding apparatus and method in embodiment 5 described above, the image generation process can be determined for a plurality of blocks through a single decision process by using the interpolation image mode succession block number flag included in the encoding stream. Advantageously, this ensures that simplification of the process during decoding and reduction in the processing amount can be attained more extensively than in embodiment 2.
  • [Embodiment 6]
• Next, a video encoding apparatus according to embodiment 6 of the invention will be described. The construction of the video encoding apparatus according to embodiment 6 is similar to that of the video encoding apparatus of embodiment 4. However, while in embodiment 4 the mode selection unit 1304 generates a mode decision flag and a motion search direction decision flag for each image block, the mode selection unit 1304 in embodiment 6 generates, like embodiment 5, an interpolation image mode succession block number flag or an interpolation image mode exceptive mode flag, and generates one motion search direction decision flag per interpolation image mode succession block number flag. A detailed description of the motion search direction decision flag is the same as that in embodiments 3 and 4 and will not be given herein. Likewise, a detailed description of the interpolation image mode succession block number flag and the interpolation image mode exceptive mode flag is the same as that in embodiment 5 and will not be given herein. Individual constituents and the contents of individual processes in the video encoding apparatus according to embodiment 6 are similar to those described in connection with embodiments 1 to 5 and will not be described herein.
• An example of data to be stored in the encoded data memory unit 106 of the video encoding apparatus in embodiment 6 is illustrated in Figs. 25A and 25B. Illustrated at (a) in Fig. 25B is data generated by the video encoding apparatus in embodiment 4, and at (b) in Fig. 25B is data generated by the video encoding apparatus in embodiment 6. Similarly to the illustration at (b) in Fig. 24B, the numeral indicated by the arrow at (b) in Fig. 25B in correspondence with an interpolation image mode succession block number flag depicts an example of the number of successive interpolation image mode blocks indicated by that flag. More specifically, in the example at (b) in Fig. 25B, an interpolation image mode succession block number flag 2501 indicates the numeral "4", which demonstrates that the four successive blocks a, b, c and d constitute interpolation image mode blocks. This resembles embodiment 5. In embodiment 6, the motion search direction decision flag is generated for every interpolation image mode succession block number flag; therefore, following the interpolation image mode succession block number flag 2501, a motion search direction decision flag 2502 is inserted. Here, for the four blocks a, b, c and d, which are indicated as consecutive interpolation image mode blocks by the interpolation image mode succession block number flag 2501, an interpolation image is generated through the motion search method determined by the motion search direction indicated by the motion search direction decision flag 2502 accompanying the interpolation image mode succession block number flag 2501.
• In the data of embodiment 6 shown at (b) in Fig. 25B, by using the interpolation image mode succession block number flag in this manner, one motion search direction decision flag is inserted in the encoded data for each run of successive interpolation image mode blocks. In this case, the data amount can be reduced further than in the data of embodiment 4 shown at (a) in Fig. 25B, in which the mode decision flag and the motion search direction decision flag are added for every block. Embodiment 6 is similar to embodiment 5 in that the interpolation image mode exceptive mode flag is inserted for each block in which the decoding objective area corresponds to an encoded image.
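• Extending the earlier run-length sketch, the embodiment 6 layout might be expressed by attaching the 2-bit direction flag to each run token; the token format and the per-run 'directions' input are assumptions of this sketch:

    def encode_mode_flags_with_direction(mode_flags, directions):
        # One ("ITP_RUN", count, direction) token per run of interpolation
        # image mode blocks (cf. flags 2501 and 2502); one exceptive-mode
        # token per encoded block.
        out, i, run_idx = [], 0, 0
        while i < len(mode_flags):
            if mode_flags[i] == 1:
                run = 1
                while i + run < len(mode_flags) and mode_flags[i + run] == 1:
                    run += 1
                out.append(("ITP_RUN", run, directions[run_idx]))
                run_idx += 1
                i += run
            else:
                out.append(("EXCEPTIVE",))
                i += 1
        return out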
• According to the video encoding apparatus and method in embodiment 6 as described above, in addition to the effects of embodiment 4 (simplifying the process and reducing the processing amount during decoding, further reducing the data amount by making the P picture, in addition to the B picture, an interpolation objective frame, and improving the picture quality of the B picture), the mode and the motion search direction of plural blocks can each be indicated by a single flag by using the interpolation image mode succession block number flag, and the encoded data amount can be reduced.
• Next, a video decoding apparatus according to embodiment 6 of the invention will be described. The construction of the video decoding apparatus according to embodiment 6 of the invention is similar to that of the video decoding apparatus of embodiment 4 and will not be described herein. The flow of the process in the video decoding apparatus according to embodiment 6, however, differs from that of the video decoding apparatus of embodiment 4 shown in Fig. 23 in the following points; the other points are similar to the flow shown in Fig. 23 and will not be described again. More particularly, in embodiment 4, when the mode decision flag is "1" in S2304 in Fig. 23, the interpolation image generation process following S2305 is carried out, in which a motion search method is determined for each block on the basis of a motion search direction decision flag in S2305 and a motion search is carried out in S2306; with the mode decision flag being "0" in S2304, the video decoding process following S2308 is carried out.
• Contrarily, in embodiment 6, a flag is detected in S2304 in Fig. 23 and it is decided whether the flag is an interpolation image mode succession block number flag or an interpolation image mode exceptive mode flag. If the detected flag is the interpolation image mode succession block number flag, the interpolation image generation process following S2305 is carried out for the consecutive blocks of the number indicated by the flag. At that time, in S2305, on the basis of the motion search direction decision flag accompanying the interpolation image mode succession block number flag, the motion search method for interpolation image generation is determined for the plural consecutive blocks. In S2306, the motion search is conducted through the motion search method thus determined for the plural consecutive blocks. In S2307, an interpolation image is generated on the basis of the search result. If, in S2304, the flag is the interpolation image mode exceptive mode flag, the video decoding process following S2308 is carried out for the block to which the flag corresponds.
• In the above flow, while plural kinds of interpolation processes are dealt with through the process using the motion search direction decision flag, the image generation process can be determined for plural blocks through a single decision process when the flag is the interpolation image mode succession block number flag and indicates a numerical value of 2 or more.
• The video decoding apparatus according to embodiment 6 can thus simplify the process and reduce the processing amount during decoding further than embodiment 4, in addition to dealing with plural kinds of interpolation processes, which is the effect of embodiment 4.
• According to the video decoding apparatus and method in embodiment 6 described above, by dealing with plural kinds of interpolation processes and with the interpolation image mode succession block number flag included in the encoding stream, the image generation process can be determined for plural blocks through a single decision process. Advantageously, this ensures that simplification of the process and reduction in the processing amount during decoding can be attained more extensively than in embodiment 4.
• It will be appreciated that further embodiments can be worked out by modifying the interpolation image generation methods of the foregoing embodiments as in the first to third modifications below.
• The first modification will be described by making reference to Fig. 26. Illustrated in Fig. 26 is the interpolation image generation method of the first modification. In the first modification, the encoding/decoding objective frame is a single B picture existing between reference frames. In Fig. 26, fn represents the encoding/decoding objective frame, fn-1 an already encoded/decoded reference frame which precedes the objective frame in order of display and is positioned closest to it, and fn+1 an already encoded/decoded reference frame which succeeds the objective frame in order of display and is positioned closest to it.
• In the first modification, searching a motion vector MV(u,v) and calculating an interpolation pixel value fn(x,y) are realized through the following methods.
• Motion search in the first modification is carried out in units of blocks. For example, the motion search is started from the upper-left end in the frame fn-1 and from the lower-right end in the frame fn+1 so as to search in a horizontally and vertically symmetrical fashion. The total of the absolute error sums (SAD) of the two blocks is calculated, and a combination of blocks is selected for which the SAD is minimal and the MV is also minimal. Here, the motion search is carried out on, for example, a plane of 1/4-pixel accuracy. On the 1/4-pixel-accuracy plane, the block size for motion search is set to 64×64 pixels and, by sampling one pixel in every four, 16 pixels are used as sampling points. The motion search range is referenced to the center of the encoding objective block.
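• A sketch of this symmetric search, simplified to integer-pel accuracy on full-resolution frames (the 1/4-pel plane and the subsampled SAD of the text are omitted here), with ties on SAD broken toward the smaller |MV|:

    import numpy as np

    def symmetric_search(f_prev, f_next, cx, cy, bs, search):
        # First-modification search: candidate blocks are taken
        # symmetrically about the objective block centre, at
        # (cx - u/2, cy - v/2) on f_{n-1} and (cx + u/2, cy + v/2)
        # on f_{n+1}; u, v stepped by 2 so the halves stay integral.
        best = None
        for v in range(-search, search + 1, 2):
            for u in range(-search, search + 1, 2):
                a = f_prev[cy - v // 2:cy - v // 2 + bs,
                           cx - u // 2:cx - u // 2 + bs]
                b = f_next[cy + v // 2:cy + v // 2 + bs,
                           cx + u // 2:cx + u // 2 + bs]
                s = np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()
                key = (s, u * u + v * v)  # smaller SAD, then smaller |MV|
                if best is None or key < best[0]:
                    best = (key, (u, v))
        return best[1]  # MV(u, v)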
• For calculation of an interpolation pixel value fn(x,y) inside the encoding/decoding objective frame in the first modification, the motion vector MV(u,v) between the frame fn-1 and the frame fn+1 is used, and the calculation is executed pursuant to equation (6).

f_n(x, y) = { f_{n-1}(x - u/2, y - v/2) + f_{n+1}(x + u/2, y + v/2) } / 2    (6)
• In equation (6), fn(x,y) is calculated as an average of the pixels on the reference frames fn-1 and fn+1 representing the start and end points of MV(u,v), respectively. The reason for this is that in the first modification the encoding/decoding objective frame is a single B picture positioned centrally between the reference frames and is temporally equidistant from them. If there is a bias between the temporal distances to the two reference frames, the coefficient 1/2 by which u and v are multiplied in equation (6) may be changed in accordance with the bias; in this case, the smaller the temporal distance to a reference frame, the smaller the coefficient becomes. In such an instance, the pixel values on the individual reference frames fn-1 and fn+1 may also be multiplied by coefficients complying with the respective temporal distance biases; then, the smaller the temporal distance to a reference frame, the larger the coefficient becomes.
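• The temporal position can be captured by a single parameter. The following sketch generalises equation (6) with an assumed parameter alpha, the fractional position of the objective frame between the two references (alpha = 1/2 reproduces equation (6); alpha = 1/3 reproduces equation (7) of the second modification below); rounding to integer pixel positions is an assumption of this sketch:

    def interpolate_pixel(f_prev, f_next, x, y, u, v, alpha=0.5):
        # alpha: temporal position of the objective frame between
        # f_prev (alpha = 0) and f_next (alpha = 1).
        a = f_prev[int(round(y - alpha * v)), int(round(x - alpha * u))]
        b = f_next[int(round(y + (1 - alpha) * v)), int(round(x + (1 - alpha) * u))]
        # The nearer reference receives the larger weight.
        return (1 - alpha) * a + alpha * b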
  • The motion vector MV(u,v) and the interpolation pixel value fn(x,y) in the first modification can be obtained through the search method and calculation method described as above, respectively.
• Next, a second modification will be described with reference to Figs. 27A, 27B and 27C. In the second modification, the encoding/decoding objective frame is either of two B pictures existing between reference frames. In this case, the motion search is carried out only once for the two B pictures. In Fig. 27A, fn represents a first encoding/decoding objective frame, fn+1 a second encoding/decoding objective frame, fn-1 an already encoded/decoded reference frame which precedes the encoding/decoding objective frames in order of display and is positioned closest to them, fn+2 an already encoded/decoded reference frame which succeeds the encoding/decoding objective frames in order of display and is positioned closest to them, and fc a virtual central picture.
• In the second modification, searching a motion vector MV(u,v) and calculating an interpolation pixel value fn(x,y) of the first encoding/decoding objective frame and an interpolation pixel value fn+1(x,y) of the second encoding/decoding objective frame are realized through the following methods.
• Firstly, for the motion search in the second modification, the motion search range is centered on the encoding/decoding objective block position (x,y) of the virtual central picture fc. The remaining details of the calculation of the motion vector MV(u,v) are similar to those in the first modification and will not be described herein.
• An interpolation pixel value fn(x,y) of the first encoding/decoding objective frame and an interpolation pixel value fn+1(x,y) of the second encoding/decoding objective frame can be calculated by using the motion vector MV(u,v) between the frames fn-1 and fn+2 from equations (7) and (8), respectively.

f_n(x, y) = { 2 f_{n-1}(x - u/3, y - v/3) + f_{n+2}(x + 2u/3, y + 2v/3) } / 3    (7)

f_{n+1}(x, y) = { f_{n-1}(x - 2u/3, y - 2v/3) + 2 f_{n+2}(x + u/3, y + v/3) } / 3    (8)
• The calculation method pursuant to equation (7) will be described with reference to Fig. 27B, in which the illustration of Fig. 27A is shown in plan form. In the example, the pixel value at position (x,y) of the first encoding/decoding objective frame fn is calculated through the motion search referenced to the position (x,y) of the virtual central picture. In the example in Fig. 27B, the first encoding/decoding objective frame fn is distant from the reference frame fn-1 by 1/3 of the temporal distance between the reference frames fn-1 and fn+2, and from the reference frame fn+2 by 2/3 thereof. Accordingly, in equation (7), the pixel value fn(x,y) is calculated by referencing the position (x,y) of the first encoding/decoding objective frame fn, taking a pixel value on the reference frame fn-1 indicated by (1/3)MV, i.e. the motion vector MV multiplied by 1/3, and a pixel value on the reference frame fn+2 indicated by (2/3)MV, i.e. the motion vector MV multiplied by 2/3, multiplying each by a weight coefficient complying with the temporal distance to the respective reference frame, and summing the resultant products. Here, the shorter the temporal distance to a reference frame, the larger the weight coefficient becomes; in the example of Fig. 27B, the pixel value on the reference frame fn-1 is multiplied by 2/3 and the pixel value on the reference frame fn+2 is multiplied by 1/3.
• The calculation method pursuant to equation (8) will be described with reference to Fig. 27C. The calculation method pursuant to equation (8) is similar to that pursuant to equation (7) in that the motion vector MV(u,v) is used, that the position (x,y) of the second encoding/decoding objective frame fn+1 is referenced, that a pixel on each reference frame is selected by using a motion vector resulting from multiplication of the motion vector MV(u,v) by a coefficient in accordance with the temporal distance from the encoding/decoding objective frame to the reference frame, and that the selected pixel values are multiplied by weight coefficients complying with the temporal distances to the reference frames and added together. Fig. 27C differs from Fig. 27B only in that the relation of temporal distances from the encoding/decoding objective frame to the respective reference frames differs and the coefficients by which the motion vector MV(u,v) is multiplied are different; a detailed description will therefore be omitted.
• Even in the case of two B pictures existing between the reference frames as in the second modification, when the individual B pictures are not positioned at the 1/3 and 2/3 points of the temporal distance between the reference frames, the coefficients may be changed in accordance with the temporal distance to each reference frame.
• The second modification employs the aforementioned method for searching the motion vector MV(u,v) and the methods for calculating the interpolation pixel value fn(x,y) of the first encoding/decoding objective frame and the interpolation pixel value fn+1(x,y) of the second encoding/decoding objective frame.
• Namely, according to the second modification, interpolation pixel values can be calculated through one motion search for the individual pixels at the same position on the two encoding/decoding objective frames positioned between the reference frames.
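• A sketch of equations (7) and (8), filling in both objective frames from the single motion vector MV(u,v); the rounding of sub-pixel positions to integers is an assumption of this sketch:

    def interpolate_two_b_pictures(f_a, f_b, x, y, u, v):
        # f_a = reference f_{n-1}, f_b = reference f_{n+2}.
        def sample(frame, px, py):
            return frame[int(round(py)), int(round(px))]
        # Equation (7): first objective frame fn, at 1/3 of the interval.
        f_n = (2 * sample(f_a, x - u / 3, y - v / 3)
               + sample(f_b, x + 2 * u / 3, y + 2 * v / 3)) / 3
        # Equation (8): second objective frame fn+1, at 2/3 of the interval.
        f_n1 = (sample(f_a, x - 2 * u / 3, y - 2 * v / 3)
                + 2 * sample(f_b, x + u / 3, y + v / 3)) / 3
        return f_n, f_n1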
• Next, a third modification will be described with reference to Fig. 28. The third modification generalizes the first and second modifications to the case where m B pictures exist between two reference frames. In Fig. 28, m B pictures from f1 (first B picture) to fm (m-th B picture) are inserted between reference frames fA and fB. Here, fC represents a virtual central picture which provides the reference, as in the second modification, when calculating the motion vector MV(u,v).
• In the third modification, when the k-th B picture fk shown in Fig. 28 is the encoding/decoding objective frame, an interpolation pixel value fk(x,y) can be calculated from equation (9).

f_k(x, y) = { (m+1-k) · f_A(x - (k/(m+1)) u, y - (k/(m+1)) v) + k · f_B(x + ((m+1-k)/(m+1)) u, y + ((m+1-k)/(m+1)) v) } / (m+1)    (9)
• The calculation method pursuant to equation (9) is similar to those pursuant to equations (7) and (8) in that the motion vector MV(u,v) is used, that the position (x,y) of the encoding/decoding objective frame fk is referenced, that a pixel on each reference frame is selected by using a motion vector resulting from multiplication of the motion vector MV(u,v) by a coefficient in accordance with the temporal distance from the encoding/decoding objective frame to the reference frame, and that the selected pixel values are multiplied by weight coefficients complying with the temporal distances to the reference frames and added together.
• The method described above for calculating the interpolation pixel value fk(x,y) of the encoding/decoding objective frame is employed in the third modification.
  • Namely, according to the third modification, an interpolation pixel value can be calculated through one motion search in respect of individual pixels at the same position on m encoding/decoding objective frames positioned between the reference frames.
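• Equation (9) reduces to equation (6) for m = 1, k = 1 and to equations (7) and (8) for m = 2, k = 1, 2. A sketch, with t = k/(m+1) as the temporal position of fk and sub-pixel positions rounded to integers (the rounding is an assumption of this sketch):

    def interpolate_kth_b_picture(f_a, f_b, x, y, u, v, k, m):
        # Equation (9): pixel of the k-th of m B pictures between the
        # reference frames f_a (= fA) and f_b (= fB), from one MV(u, v).
        t = k / (m + 1)  # temporal position of f_k within the interval
        def sample(frame, px, py):
            return frame[int(round(py)), int(round(px))]
        return ((1 - t) * sample(f_a, x - t * u, y - t * v)
                + t * sample(f_b, x + (1 - t) * u, y + (1 - t) * v))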
• In any of the interpolation image frame, interpolation image area, interpolation image mode and interpolation image mode block described in connection with the foregoing embodiments, an interpolation image is generated through the interpolation process based on motion prediction between reference images; therefore, they may also be expressed as an inter-reference image predictive frame, an inter-reference image motion predictive area, an inter-reference image motion predictive mode and an inter-reference image motion predictive mode block, respectively.
• The video encoding/decoding technique using the interpolation image, namely, the image encoding/decoding technique based on inter-reference image motion prediction described in connection with the foregoing embodiments, is advantageous over the conventional technique as described below.
  • More particularly, the bi-directional motion compensation prediction in H.264/AVC adopts the skip mode and the direct mode, in which motion information is generated predictively from the motion information of encoded blocks. Since the skip mode and the direct mode need no transmission of motion information, they are effective techniques for reducing the encoding amount. In the skip mode and the direct mode, however, the accuracy of prediction of the motion information is sometimes degraded. For example, the temporal direct mode, which utilizes the correlation of motion information in the time direction, uses the motion vector of the block (anchor block) located at the same position as the encoding objective block inside the reference image immediately succeeding the encoding objective image in order of display; when the anchor block is encoded inside the screen (intra-coded), no motion information can be acquired and the prediction accuracy is degraded. Likewise, the spatial direct mode, which utilizes the correlation of motion information in the spatial direction, uses the motion vectors of blocks peripheral to the encoding objective block; when the individual peripheral blocks move differently, the spatial correlation of the motion information decreases and the prediction accuracy is degraded. The temporal direct derivation is sketched below for reference.
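  A hedged sketch of the temporal direct derivation just described; the variable names follow the H.264/AVC convention (tb: temporal distance from the current picture to the list-0 reference, td: distance between the two references), but the standard's fixed-point rounding scheme is omitted:

    def temporal_direct_mvs(anchor_mv, tb, td):
        # Scale the anchor block's motion vector by the ratio of temporal
        # distances to derive the forward (list-0) and backward (list-1)
        # vectors. If the anchor block is intra-coded there is no anchor_mv
        # to scale, which is exactly the failure case described above.
        mv_l0 = (anchor_mv[0] * tb / td, anchor_mv[1] * tb / td)
        mv_l1 = (mv_l0[0] - anchor_mv[0], mv_l0[1] - anchor_mv[1])
        return mv_l0, mv_l1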
  • Contrary thereto, in the image encoding/decoding technique using the interpolation image described in connection with the foregoing embodiments, namely, the image encoding/decoding technique based on inter-reference image motion prediction, a block having a high correlation with both a forward reference image and a backward reference image is detected, and the motion vector so detected is used. Accordingly, even for an image liable to suffer degraded prediction accuracy in the skip mode and the direct mode, that is, an image in which the encoding objective block belongs to a moving area and the anchor block is encoded inside the screen (intra-coded), degradation in prediction accuracy can be suppressed.
  • Likewise, in the video encoding/decoding technique using an interpolation image described in connection with the foregoing embodiments, the motion vector is predicted without using the motion vectors of blocks peripheral to the encoding objective block. Therefore, even for an image liable to suffer degraded prediction accuracy in the skip mode and the direct mode, that is, an image whose peripheral blocks move differently, degradation in prediction accuracy can be suppressed.
  • In other words, with the video encoding/decoding technique according to the individual embodiments of the present invention, a higher data compression rate can be realized than with the conventional skip mode and direct mode; the decoder-side motion search that makes this possible is sketched below.
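  The following is a minimal sketch of such a decoder-side motion search between two reference frames; the block size, search range, symmetric half-vector displacement and SAD criterion are all assumptions of this illustration:

    import numpy as np

    def search_mv_between_references(fA, fB, x0, y0, bsize=8, srange=8):
        # Find MV(u, v) minimising the SAD between a block on fA displaced
        # by about -MV/2 and a block on fB displaced by about +MV/2 around
        # the block position (x0, y0) on the virtual central picture.
        # Because only decoded reference pictures are used, the decoder can
        # repeat the identical search, so no motion information needs to be
        # transmitted.
        h, w = fA.shape
        best_sad, best_mv = None, (0, 0)
        for v in range(-srange, srange + 1):
            for u in range(-srange, srange + 1):
                ax, ay = x0 - u // 2, y0 - v // 2
                bx, by = ax + u, ay + v      # total fA -> fB displacement
                if min(ax, ay, bx, by) < 0:
                    continue                 # candidate leaves the frame
                if max(ax, bx) + bsize > w or max(ay, by) + bsize > h:
                    continue
                sad = int(np.abs(
                    fA[ay:ay + bsize, ax:ax + bsize].astype(np.int64)
                    - fB[by:by + bsize, bx:bx + bsize].astype(np.int64)).sum())
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (u, v)
        return best_mv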
  • It should be further understood by those skilled in the art that although the foregoing description has been made on embodiments of the invention, the invention is not limited thereto and various changes and modifications may be made without departing from the spirit of the invention and the scope of the appended claims.
  • In particular, features, components and specific details of the structures of the above-described embodiments may be exchanged or combined to form further embodiments optimized for the respective application. As far as those modifications are readily apparent to an expert skilled in the art, they shall be disclosed implicitly by the above description without explicitly specifying every possible combination, for the sake of conciseness of the present description.

Claims (5)

  1. A method of decoding video, comprising the steps of:
    deciding (S1504), on the basis of a mode decision flag included in an encoding stream, whether an image of a decoding objective area is to be generated through a motion search process using an image finished with decoding or through a motion compensation process using data included in the encoding stream; and
    generating a decoded image by switching over, in accordance with the result of the decision in said decision step (S1504), between a motion search process using an image finished with decoding and a motion compensation process using data included in the encoding stream.
  2. A video decoding method according to claim 1, wherein when, in said decision step (S1504), a decoded image of said decoding objective area is determined to be generated through motion compensation using the data included in the encoding stream, the decoded image is generated in said image generation step by changing a method for calculating a predictive vector, depending on whether the individual plural image areas adjacent to the decoding objective area of said decoding objective frame are areas which have been processed as encoding image areas during encoding or areas which have been processed as interpolation image areas during encoding, and by performing a motion compensation.
  3. A video decoding method according to claim 1, wherein when, in said decision step (S1504), a decoded image of said decoding objective area is determined to be generated through motion compensation using data included in the encoding stream and the individual plural image areas adjacent to the decoding objective area of said decoding objective frame are areas which have been processed as interpolation image areas during encoding, the decoded image is generated (S1509) in said image generation step by calculating a predictive vector on the basis of motion vectors used in an interpolation process during decoding of said plural adjacent image areas and by performing a motion compensation using said predictive vector.
  4. A video decoding method according to claim 1, wherein when, in said decision step (S1504), a decoded image of said decoding objective area is determined to be generated through motion compensation using data included in the encoding stream, part of said plural image areas adjacent to the decoding objective area of said decoding objective frame are areas which have been processed as encoding objective image areas during encoding, and the rest of said plural adjacent image areas are areas which have been processed as interpolation image areas during encoding, the decoded image is generated (S1509) in said image generation step by calculating a predictive vector on the basis of a motion vector used in the motion compensation during decoding of the area which has been processed as the encoding objective image area and a motion vector used in the interpolation process during decoding of the area which has been processed as the interpolation image area during encoding, and by performing a motion compensation using said predictive vector.
  5. A video decoding method according to any of claims 1-4, wherein when, in said decision step (S1504), an image of said decoding objective area is determined to be generated through a motion search process using an image finished with decoding, the decoded image is generated by determining a motion search method on the basis of a motion search method decision flag included in the encoding stream, performing a motion search using images of plural frames finished with decoding on the basis of the determined motion search method, and calculating a pixel value of the decoding objective area on the basis of the pixel values of the pixels on said plural frames finished with decoding which are indicated by the motion vector determined by the motion search.
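  The control flow described by the claims can be sketched compactly as follows; every helper name below (read_mode_decision_flag, run_motion_search, and so on) is invented for illustration and does not come from the patent, and the component-wise median in predict_vector is an assumed combination rule:

    from statistics import median

    def predict_vector(adjacent_areas):
        # Predictive-vector rule of claims 2-4: each adjacent area
        # contributes the motion vector of its motion compensation if it was
        # processed as an encoding objective image area during encoding, or
        # the motion vector of its interpolation process if it was processed
        # as an interpolation image area.
        mvs = []
        for area in adjacent_areas:
            if area.kind == "encoded":
                mvs.append(area.mc_motion_vector)
            else:  # processed as an interpolation image area
                mvs.append(area.interp_motion_vector)
        return (median(mv[0] for mv in mvs), median(mv[1] for mv in mvs))

    def decode_area(area, stream, decoded_pictures,
                    run_motion_search, interpolate, motion_compensate):
        # Claim 1: the mode decision flag read from the encoding stream
        # selects, per decoding objective area, between a motion search over
        # already-decoded pictures and ordinary motion compensation from
        # data in the stream. The three callables stand in for machinery the
        # patent describes elsewhere.
        if stream.read_mode_decision_flag(area):      # decision step (S1504)
            # Interpolation path (claim 5): repeat the encoder's motion
            # search on decoded pictures; no motion data is read here.
            method = stream.read_motion_search_method_flag(area)
            mv = run_motion_search(decoded_pictures, area, method)
            return interpolate(decoded_pictures, area, mv)
        # Motion compensation path (claims 2-4): combine the predictive
        # vector with the motion data carried in the stream.
        pv = predict_vector(area.adjacent_areas)
        return motion_compensate(decoded_pictures, area, pv,
                                 stream.read_motion_data(area))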
EP20120157322 2008-11-26 2009-11-25 Video decoding method Withdrawn EP2466893A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2008300342 2008-11-26
JP2009089678A JP5325638B2 (en) 2008-11-26 2009-04-02 Image decoding method
EP20090177079 EP2192782A3 (en) 2008-11-26 2009-11-25 Video decoding method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
EP09177079.2 Division 2009-11-25

Publications (1)

Publication Number Publication Date
EP2466893A1 true EP2466893A1 (en) 2012-06-20

Family

ID=41820402

Family Applications (2)

Application Number Title Priority Date Filing Date
EP20120157322 Withdrawn EP2466893A1 (en) 2008-11-26 2009-11-25 Video decoding method
EP20090177079 Withdrawn EP2192782A3 (en) 2008-11-26 2009-11-25 Video decoding method

Family Applications After (1)

Application Number Title Priority Date Filing Date
EP20090177079 Withdrawn EP2192782A3 (en) 2008-11-26 2009-11-25 Video decoding method

Country Status (5)

Country Link
US (1) US8798153B2 (en)
EP (2) EP2466893A1 (en)
JP (1) JP5325638B2 (en)
CN (1) CN101742331B (en)
BR (1) BRPI0904534A2 (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4518111B2 (en) * 2007-07-13 2010-08-04 ソニー株式会社 Video processing apparatus, video processing method, and program
EP2528332A4 (en) 2010-01-19 2015-08-05 Samsung Electronics Co Ltd Method and apparatus for encoding/decoding images using a motion vector of a previous block as a motion vector for the current block
WO2011142221A1 (en) * 2010-05-11 2011-11-17 シャープ株式会社 Encoding device and decoding device
JP5784596B2 (en) * 2010-05-13 2015-09-24 シャープ株式会社 Predicted image generation device, moving image decoding device, and moving image encoding device
CN103416064A (en) * 2011-03-18 2013-11-27 索尼公司 Image-processing device, image-processing method, and program
US9350972B2 (en) * 2011-04-28 2016-05-24 Sony Corporation Encoding device and encoding method, and decoding device and decoding method
TW201345262A (en) * 2012-04-20 2013-11-01 Novatek Microelectronics Corp Image processing circuit and image processing method
CN103634606B (en) * 2012-08-21 2015-04-08 腾讯科技(深圳)有限公司 Video encoding method and apparatus
US9524008B1 (en) * 2012-09-11 2016-12-20 Pixelworks, Inc. Variable frame rate timing controller for display devices
JP2014082541A (en) * 2012-10-12 2014-05-08 National Institute Of Information & Communication Technology Method, program and apparatus for reducing data size of multiple images including information similar to each other
WO2018142596A1 (en) * 2017-02-03 2018-08-09 三菱電機株式会社 Encoding device, encoding method, and encoding program
US11032541B2 (en) * 2018-10-22 2021-06-08 Tencent America LLC Method and apparatus for video coding
CN112738527A (en) * 2020-12-29 2021-04-30 深圳市天视通视觉有限公司 Video decoding detection method and device, electronic equipment and storage medium

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6907073B2 (en) 1999-12-20 2005-06-14 Sarnoff Corporation Tweening-based codec for scaleable encoders and decoders with varying motion computation capability
US6816552B2 (en) 2001-07-11 2004-11-09 Dolby Laboratories Licensing Corporation Interpolation of video compression frames
JP2004007379A (en) * 2002-04-10 2004-01-08 Toshiba Corp Method for encoding moving image and method for decoding moving image
KR101108661B1 (en) 2002-03-15 2012-01-25 노키아 코포레이션 Method for coding motion in a video sequence
JP2004023458A (en) * 2002-06-17 2004-01-22 Toshiba Corp Moving picture encoding/decoding method and apparatus
AU2003265075A1 (en) * 2002-10-22 2004-05-13 Koninklijke Philips Electronics N.V. Image processing unit with fall-back
EP1715694A4 (en) * 2004-02-03 2009-09-16 Panasonic Corp Decoding device, encoding device, interpolation frame creating system, integrated circuit device, decoding program, and encoding program
CN101147399B (en) 2005-04-06 2011-11-30 汤姆森许可贸易公司 Method and apparatus for encoding enhancement layer video data
JP2007082030A (en) 2005-09-16 2007-03-29 Hitachi Ltd Video display device
US8750387B2 (en) * 2006-04-04 2014-06-10 Qualcomm Incorporated Adaptive encoder-assisted frame rate up conversion
JP2008135980A (en) 2006-11-28 2008-06-12 Toshiba Corp Interpolation frame generating method and interpolation frame generating apparatus
JP2008277932A (en) 2007-04-26 2008-11-13 Hitachi Ltd Image encoding method and its apparatus
US20080285656A1 (en) * 2007-05-17 2008-11-20 The Hong Kong University Of Science And Technology Three-loop temporal interpolation for error concealment of multiple description coding
JP2009071809A (en) * 2007-08-20 2009-04-02 Panasonic Corp Video display apparatus, and interpolated image generating circuit and its method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003333540A (en) 2002-05-13 2003-11-21 Hitachi Ltd Frame rate converting apparatus, video display apparatus using the same, and a television broadcast receiving apparatus

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
KAMP S ET AL: "Decoder-side MV derivation with multiple ref pics", 33. VCEG MEETING; 83. MPEG MEETING; 12-1-2008 - 13-1-2008; ANTALYA; (VIDEO CODING EXPERTS GROUP OF ITU-T SG.16),, no. VCEG-AH15, 12 January 2008 (2008-01-12), XP030003553 *
KAMP S ET AL: "Multi-Hypothesis Prediction with Decoder Side Motion Vector Derivation", 27. JVT MEETING; 6-4-2008 - 10-4-2008; GENEVA, ; (JOINT VIDEO TEAM OFISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ),, no. JVT-AA040, 24 April 2008 (2008-04-24), XP030007383, ISSN: 0000-0091 *
STEFFEN KAMP ET AL: "Decoder Side Motion Vector Derivation", 82. MPEG MEETING; 22-10-2007 - 26-10-2007; SHENZHEN; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. M14917, 16 October 2007 (2007-10-16), XP030043523 *
STEFFEN KAMP ET AL: "Improving AVC compression performance by template matching with decoder-side motion vector derivation", 84. MPEG MEETING; 28-4-2008 - 2-5-2008; ARCHAMPS; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11),, no. M15375, 25 April 2008 (2008-04-25), XP030043972 *

Also Published As

Publication number Publication date
US20100128792A1 (en) 2010-05-27
BRPI0904534A2 (en) 2011-02-08
EP2192782A3 (en) 2011-01-12
CN101742331A (en) 2010-06-16
US8798153B2 (en) 2014-08-05
CN101742331B (en) 2013-01-16
JP5325638B2 (en) 2013-10-23
EP2192782A2 (en) 2010-06-02
JP2010154490A (en) 2010-07-08

Similar Documents

Publication Publication Date Title
EP2466893A1 (en) Video decoding method
US10142650B2 (en) Motion vector prediction and refinement using candidate and correction motion vectors
US9247250B2 (en) Method and system for motion compensated picture rate up-conversion of digital video using picture boundary processing
US8325805B2 (en) Video encoding/decoding apparatus and method for color image
US6404814B1 (en) Transcoding method and transcoder for transcoding a predictively-coded object-based picture signal to a predictively-coded block-based picture signal
US7983341B2 (en) Statistical content block matching scheme for pre-processing in encoding and transcoding
US6483876B1 (en) Methods and apparatus for reduction of prediction modes in motion estimation
KR100446083B1 (en) Apparatus for motion estimation and mode decision and method thereof
EP0762776A2 (en) A method and apparatus for compressing video information using motion dependent prediction
WO2001050770A2 (en) Methods and apparatus for motion estimation using neighboring macroblocks
JP4462823B2 (en) Image signal processing apparatus and processing method, coefficient data generating apparatus and generating method used therefor, and program for executing each method
US7746930B2 (en) Motion prediction compensating device and its method
CN1695381A (en) Sharpness enhancement in post-processing of digital video signals using coding information and local spatial features
JP2006279917A (en) Dynamic image encoding device, dynamic image decoding device and dynamic image transmitting system
JP3946781B2 (en) Image information conversion apparatus and method
JP2001285881A (en) Digital information converter and method, and image information converter and method
JP4078906B2 (en) Image signal processing apparatus and processing method, image display apparatus, coefficient data generating apparatus and generating method used therefor, program for executing each method, and computer-readable medium storing the program
US20040013197A1 (en) High-speed motion estimator and method with variable search window
JP5598199B2 (en) Video encoding device
JP2868445B2 (en) Moving image compression method and apparatus
JPH0851598A (en) Image information converter
US20110142127A1 (en) Image Processing Device
JP2941693B2 (en) Motion vector search device
JP4001143B2 (en) Coefficient generation apparatus and method
JP2001309389A (en) Device and method for motion vector conversion

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20120309

AC Divisional application: reference to earlier application

Ref document number: 2192782

Country of ref document: EP

Kind code of ref document: P

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20150602