USRE43238E1 - Picture signal transmitting method and apparatus - Google Patents

Picture signal transmitting method and apparatus

Info

Publication number
USRE43238E1
Authority
US
United States
Prior art keywords
picture
video signal
signal
picture type
encoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
US12/152,290
Inventor
Katsumi Tahara
Hiromi Yoshinari
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/219,472 (US5473380A)
Priority claimed from JP13065194A (JP3653745B2)
Application filed by Sony Corp
Priority to US12/152,290
Application granted
Publication of USRE43238E1
Anticipated expiration
Legal status: Expired - Lifetime

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/24Systems for the transmission of television signals using pulse code modulation
    • H04N7/52Systems for transmission of a pulse code modulated video signal with one or more other pulse code modulated signals, e.g. an audio signal or a synchronizing signal
    • H04N7/54Systems for transmission of a pulse code modulated video signal with one or more other pulse code modulated signals, e.g. an audio signal or a synchronizing signal the signals being synchronous
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/172Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/23614Multiplexing of additional data and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/434Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams, extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
    • H04N21/4348Demultiplexing of additional data and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/79Processing of colour television signals in connection with recording
    • H04N9/80Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction
    • H04N9/8047Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction using transform coding

Definitions

  • the present invention relates to coding and decoding of a picture signal for transmission, and, more particularly, is directed to matching the type of predictive coding applied to pictures of the picture signal.
  • moving picture signals are compressed and encoded by taking advantage of intra-frame and inter-frame correlation so that they can be more efficiently transmitted over a communication channel to a remote location.
  • Intra-frame correlation can be utilized by an orthogonal transformation, such as a discrete cosine transformation (DCT).
  • Inter-frame correlation can be utilized by predictive encoding between successive pictures.
  • a picture generally refers to an image represented by a frame.
  • When the fields of a frame are coded in a non-interlaced manner, that is, separately, each field may be referred to as a picture.
  • As shown in FIGS. 1A and 1B, frame pictures PC1, PC2 and PC3 are generated at time points t1, t2 and t3.
  • The difference between the frame pictures PC1 and PC2 is obtained as difference picture data PC12, and the difference between the frame pictures PC2 and PC3 is obtained as difference picture data PC23. Since there is a fairly small change between signals of temporally neighboring frames, transmission of only the difference picture data utilizes the transmission channel more efficiently than transmission of the original pictures. That is, using the difference picture data as encoded picture signals reduces the amount of data to be transmitted.
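The frame-differencing idea above can be sketched in a few lines; this is an illustrative example only (not part of the patent text), assuming 8-bit grayscale frames held in NumPy arrays:

```python
import numpy as np

def difference_picture(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Per-pixel difference between two temporally neighboring frames.

    Neighboring frames of typical video are highly correlated, so the
    difference picture is mostly near zero and encodes far more compactly
    than the original picture."""
    return curr_frame.astype(np.int16) - prev_frame.astype(np.int16)

# Illustration with synthetic frames standing in for PC1, PC2 and PC3.
pc1 = np.random.randint(0, 256, (480, 720), dtype=np.uint8)
pc2 = np.clip(pc1.astype(np.int16) + np.random.randint(-2, 3, pc1.shape), 0, 255).astype(np.uint8)
pc3 = np.clip(pc2.astype(np.int16) + np.random.randint(-2, 3, pc2.shape), 0, 255).astype(np.uint8)

pc12 = difference_picture(pc1, pc2)   # difference picture data PC12
pc23 = difference_picture(pc2, pc3)   # difference picture data PC23
```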
  • Pictures which are encoded utilizing only intra-frame correlation, and not inter-frame correlation, are referred to herein as intra-pictures or I-pictures.
  • Pictures which are encoded with predictive encoding relative to one previously encoded picture are referred to herein as predictive coded pictures or P-pictures. The previously encoded picture may be an I-picture or a P-picture, and temporally precedes the P-picture.
  • Pictures which are encoded with predictive encoding relative to at most two pictures, a temporally preceding and a temporally succeeding picture, are referred to herein as bi-directionally predictive coded pictures or B-pictures.
  • the two pictures may each be an I-picture or a P-picture. When both are used, the mean value of the two pictures is obtained and used as a reference picture for the picture to be encoded.
  • a series of pictures may be considered as groups of pictures having a predetermined number of frames such as F 1 . . . F 17 .
  • the luminance and chrominance picture signals of the leading frame F 1 are encoded as an I-picture
  • the picture signals of the second frame F 2 are encoded as a B-picture
  • the picture signals of the third frame F 3 are encoded as a P-picture.
  • the fourth and the following frames F 4 to F 17 are encoded alternately as B-pictures and P-pictures.
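The 17-frame group-of-pictures pattern just described can be enumerated directly; a minimal sketch (illustrative only):

```python
def gop_picture_types(num_frames: int = 17) -> list[str]:
    """Picture type of each frame in the group of pictures described above:
    frame 1 is an I-picture, frame 2 a B-picture, frame 3 a P-picture, and
    the remaining frames alternate B, P, B, P, ..."""
    types = []
    for n in range(1, num_frames + 1):
        if n == 1:
            types.append("I")
        elif n % 2 == 0:
            types.append("B")
        else:
            types.append("P")
    return types

print(gop_picture_types())   # ['I', 'B', 'P', 'B', 'P', ..., 'B', 'P']
```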
  • FIG. 2A shows the reference pictures used for encoding P-pictures
  • FIG. 2B shows the reference pictures used for encoding B-pictures.
  • Blocks F 1 to F 5 in FIG. 3A represent data for frames of moving picture signals
  • blocks F 1 X to F 5 X in FIG. 3B represent data for encoded frames.
  • the solid line arrows in FIG. 3A show the frames to which motion vectors x 1 . . . x 6 relate.
  • The first method, shown as SP1, is to not use predictive encoding, that is, to use only intra-frame correlation. This method is suitable for any macro-block of an I-picture, a P-picture or a B-picture; in other words, if less encoded data is produced without predictive encoding, then this method is selected.
  • The second method, shown as SP2, is to predictively encode relative to a picture which temporally succeeds the current picture, referred to as backward predictive encoding. The second method is suitable for macro-blocks of only B-pictures.
  • The third method, shown as SP3, is to predictively encode relative to a picture which temporally precedes the current picture, referred to as forward predictive encoding. The third method is suitable for macro-blocks of P-pictures and B-pictures.
  • The fourth method, shown as SP4, is to predictively encode relative to the mean value of two pictures, one temporally preceding and one temporally succeeding the current picture. This method is suitable for macro-blocks of only B-pictures.
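The four macro-block coding choices SP1 to SP4 can be summarized as follows; a simplified sketch (the names and the interface are illustrative, and the reference blocks are assumed to be already motion-compensated):

```python
import numpy as np

def prediction_residual(block: np.ndarray,
                        forward_ref: np.ndarray,
                        backward_ref: np.ndarray,
                        method: str) -> np.ndarray:
    """Data actually encoded for one macro-block under methods SP1-SP4."""
    b = block.astype(np.int16)
    if method == "SP1":                                  # intra: no prediction
        return b
    if method == "SP2":                                  # backward prediction
        return b - backward_ref.astype(np.int16)
    if method == "SP3":                                  # forward prediction
        return b - forward_ref.astype(np.int16)
    if method == "SP4":                                  # bi-directional prediction
        mean_ref = (forward_ref.astype(np.int16) + backward_ref.astype(np.int16)) // 2
        return b - mean_ref
    raise ValueError(f"unknown method: {method}")
```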
  • the first frame F 1 is encoded as an I-picture using the first method SP 1 so that it is directly transmitted over a transmission channel as encoded data F 1 X.
  • the third frame F 3 is encoded as a P-picture.
  • When the third method SP3, forward predictive coding, is used to encode a macro-block of the third frame F3, difference signals from the temporally preceding frame F1, used as the reference picture as indicated by a broken-line arrow SP3, and a motion vector x3 between the reference picture F1 and the current picture F3 are calculated and encoded as data F3X for that macro-block.
  • Alternatively, the first method SP1 can be used, wherein the data of the original frame F3 are directly utilized as the transmission data F3X for that macro-block.
  • the second frame F 2 is encoded as a B-picture.
  • When the fourth method SP4 is used to encode a macro-block of the second frame F2, a difference between the frame F2 and the mean value of the temporally preceding frame F1 and the temporally succeeding frame F3 is calculated, on a pixel by pixel basis.
  • the difference data and the motion vectors x 1 and x 2 are encoded as data F 2 X.
  • When the first method SP1 is used, the data of the original frame F2 forms the encoded data F2X.
  • When one of the second or third methods SP2, SP3 is used to encode a macro-block of the frame F2, one of the difference between the temporally succeeding frame F3 and the current frame F2, and the difference between the temporally preceding frame F1 and the current frame F2, is calculated.
  • the difference data and one of the motion vectors x 1 , x 2 are encoded as the data F 2 X.
  • the frame F 4 for the B-picture and the frame F 5 for the P-picture are processed in a similar manner as described above to generate transmitted data F 4 X and F 5 X.
  • FIG. 4 illustrates an arrangement for encoding and decoding moving picture signals in accordance with the above-described predictive encoding scheme.
  • an encoding device 1 encodes input picture signals and transmits the encoded signals to a recording medium 3 as a transmission channel for recording.
  • a decoding device 2 reproduces the signals recorded on the recording medium 3 and decodes these as output signals.
  • the encoding device 1 includes an input terminal 10 , a pre-processing circuit 11 , A/D converters 12 and 13 , a frame memory 14 including a luminance signal frame memory 15 and a color difference signal frame memory 16 , a format converting circuit 17 and an encoder 18 .
  • Input terminal 10 is adapted to receive a video signal VD and to supply the signal VD to pre-processing circuit 11 which functions to separate the video signal VD into luminance signals and color signals, herein chrominance or color difference signals, that are applied to analog-to-digital (A/D) converters 12 and 13 , respectively.
  • The video signals, digitized by analog-to-digital conversion by the A/D converters 12 and 13, are supplied to frame memory 14 having memories 15, 16 which function to store the luminance signals and the color difference signals, respectively, and to read out the signals stored therein to format converting circuit 17.
  • the converter 17 is operative to convert frame format signals stored in the frame memory section 14 into block format signals. As shown in FIG. 5A , pictures are stored in the frame memory section 14 as frame-format data having V lines each consisting of H dots.
  • The converting circuit 17 divides each frame into N slices, each slice comprising a multiple of 16 lines. As shown in FIG. 5B, the converter 17 divides each slice into M macro-blocks. As shown in FIG. 5C, each macro-block represents luminance signals Y corresponding to 16 × 16 pixels or dots, and associated chrominance Cr, Cb signals. These luminance signals are sub-divided into blocks Y1 to Y4, each consisting of 8 × 8 dots. The 16 × 16 dot luminance signals are associated with 8 × 8 dot Cb signals and 8 × 8 dot Cr signals.
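The frame-to-block format conversion can be pictured with the following sketch, assuming a 4:2:0 frame whose dimensions are multiples of 16 and ignoring slice bookkeeping (the helper name is illustrative):

```python
import numpy as np

def to_macroblocks(y: np.ndarray, cb: np.ndarray, cr: np.ndarray) -> list[dict]:
    """Split a frame into macro-blocks: four 8x8 luminance blocks Y1..Y4 plus
    one 8x8 Cb block and one 8x8 Cr block per 16x16 luminance area.
    y has shape (V, H); cb and cr are subsampled to (V/2, H/2)."""
    V, H = y.shape
    macroblocks = []
    for row in range(0, V, 16):
        for col in range(0, H, 16):
            y_mb = y[row:row + 16, col:col + 16]
            macroblocks.append({
                "Y1": y_mb[0:8, 0:8],  "Y2": y_mb[0:8, 8:16],
                "Y3": y_mb[8:16, 0:8], "Y4": y_mb[8:16, 8:16],
                "Cb": cb[row // 2:row // 2 + 8, col // 2:col // 2 + 8],
                "Cr": cr[row // 2:row // 2 + 8, col // 2:col // 2 + 8],
            })
    return macroblocks
```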
  • the converter 17 is also operative to supply the block format signals to the encoder 18 , which is described in detail below with reference to FIG. 6 .
  • the encoder 18 operates to encode the block format signals and to supply the encoded signals as a bitstream over a transmission channel for recording on the recording medium 3 .
  • the decoding device 2 includes a decoder 31 , a format converting circuit 32 , a frame memory section 33 including a luminance signal frame memory 34 and a color difference signal frame memory 35 , digital-to-analog converters 36 and 37 , a post-processing circuit 38 and an output terminal 30 .
  • the decoder 31 is operative to reproduce encoded data from the recording medium 3 and to decode the encoded data, as described in detail below with reference to FIG. 9 , and to supply decoded data signals to format converting circuit 32 which is operative to convert the decoded data signals into frame format data signals and to supply the frame format data signals as luminance signals and color difference signals to the memory 33 .
  • the memories 34 , 35 of the memory 33 function to store the luminance and chrominance signals, respectively, and to apply these signals to D/A converters 36 and 37 , respectively.
  • the analog signals from converters 36 , 37 are synthesized by a post-processing circuit 38 which functions to form output picture signals and to output them to output terminal 30 , and thence to a display unit, such as a CRT, not shown, for display.
  • FIG. 6 illustrates the encoder 18 shown in FIG. 4 .
  • the encoder 18 stores three pictures, the current picture and the pictures temporally preceding and succeeding the current picture. Based on the sequential position of the current picture in the group of pictures, the picture coding type (I, P or B) is selected for each picture.
  • the picture type sequence is determined by a user using picture type input device 65 , independent of the pictures applied to an input terminal 49 .
  • the encoder 18 also chooses one of frame-based and field-based predictive encoding as will be explained with reference to FIG. 7 , and further chooses one of frame-based and field-based DCT encoding as will be explained with reference to FIG. 8 .
  • For each picture appropriate motion vectors are obtained and the picture is predictively encoded relative to zero, one or two previously encoded pictures which have been locally decoded and which are referred to as reference pictures to form a difference data signal.
  • the difference data signal is orthogonally transformed into blocks of coefficient data which are quantized, variable length encoded and transmitted as encoded data.
  • the quantized data are dequantized, inverse orthogonally transformed, and stored as the reference pictures.
  • the predictive encoding applies the motion vector(s) obtained for the current picture to the reference picture(s) to produce a prediction picture which is subtracted from the current picture to yield the difference data.
  • Picture data for encoding is supplied macro-block by macro-block to the input terminal 49 and thence to a motion vector detection circuit 50 which is operative to process the picture data of respective frames as I-pictures, P-pictures or as B-pictures, in accordance with a predetermined sequence for each group of pictures, as shown for example, in FIGS. 2A , 2 B.
  • the circuit 50 applies the picture data of the current frame to a frame memory 51 having frame memories 51 a, 51 b, 51 c used for storing a temporally preceding picture, the current picture and a temporally succeeding picture, respectively.
  • the frames F 1 , F 2 , F 3 are stored in the memories 51 a, 51 b, 51 c, respectively. Then the picture stored in memory 51 c is transferred to memory 51 a. The frames F 4 , F 5 are stored in the memories 51 b, 51 c, respectively. The operations of transferring the picture in memory 51 c to memory 51 a and storing the next two pictures in memories 51 b, 51 c are repeated for the remaining pictures in the group of pictures.
  • After processing the first frame picture data in a group of pictures as an I-picture and before processing the second frame picture as a B-picture, the motion vector detection circuit 50 processes the third frame P-picture.
  • the processing sequence is different from the sequence in which the pictures are supplied because the B-picture may involve backward prediction, so subsequent decoding may require that the P-picture temporally succeeding the B-picture have been previously decoded.
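The reordering from display order to encoding order that this implies (every B-picture is coded after the I- or P-picture that temporally succeeds it) can be sketched as follows (illustrative only):

```python
def display_to_coding_order(types: list[str]) -> list[int]:
    """Return frame indices in encoding order: each I- or P-picture is moved
    ahead of the B-pictures that immediately precede it in display order."""
    order, pending_b = [], []
    for idx, t in enumerate(types):
        if t == "B":
            pending_b.append(idx)    # hold B-pictures until their backward reference is coded
        else:
            order.append(idx)        # code the I- or P-picture first
            order.extend(pending_b)  # then the held B-pictures
            pending_b = []
    order.extend(pending_b)          # any trailing B-pictures
    return order

print(display_to_coding_order(["I", "B", "P", "B", "P"]))   # [0, 2, 1, 4, 3]
```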
  • the motion vector detection circuit 50 calculates as an estimated value for intra-coding for each macro-block, the sum of absolute values of prediction errors for the frame prediction mode for each macro-block and the sum of absolute values of prediction errors for the field prediction mode for each macro-block and supplies these sums to the prediction decision circuit 54 which compares these sums and selects frame prediction mode or field prediction mode in accordance with the smallest of these values and provides the selected mode to the prediction mode changeover circuit 52 .
  • In the frame prediction mode, the prediction mode changeover circuit 52 outputs the four luminance blocks Y1 to Y4 and the two chrominance or color difference blocks Cb, Cr of each macro-block received from the motion vector detection circuit 50 without processing.
  • In FIG. 7, odd or first field line data is indicated by solid lines, even or second field line data is indicated by dashed lines, and a indicates units for motion compensation in the frame prediction mode.
  • In the frame prediction mode, motion compensation is performed with the four luminance blocks of a macro-block as a unit and a single motion vector is associated with the four luminance blocks Y1 to Y4.
  • In the field prediction mode, the prediction mode changeover circuit 52 processes the signals received from the motion vector detection circuit 50 so that each of the four luminance blocks comprises data from a single field and the two color difference blocks have non-interlaced odd and even field data.
  • the luminance blocks Y 1 and Y 2 have odd-field data and the luminance blocks Y 3 and Y 4 have even-field data
  • the upper halves of the color difference blocks Cb, Cr represent odd field color difference data for the luminance blocks Y 1 and Y 2
  • the lower halves of the color difference blocks Cb, Cr represent even field color difference data for the luminance blocks Y 3 and Y 4 .
  • In FIG. 7, b indicates units for motion compensation in the field prediction mode.
  • In the field prediction mode, motion compensation is performed separately for the odd-field blocks and the even-field blocks, so that one motion vector is associated with the two luminance blocks Y1 and Y2 and another motion vector is associated with the two luminance blocks Y3 and Y4.
  • the prediction mode changeover circuit 52 supplies the current picture, as processed for frame based or field based predictive encoding, to arithmetic unit 53 of FIG. 6 .
  • the arithmetic unit 53 functions to perform one of intra-picture prediction, forward prediction, backward prediction or bi-directional prediction.
  • a prediction decision circuit 54 is adapted to select the best type of prediction in dependence upon the prediction error signals associated with the current picture signals.
  • To generate an estimated value for intra-coding, the motion vector detection circuit 50 calculates, for the current picture, the sum of absolute values of the differences between each pixel signal Aij of a macro-block and the average value of the Aij in that macro-block, Σ|Aij − mean(Aij)|.
  • To generate an inter-picture prediction error signal, the motion vector detection circuit 50 calculates the sum of absolute values (or sum of squares) of the difference (Aij − Bij) between signals Aij of the macro-blocks of the current picture and signals Bij of the macro-blocks of the prediction picture, Σ|Aij − Bij|. The motion vector(s) for the current picture are applied to the reference picture(s) to generate the prediction picture.
  • When the prediction picture is a temporally preceding reference picture, as motion-compensated, this sum is referred to as a forward prediction error signal; when the prediction picture is a temporally succeeding reference picture, as motion-compensated, this sum is referred to as a backward prediction error signal; and when the prediction picture is the mean of a temporally preceding and a temporally succeeding reference picture, as motion-compensated, this sum is referred to as a bi-directional prediction error signal.
  • the circuit 50 supplies the forward frame prediction, the forward field prediction, the backward frame prediction, the backward field prediction, the bi-directional frame prediction and the bi-directional field prediction error signals to the prediction decision circuit 54 .
  • the prediction decision circuit 54 selects one of intra-coding, forward inter-picture prediction, backward inter-picture prediction or bi-directional inter-picture prediction and one of frame and field prediction mode in accordance with the smallest of the estimated value for intra-coding and the forward frame, the forward field, the backward frame, the backward field, the bi-directional frame and the bi-directional field prediction error signals.
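In essence the prediction decision is a minimum search over the candidate error sums; a condensed sketch (the mode names and interface are illustrative, not the patent's):

```python
import numpy as np

def decide_prediction(macroblock: np.ndarray, candidates: dict) -> str:
    """Pick the mode with the smallest sum of absolute errors.

    'candidates' maps a mode name (e.g. 'forward_frame', 'backward_field',
    'bidirectional_frame', ...) to its motion-compensated prediction block.
    The intra estimate is the sum of absolute deviations from the block mean."""
    a = macroblock.astype(np.int32)
    scores = {"intra": int(np.abs(a - a.mean()).sum())}
    for mode, prediction in candidates.items():
        scores[mode] = int(np.abs(a - prediction.astype(np.int32)).sum())
    return min(scores, key=scores.get)
```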
  • the arithmetic unit 53 predictively encodes the current picture, as processed by the frame or field changeover circuit 52 , in accordance with the prediction mode selected by the prediction decision circuit 54 .
  • the motion vector detection circuit 50 serves to calculate and supply the motion vector(s) associated with the selected prediction mode to a variable length encoding circuit 58 and a motion compensation circuit 64 , explained later.
  • the sums of the absolute values of the inter-frame differences (prediction errors) on the macro-block basis are supplied from the motion vector detection circuit 50 to the prediction mode changeover circuit 52 and to the prediction decision circuit 54 , in the manner as described above.
  • the arithmetic unit 53 supplies predictively encoded data, also referred to as difference data, for the current picture to a DCT mode changeover circuit 55 which is adapted to process the current picture for one of frame based and field based orthogonal transformation.
  • the DCT changeover circuit 55 functions to compare the encoding efficiency when the DCT operations for the macro-blocks in a picture are performed with the odd field data alternating with the even field data, that is, for frame based orthogonal transformation, as shown in FIG. 8A , with the encoding efficiency when the DCT operations for the macro-blocks in a picture are performed with the odd field data separated from the even field data, that is, for field based orthogonal transformation, as shown in FIG. 8B .
  • the circuit functions to select the mode with the higher encoding efficiency.
  • the DCT mode changeover circuit 55 places the luminance macro-block data into interlaced form, as shown in FIG. 8A , and calculates the differences between the odd field line signals and even field line signals vertically adjacent to each other, and finds the sum of absolute values of the differences EFM, or the sum of squared values of the differences.
  • the DCT mode changeover circuit 55 places the luminance macro-block data into non-interlaced form, as shown in FIG. 8B , and calculates the differences between vertically adjacent odd field line signals and the differences between vertically adjacent even field line signals, and finds the sum of absolute values of the differences EFD, or the sum of squared values of the differences.
  • The DCT changeover circuit 55 compares the difference between the frame based and field based sums of the absolute values with a predetermined threshold and selects frame based DCT transformation if the difference EFM − EFD is less than the predetermined threshold.
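The frame/field DCT decision can be sketched as below, assuming a 16 × 16 luminance macro-block stored in interlaced (frame) order, with even rows belonging to the first field and odd rows to the second field; the threshold value is illustrative:

```python
import numpy as np

def choose_dct_mode(mb: np.ndarray, threshold: int = 0) -> str:
    """EFM: sum of |differences| between vertically adjacent lines in the
    interlaced (frame) configuration of FIG. 8A.
    EFD: sum of |differences| between vertically adjacent lines of the same
    field, i.e. the non-interlaced configuration of FIG. 8B.
    Frame DCT is selected when EFM - EFD is below the threshold."""
    m = mb.astype(np.int32)
    efm = int(np.abs(np.diff(m, axis=0)).sum())
    first_field, second_field = m[0::2, :], m[1::2, :]
    efd = int(np.abs(np.diff(first_field, axis=0)).sum()
              + np.abs(np.diff(second_field, axis=0)).sum())
    return "frame_dct" if (efm - efd) < threshold else "field_dct"
```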
  • the prediction mode changeover circuit 52 sets the mode which will give the least value of the sum of the absolute values of prediction errors, while the DCT mode changeover circuit 55 sets the mode which will give the optimum orthogonal transformation encoding efficiency.
  • When the frame DCT mode is selected, the DCT mode changeover circuit 55 functions to ensure that the four luminance blocks Y1 to Y4 and two color difference blocks Cb, Cr represent alternating or interlaced odd and even field lines, as shown in FIG. 8A.
  • When the field DCT mode is selected, the DCT mode changeover circuit 55 functions to ensure that each of the luminance blocks represents only one field, and that each of the color difference blocks has segregated or non-interlaced odd and even field lines, as shown in FIG. 8B.
  • the DCT mode changeover circuit 55 functions to output the data having the configuration associated with the selected DCT mode, and to output a DCT flag indicating the selected DCT mode to the variable length encoding circuit 58 and the motion compensation circuit 64 .
  • the DCT mode changeover circuit 55 supplies appropriately configured difference picture data to a DCT circuit 56 shown in FIG. 6 which is operative to orthogonally transform it using a discrete cosine transformation into DCT coefficients, and to supply the DCT coefficient data to a quantization circuit 57 that functions to quantize the coefficient data with quantization steps selected in accordance with the volume of data stored in a transmission buffer 59 and to supply quantized data to a variable length encoding circuit 58 .
  • the variable length encoding circuit 58 is also supplied with the quantization step or scale data from the quantization circuit 57 , prediction mode data from the prediction decision circuit 54 , that is data indicating which of the intrapicture prediction, forward prediction, backward prediction or bi-directional prediction is used, and motion vector data from the motion vector detection circuit 50 .
  • the encoding circuit 58 also receives prediction flag data from the prediction decision circuit 54 comprising a flag indicating which of the frame prediction mode or the field prediction mode is used, and prediction flag data from the DCT mode changeover circuit 55 comprising a flag indicating which of the frame DCT mode or the field DCT mode is used. This information is placed into the header portion of the encoded data stream.
  • variable length encoding circuit 58 serves to encode the quantized data and the header information using a variable length code such as a Huffman code, in accordance with the quantization step data supplied from the quantization circuit 57 , and to output the resulting data to a transmission buffer 59 .
  • a variable length code such as a Huffman code
  • the quantized data and quantization step are also supplied to a dequantization circuit 60 which serves to dequantize the quantized data using the quantization step, and to supply the recovered DCT coefficient data to an inverse DCT circuit 61 that functions to inverse transform the DCT coefficient data to produce recovered difference data and to supply the recovered difference data to an arithmetic unit 62 .
  • the arithmetic unit 62 combines the recovered difference data with a previously encoded and decoded reference picture, as motion compensated, to produce decoded data for a reconstructed picture which will be used as a reference picture and which is read into one of two frame memories 63 a, 63 b.
  • The memories 63 a, 63 b are adapted to read out the reference picture data stored therein to a motion compensation circuit 64 that uses the motion vectors from the motion vector detection circuit 50 to produce a prediction picture from the reference picture. Specifically, the circuit 64 uses the motion vector to alter the readout address of the reference picture from the memory 63 a or 63 b.
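The motion-compensated readout (shifting the read address of the stored reference picture by the motion vector) amounts to the following sketch, ignoring half-pel interpolation and picture-boundary handling:

```python
import numpy as np

def motion_compensate(reference: np.ndarray, row: int, col: int,
                      mv: tuple, size: int = 16) -> np.ndarray:
    """Read a size x size prediction block from the reference picture, with
    the readout address shifted by the motion vector mv = (dy, dx) in whole
    pixels relative to the current macro-block position (row, col)."""
    dy, dx = mv
    r, c = row + dy, col + dx
    return reference[r:r + size, c:c + size]
```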
  • the second frame B-picture data is processed by the motion vector detection circuit 50 .
  • the prediction decision circuit 54 selects the frame or field prediction mode, while setting the prediction mode to one of intra-frame prediction mode, forward prediction mode, backward prediction mode and bi-directional prediction mode in correspondence with the sum of absolute values of predictive errors by macro-block.
  • the frame memory 63 has its forward and backward prediction picture units 63 a, 63 b bank-exchanged as needed so that a picture stored in one of the units 63 a or 63 b can be outputted as either a forward or a backward prediction picture.
  • the motion compensation circuit 64 functions to supply the motion compensated data as a prediction picture to the arithmetic unit 62 and to the arithmetic unit 53 which subtracts the prediction picture from the P-picture or the B-picture currently being predictively encoded.
  • When an I-picture is being encoded, the prediction decision circuit 54 selects the intra-frame prediction mode and sets a switch 53 d of the arithmetic unit 53 to an input contact a. This causes the I-picture data to be inputted directly to the DCT mode changeover circuit 55. In this case, no prediction picture is expected from the motion compensation circuit 64.
  • the I-picture data is also supplied to the forward prediction picture unit 63 a.
  • the circuit 54 When the forward prediction mode is selected by the prediction decision circuit 54 , the circuit 54 also sets the switch 53 d to an input contact b which causes the arithmetic unit 53 a to subtract the prediction picture, produced by the motion compensation circuit 64 , from the picture read out from the memory 51 , for each macro-block on a pixel by pixel basis, to produce difference data.
  • the P-picture after encoding and local decoding, is supplied to one of the units 63 a, 63 b. For example, if the P-picture immediately follows an I-picture, then the P-picture is stored in the backward prediction picture unit 63 b.
  • the prediction picture is a reference I-picture or P-picture read out from the forward prediction picture unit 63 a of the frame memory 63 and motion-compensated by the motion compensation circuit 64 in accordance with the motion vector outputted from the motion vector detection circuit 50 . More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the forward prediction picture unit 63 a in an amount corresponding to the motion vector currently output by the motion vector detection circuit 50 .
  • the circuit 54 When the backward prediction mode is selected by the prediction decision circuit 54 , the circuit 54 also sets the switch 53 d to an input contact c which causes the arithmetic unit 53 b to subtract the prediction picture, produced by the motion compensation circuit 64 , from the picture read out from the memory 51 , on a pixel by pixel basis, to produce difference data.
  • the prediction picture is a P-picture read out from the backward prediction picture unit 63 b of the frame memory 63 and motion-compensated by the motion compensation circuit 64 in accordance with the motion vector outputted from the motion vector detection circuit 50 . More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the backward prediction picture unit 63 b in an amount corresponding to the motion vector currently output by the motion vector detection circuit 50 .
  • When the bi-directional prediction mode is selected by the prediction decision circuit 54, the circuit 54 also sets the switch 53 d to an input contact d which causes the arithmetic unit 53 c to subtract a prediction picture from the picture read out from the memory 51 , on a pixel by pixel basis, to produce difference data.
  • the prediction picture is the mean value of a forward prediction picture and a backward prediction picture.
  • the picture stored in the forward prediction picture unit 63 a, and the picture stored in the backward prediction picture unit 63 b, are read out and motion-compensated by the motion compensation circuit 64 in dependence upon the motion vectors outputted from the motion vector detection circuit 50 . More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the forward and backward prediction picture units 63 a, 63 b in an amount corresponding to the appropriate one of the motion vectors currently output by the motion vector detection circuit 50 .
  • the transmission buffer 59 temporarily stores the data supplied thereto, generates control data indicating the volume of data stored therein and supplies the control data to the quantization circuit 57 .
  • As the volume of data stored in the transmission buffer 59 increases toward an allowable upper limit, the control data from the transmission buffer 59 causes the quantization scale of the quantization circuit 57 to increase so as to decrease the volume of the quantized data.
  • Conversely, as the volume of data stored in the transmission buffer 59 decreases toward an allowable lower limit, the control data from the transmission buffer 59 causes the quantization scale of the quantization circuit 57 to decrease so as to increase the volume of the quantized data.
  • the transmission buffer 59 prevents the data supplied thereto from overflowing or underflowing its capacity.
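A simple way to picture this buffer-feedback rate control is to map buffer fullness to the quantization scale; the linear control law below is an illustrative assumption, not the patent's specification:

```python
def update_quantization_scale(buffer_bits: int, buffer_capacity: int,
                              min_scale: int = 1, max_scale: int = 31) -> int:
    """Fuller buffer -> larger quantization scale -> fewer quantized bits;
    emptier buffer -> smaller quantization scale -> more quantized bits."""
    fullness = buffer_bits / buffer_capacity            # 0.0 (empty) .. 1.0 (full)
    scale = round(min_scale + fullness * (max_scale - min_scale))
    return max(min_scale, min(max_scale, scale))

# A nearly full buffer drives the scale toward its maximum, coarsening quantization.
print(update_quantization_scale(buffer_bits=900_000, buffer_capacity=1_000_000))
```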
  • the data stored in the transmission buffer 59 are read out at a predetermined timing to an output terminal 69 and thence to a transmission channel for recording on, for example, the recording medium 3 .
  • the color difference blocks are similarly processed and transmitted using the motion vector which corresponds to the motion vector of the luminance block halved in both the vertical and horizontal directions.
  • FIG. 9 illustrates the decoder 31 shown in FIG. 4 .
  • the reproduced encoded picture data transmitted from the recording medium 3 is applied to a reception circuit, not shown, or to an input terminal 80 which applies the encoded picture data to a reception buffer 81 that serves to temporarily store the encoded picture data and to supply this data to a variable length decoding circuit 82 of a decoding circuit 90 .
  • variable length decoding circuit 82 functions to variable length decode the encoded data, to output the recovered motion vector, prediction mode data, prediction flags and DCT flags to the motion compensation circuit 87 , and to output the quantization step data and variable length decoded picture-data, including the predictive mode, the motion vector, the predictive flag, the DCT flag and the quantized picture data for each macro-block, to an inverse quantization circuit 83 .
  • the inverse quantization circuit 83 is adapted to dequantize the picture data supplied from the variable length decoding circuit 82 in accordance with the quantization step data supplied from the variable length decoding circuit 82 and to output the thus recovered coefficient data to an inverse transformation IDCT circuit 84 .
  • the IDCT circuit 84 is adapted to perform an inverse transformation on the recovered coefficient data to produce recovered difference data, and to supply the recovered difference data to an arithmetic unit 85 .
  • When the recovered data supplied from the IDCT circuit 84 represents a macro-block of an I-picture, the arithmetic unit 85 does not process the data and simply supplies it through an output terminal 91 to the format converting circuit 32 shown in FIG. 4 , and to a forward prediction picture unit 86 a of a frame memory 86 .
  • When the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a P-picture produced in the forward prediction mode, the reference picture data of the preceding frame is read out from the forward prediction picture unit 86 a and motion-compensated by a motion compensation circuit 87 in dependence upon the motion vector outputted from the variable length decoding circuit 82 to generate a prediction picture.
  • the motion compensation circuit 87 uses the motion vector to alter the read out address supplied to the memory 86 a.
  • the arithmetic unit 85 adds the prediction picture to the recovered difference data to produce a decoded or reconstructed picture which is stored in a backward prediction picture memory 86 b of the frame memory 86 .
  • the decoded P-picture is retained in the decoder 31 , and output after the next B-picture is decoded and output, so as to restore the pictures to the order in which they were supplied to the encoder 18 of FIG. 4 .
  • the decoded P-picture is directly stored in the backward prediction picture unit 86 b, without being output to the output terminal 91 by the arithmetic unit 85 .
  • When the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the intra-coding mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87 , a prediction picture is not generated.
  • When the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the forward prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87 , the data stored in the forward prediction picture unit 86 a of the frame memory 86 is read out and motion compensated by the motion compensation circuit 87 using the motion vector supplied from the variable length decoding circuit 82 to form the prediction picture.
  • the arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
  • When the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the backward prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87 , the data stored in the backward prediction picture unit 86 b is read out and motion compensated by the motion compensation circuit 87 using the motion vector supplied from the variable length decoding circuit 82 to form the prediction picture.
  • the arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
  • When the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the bi-directional prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87 , the data stored in both the forward and backward prediction picture memories 86 a, 86 b are read out and respectively motion compensated by the motion compensation circuit 87 using the motion vectors supplied from the variable length decoding circuit 82 , then averaged to form the prediction picture.
  • the arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
  • the recovered B-picture is supplied via the output terminal 91 to the format converting circuit 32 .
  • Since the B-picture is not utilized for generating a prediction picture for other pictures, it is not stored in the frame memory 86 .
  • After the B-picture has been decoded and output, picture data of the P-picture stored in the backward prediction picture unit 86 b is read out and supplied via the motion compensation circuit 87 to the arithmetic unit 85 . Motion compensation is not performed at this time.
  • the counterpart circuits to the prediction mode changeover circuit 52 and the DCT mode changeover circuit 55 in the encoder 18 of FIG. 6 are not shown in the decoder 31 .
  • The processing to be performed by these circuits, that is, the processing for restoring the configuration in which odd-field line signals and even-field line signals are separated from each other to the configuration in which odd and even field line signals alternate with each other, is performed by the motion compensation circuit 87 .
  • the processing of the luminance signals has been explained in the foregoing. As will be appreciated by one of ordinary skill in the art, the processing of the color difference signals is carried out in a similar manner.
  • the motion vector employed in such case is the motion vector for luminance signals which is halved in both the vertical and horizontal directions.
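The halving of the luminance motion vector for the color difference signals is simply (integer division shown for illustration):

```python
def chroma_motion_vector(luma_mv: tuple) -> tuple:
    """Color difference blocks are half-size in each direction, so the
    luminance motion vector is halved both vertically and horizontally."""
    dy, dx = luma_mv
    return dy // 2, dx // 2
```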
  • FIG. 10 shows the signal to noise ratio (SNR) for pictures transmitted using the above-described technique.
  • The technique shown in FIG. 10 takes advantage of the human visual sense characteristic of perceiving a series of changing picture quality as of higher quality than a series of unchanging picture quality, for a predetermined transmission rate.
  • transmission rate control is carried out by the quantizer 57 in order to attain the picture quality perceived as better.
  • FIG. 11 shows a configuration representing two codecs connected by an analog connection, namely, coder 201 , decoder 202 , coder 203 and decoder 204 , connected in series.
  • an analog video signal is supplied to an input terminal 200 as an input signal a.
  • the input terminal 200 functions to apply the analog video signal to an A/D converter 211 of coder 201 .
  • the converter 211 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to coding circuit 212 that serves to encode this signal as previously described to produce a coded digital video signal.
  • the coded digital video signal from coding circuit 212 of coder 201 is supplied to a decoding circuit 213 of decoder 202 which is adapted to decode the coded digital video signal and to apply the decoded video signal to D/A converter 214 .
  • The analog video signal produced by D/A converter 214 of decoder 202 is supplied as an output signal b to the coder 203 , which functions in a similar manner as the coder 201 .
  • the digital video signal produced by the coder 203 is supplied to decoder 204 which functions in a similar manner as the decoder 202 .
  • the decoder 204 delivers an analog video signal as an output signal c to a terminal 205 , which may transmit the signal c to another coder (not shown) and so on.
  • FIG. 12 shows the SNR of the output signals b, c shown in FIG. 11 .
  • the SNR of the output signal c is seen to be substantially worse than the SNR of the output signal b.
  • The deterioration in picture quality results from a mismatch between the picture type applied in the first codec and the picture type applied in the second codec. Namely, if a picture coded as a B-picture in the first coder/decoder pair is coded as, e.g., a P-picture in the second coder/decoder pair, a great deterioration of picture quality results, because the picture quality changes as a function of the picture type.
  • FIG. 13 shows a configuration representing two codecs connected by a digital connection, namely, coder 302 , decoder 303 , coder 304 and decoder 305 , connected in series.
  • An analog video signal is supplied to terminal 300 , which supplies the analog video signal as an input signal a to A/D converter 301 that serves to digitize the signal a, and to apply the digital signal to a digital interface 311 of coder 302 .
  • the digital interface 311 applies the signal supplied thereto to a coding circuit 312 which encodes or compresses the digital video data to an encoded digital video bit stream.
  • the encoded digital video signal from the coding circuit 312 is supplied to decoding circuit 313 of decoder 303 that decodes the signal supplied thereto, and applies the decoded signal to digital interface 314 .
  • the interface 314 functions to output the decoded signal as an output signal b.
  • the output signal b is supplied to coder 304 which functions in a similar manner as coder 302 to produce a coded signal that is applied to decoder 305 which functions in a similar manner as decoder 303 .
  • The digital signal output from the decoder 305 is supplied to a D/A converter that serves to convert the signal supplied thereto to an analog video signal and to supply the analog video signal as an output signal c to output terminal 307 .
  • FIG. 12 also generally represents the SNR of the output signals b, c shown in FIG. 13 .
  • an object of the present invention is to provide a method and apparatus for encoding and decoding picture signals which avoid the aforementioned disadvantages of the prior art.
  • Another object of the present invention is to provide a method and apparatus for transmitting and receiving picture signals in serial stages which minimizes the deterioration in picture quality at each stage.
  • Yet another object of the present invention is to match the type of coding applied to pictures of a picture signal in serial processing stages, each stage comprising coding and decoding.
  • apparatus and method for processing a digital picture signal operate by receiving a digital picture signal which has picture type data included in a data identification area of the digital picture signal and which indicates one of intrapicture coding, predictive coding and bidirectionally predictive coding for respective pictures represented by the digital picture signal.
  • the picture signal is encoded as a function of the picture type data to produce an encoded picture signal.
  • apparatus and method for processing an encoded digital picture signal operate by decoding the encoded digital picture signal so as to produce picture type data which represents the type of encoding of the encoded digital picture signal and to produce a decoded digital picture signal.
  • the picture type data is added to a data identification area of the decoded digital picture signal to produce an output signal.
  • FIGS. 1A and 1B are pictures illustrating inter-frame correlation
  • FIGS. 2A and 2B are diagrams illustrating types of pictures used in predictive encoding
  • FIG. 3 is a diagram illustrating how picture signals are converted into encoded data for transmission
  • FIG. 4 is a block diagram showing a conventional device for encoding and decoding picture signals
  • FIG. 5 is a diagram referred to in explaining the operation of the format converting circuit shown in FIG. 4 ;
  • FIG. 6 is a block diagram showing the encoder of the device shown in FIG. 4 ;
  • FIG. 7 is a chart referred to in explaining the predictive encoding operation of the encoder shown in FIG. 6 ;
  • FIG. 8 is a chart referred to in explaining the orthogonal transformation operation of the encoder shown in FIG. 6 ;
  • FIG. 9 is a block diagram showing the decoder of the device shown in FIG. 4 ;
  • FIG. 10 is a graph showing picture quality as a function of picture type in a transmitted signal
  • FIG. 11 is a block diagram showing two conventional video codecs connected in series using an analog connection
  • FIG. 12 is a graph showing picture quality of the signals output by the codecs of FIG. 11 ;
  • FIG. 13 is a block diagram showing two conventional video codecs connected in series using a digital connection
  • FIG. 14 is a block diagram showing two video codecs according to the present invention connected in series using an analog connection
  • FIG. 15 is a graph showing picture quality of the signals output by the codecs of FIG. 14 ;
  • FIG. 16 is a block diagram showing two video codecs according to the present invention connected in series using a digital connection
  • FIGS. 17A , 17 B and 17 C illustrate the data structure of a decoded digital video signal
  • FIGS. 18A , 18 B and 18 C are diagrams illustrating various encoding structures of groups of pictures
  • FIG. 19 is a block diagram showing a coder circuit according to the present invention.
  • FIG. 20 is a block diagram showing a decoding circuit according to the present invention.
  • the present invention comprises a picture type identifier which is included with the picture signal at all times, namely, when the signal is encoded and when the signal is decoded.
  • the picture type indicates one of intra-picture coding (an I-picture), predictive coding (a P-picture) and bi-directionally predictive coding (a B-picture).
  • An I-picture includes macro-blocks encoded by only intra-coding.
  • a P-picture includes macro-blocks encoded by intra-coding and/or macroblocks encoded by forward predictive coding.
  • a B-picture includes macro-blocks encoded by intra-coding and/or macroblocks encoded by forward predictive coding and/or macroblocks encoded by backward predictive coding and/or macroblocks encoded by bi-directionally predictive coding.
  • each picture of a picture signal is encoded as a function of a respective picture type, then appropriately decoded.
  • the decoded picture signal includes the respective picture types, preferably in the vertical blanking interval of each decoded picture.
  • the decoded picture signal may be further processed by, for example, dubbing.
  • the re-encoding is a function of the picture type included in the decoded picture signal.
  • the re-encoded signal includes the picture type.
  • Subsequent decoding of the re-encoded picture is a function of the picture type.
  • Each picture of the re-decoded picture signal includes its respective picture type.
  • the present invention matches the type of predictive coding applied to pictures in a picture signal by serially arranged coders which process the picture signal.
  • the present invention promotes optimum picture quality.
  • a picture, previously encoded as an I-picture, P-picture or B-picture is again encoded as an I-picture, P-picture or B-picture, respectively.
  • encoding of a picture, previously encoded as a B-picture, as an I-picture or a P-picture is prevented.
  • deterioration in signal quality after plural coding and decoding operations is minimized.
  • a first codec comprises coder 120 and decoder 121
  • a second codec comprises coder 122 and decoder 123 . It will be appreciated by one of ordinary skill in the art that additional codecs may be serially connected to those shown in FIG. 14 .
  • an analog video signal is supplied to an input terminal 100 as an input signal a
  • a picture type signal is supplied to an input terminal 108 .
  • the picture type indicates one of intra-picture coding (an I-picture), predictive coding (a P-picture) and bi-directionally predictive coding (a B-picture).
  • the input terminals 100 , 108 function to apply the analog video signal and the picture type signal, respectively, to an A/D converter 101 and a coding circuit 102 , respectively, of coder 120 .
  • the converter 101 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to the coding circuit 102 .
  • the coding circuit 102 serves to encode the digital video signal as a function of the picture type signal to produce a coded digital video signal which includes, for each encoded picture, its picture type as identified by the picture type signal. More specifically, if the picture type for a picture indicates intra-picture coding, then the coding circuit 102 codes the picture as an I-picture. If the picture type for a picture indicates predictive coding, then the coding circuit 102 codes the picture as a P-picture. If the picture type for a picture indicates bi-directionally predictive coding, then the coding circuit 102 codes the picture as a B-picture.
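In outline, the coding circuit's use of the supplied picture type is a dispatch on that type, so that a picture is always re-encoded as the same one of an I-, P- or B-picture; a hypothetical sketch with placeholder encoder functions (none of these names come from the patent):

```python
# Placeholder encoders standing in for the actual I-, P- and B-picture coding paths.
def encode_intra(picture):                return ("I", picture)
def encode_forward_predictive(picture):   return ("P", picture)
def encode_bidirectional(picture):        return ("B", picture)

def encode_picture(picture, picture_type: str):
    """Encode one picture strictly according to the supplied picture type."""
    dispatch = {"I": encode_intra, "P": encode_forward_predictive, "B": encode_bidirectional}
    if picture_type not in dispatch:
        raise ValueError(f"unknown picture type: {picture_type}")
    return dispatch[picture_type](picture)
```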
  • the coding unit 120 may alternatively have the structure shown in FIG. 6 , in which coding is performed without reference to an externally supplied picture type.
  • an externally supplied picture type means a picture type supplied from generally the same source as supplies the digital video signal, rather than from a separate source as is shown in FIG. 6 , namely, the picture type input device 65 .
  • the coded digital video signal from coding circuit 102 of FIG. 14 is supplied to a decoding circuit 103 of decoder 121 which is adapted to decode the coded digital video signal as a function of the picture type included in the encoded signal and to apply the decoded video signal to D/A converter 104 .
  • the decoding circuit 103 is further adapted to apply the picture type decoded from the coded digital video signal to a multiplexer 105 .
  • the multiplexer 105 is operative to multiplex the picture type information with the decoded video signal to produce a multiplexed analog video signal as an output signal b in which the picture type information is contained in the decoded video signal.
  • the multiplexer 105 inserts the picture type for a picture of the decoded video signal into the vertical blanking interval of the picture.
  • a picture may be either a frame or a field of the video signal.
  • the output signal b is supplied from the multiplexer 105 to a separating circuit 106 of the coder 122 .
  • the separating circuit is operative to separate or demultiplex the analog video signal and the picture type information from the output signal b, to supply the separated analog video signal to an A/D converter 107 , and to supply the separated picture type information to a coding circuit 108 .
  • the converter 107 is adapted to convert the separated analog video signal to a digital video signal, and to apply the digital video signal to the coding circuit 108 .
  • the coding circuit 108 serves to encode the digital video signal as a function of the separated picture type to produce a re-coded digital video signal which includes, for each re-encoded picture, its picture type as identified by the separated picture type signal.
  • the re-coded digital video signal from the coding circuit 108 is supplied to the decoder 123 , which operates in a similar manner as the decoder 121 .
  • Decoding circuit 110 of decoder 123 decodes the re-coded digital video signal to produce a re-decoded digital video signal and a corresponding picture type signal.
  • the re-decoded digital video signal is converted to an analog signal by D/A converter 109 , and applied to a multiplexer 111 which multiplexes the analog video signal with the picture type signal from decoding circuit 110 to produce a multiplexed analog video signal as an output signal c.
  • the multiplexer 111 inserts the picture type for a picture into the vertical blanking interval of the picture.
  • the multiplexer 111 applies its output signal c to an output terminal 119 .
  • the codecs of FIG. 14 process respective pictures of the video signals b and c in the same manner, that is, as the same one of an I-picture, a P-picture or a B-picture.
  • FIG. 15 shows the SNR of the output signals b, c shown in FIG. 14 .
  • the SNR of the output signal c is seen to be only slightly worse than the SNR of the output signal b.
  • FIG. 16 shows codecs according to the present invention having a serial digital connection therebetween.
  • a first codec comprises coder 142 and decoder 143
  • a second codec comprises coder 144 and decoder 145 .
  • an analog video signal is supplied to an input terminal 140 as an input signal a, and a picture type signal is supplied to an input terminal 148 .
  • the input terminals 140 , 148 function to apply the analog video signal and the picture type signal to an A/D converter 141 and a coding circuit 152 of coder 142 , respectively.
  • the converter 141 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to a digital interface 151 of coder 142 .
  • the coding circuit 152 serves to encode the digital video signal as a function of the picture type signal to produce a coded digital video signal which includes, for each encoded picture, its picture type as identified by the picture type signal. More specifically, if the picture type for a picture indicates intra-picture coding, then the coding circuit 152 codes the picture as an I-picture. If the picture type for a picture indicates predictive coding, then the coding circuit 152 codes the picture as a P-picture. If the picture type for a picture indicates bi-directionally predictive coding, then the coding circuit 152 codes the picture as a B-picture.
  • the coder 142 may alternatively have the structure shown in FIG. 6 , in which coding is performed without reference to an externally supplied picture type.
  • the coded digital video signal from coding circuit 152 is supplied to a decoding circuit 153 of decoder 143 which is adapted to decode the coded digital video signal as a function of the picture type included in the encoded signal and to apply the decoded video signal to a digital interface 154 .
  • the decoding circuit 153 is further adapted to apply the picture type decoded from the coded digital video signal to a multiplexer 155 .
  • the multiplexer 155 is operative to multiplex the picture type information with the decoded video signal to produce a multiplexed digital video signal as an output signal b in which the picture type information is contained in the decoded video signal.
  • multiplexer 155 multiplexes the picture type for a picture of the decoded video signal as a flag in the respective picture.
  • the picture type information is inserted (or multiplexed) into the decoded video signal at a location therein which precedes the actual video data that represents the video field or frame.
  • FIGS. 17A , 17 B and 17 C illustrate the data structure of the decoded (MPEG) video signal.
  • FIG. 17A illustrates the data structure of a decoded video signal having a serial digital interface format as specified in the standard SMPTE 259 (Society of Motion Picture & Television Engineers).
  • a frame consists of a first vertical blanking area VBK 1 , a first optional blanking area OBK 1 and a first active video area ACV 1 , which together constitute the first field, followed by a second vertical blanking area VBK 2 , a second optional blanking area OBK 2 and a second active video area ACV 2 , which together constitute the second field.
  • each of the vertical blanking areas consists of 9 horizontal scanning lines
  • each of the optional blanking areas consists of 10 horizontal scanning lines
  • the first active video area consists of 244 horizontal scanning lines
  • the second active video area consists of 243 horizontal scanning lines, for a total of 525 horizontal scanning lines for a single frame.
  • FIG. 17B illustrates the data structure of a horizontal scanning line.
  • a horizontal scanning line includes an end of active video (EAV) area followed by an ancillary (ANC) area, a start of active video (SAV) area and a video area.
  • the ANC area includes an ancillary data flag (ADF) area followed by a data identification (DID) area, an ancillary number data (DBN) area, an ancillary word data (DC) area, an ancillary data (ANC DATA) area, and a check sum (CS) area.
  • in FIG. 17B, DID denotes data identification, DBN denotes ancillary number data, DC denotes ancillary word data, ANC DATA denotes ancillary data, and CS denotes check sum.
  • the picture type information is inserted into the DID area of the ANC area of each of the horizontal scanning lines in the first and second vertical blanking areas (a minimal sketch of this insertion and of its reuse on re-encoding is given after this list).
  • the picture type information also may be inserted into the DID area of the ANC area of other horizontal scanning lines of the decoded video signal, although these DID areas may be used for transmitting other types of data.
  • the DID areas in horizontal lines of non-vertical blanking areas may include other formatting information.
  • the picture type information may identify the type of encoding of the picture (e.g., intra-picture coding, predictive coding, and bi-directionally predictive coding) in various ways.
  • FIGS. 18A to 18C illustrate one method in which the type of encoding is identified by the structure of the group of pictures (GOP).
  • the structure of a group of pictures may be identified by the minimum number of frames “M” between I and P pictures, between P and P pictures, and between I and I pictures, and the total number of frames “N” (pictures) in the group of pictures.
  • the output signal b (the multiplexed decoded video signal) is supplied from the multiplexer 155 to a separating circuit 156 of the coder 144 .
  • the separating circuit is operative to separate or demultiplex the digital video signal and the picture type data from the output signal b, to supply the separated digital video signal to a digital interface 157 , and to supply the separated picture type data to a coding circuit 158 .
  • the interface 157 is adapted to apply the separated digital video signal to the coding circuit 158 .
  • the coding circuit 158 serves to encode the separated digital video signal as a function of the separated picture type to produce a re-coded digital video signal which includes, for each re-encoded picture, its picture type as identified by the separated picture type signal.
  • the re-coded digital video signal from the coding circuit 158 is supplied to the decoder 145 , which operates in a similar manner as the decoder 143 .
  • Decoding circuit 160 of decoder 145 decodes the re-coded digital video signal to produce a re-decoded digital video signal and a corresponding picture type signal.
  • the re-decoded digital video signal is supplied to a digital interface 159 and thence to a multiplexer 161 which multiplexes the re-decoded digital video signal with the picture type signal from decoding circuit 160 to produce a multiplexed digital video signal.
  • the multiplexer 161 multiplexes the picture type signal as a flag (e.g., in the DID area of the ANC area) in the re-decoded digital video signal.
  • the multiplexer 161 supplies the multiplexed digital video signal to a D/A converter 146 which serves to convert the multiplexed digital video signal to an analog video signal also referred to as output signal c.
  • the converter 146 applies the output signal c to an output terminal 147 .
  • the codecs of FIG. 16 process respective pictures of the video signals b and c in the same manner, that is, as the same one of an I-picture, a P-picture or a B-picture. Consequently, the deterioration in picture quality at each codec is minimized even when the picture quality changes from picture to picture due to the type of predictive coding employed from picture to picture.
  • FIG. 19 shows the coding circuits of FIGS. 14 and 16 in more detail.
  • elements similar to those in FIG. 6 are indicated by the same reference numerals, and detailed explanations thereof are omitted.
  • a picture type signal is supplied to input terminal 70 which serves to supply the picture type signal to motion vector detector 450 , predictive judging circuit 454 and variable length coding circuit 458 .
  • the processing performed by elements 450 , 454 and 458 is similar to the processing performed by elements 50 , 54 and 58 of FIG. 6 , except that the elements of FIG. 19 perform in accordance with the picture type identified in the external picture type signal which indicates the picture type used in previous coding.
  • the variable length coding circuit 458 includes the picture type based on the external picture type signal as part of the header information.
  • FIG. 20 shows the decoding circuits of FIGS. 14 and 16 in more detail.
  • elements similar to those in FIG. 9 are indicated by the same reference numerals, and detailed explanations thereof are omitted.
  • Variable length decoding circuit 482 of FIG. 20 is similar to variable length decoding circuit 82 of FIG. 9 , except that circuit 482 applies the picture type separated from the encoded signal not only to motion compensator 487 , but also to output terminal 92 .
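The following sketch illustrates, in simplified form, the mechanism the codecs listed above rely on: the picture type recovered by a decoder is multiplexed into the ancillary data of a vertical-blanking line of the decoded picture, separated again by the next coder, and reused so that each picture is re-encoded with the same picture type. This is a minimal illustration only; the dictionary-based picture representation, the function names, and the DID code word value are assumptions, not the circuitry of FIGS. 14 and 16 or the SMPTE 259 code assignments.

```python
# Minimal sketch (illustrative only): carrying a picture-type flag with each
# decoded picture so that a downstream re-encoder can match the picture type.
I_PICTURE, P_PICTURE, B_PICTURE = "I", "P", "B"
PICTURE_TYPE_DID = 0xA0   # placeholder data-identification word, not a standard value

def multiplex_picture_type(decoded_picture, picture_type):
    """Insert the picture type into the ancillary data of a vertical-blanking line."""
    decoded_picture["vbi_ancillary"].append({"did": PICTURE_TYPE_DID, "data": picture_type})
    return decoded_picture

def separate_picture_type(multiplexed_picture):
    """Recover the picture type ahead of re-encoding; the video data is untouched."""
    for packet in multiplexed_picture["vbi_ancillary"]:
        if packet["did"] == PICTURE_TYPE_DID:
            return packet["data"]
    return None

# Usage: a picture once decoded from a B-picture is re-encoded as a B-picture,
# never as an I- or P-picture.
picture = {"video": "...", "vbi_ancillary": []}
multiplex_picture_type(picture, B_PICTURE)
assert separate_picture_type(picture) == B_PICTURE
```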


Abstract

A picture type identifier, indicating one of intra-picture coding (an I-picture), forward or backward predictive coding (a P-picture) and bi-directionally predictive coding (a B-picture), is included with a picture signal when the signal is encoded and when the signal is decoded. Each of initial and subsequent encoding and decoding is a function of the included picture type.

Description

Notice: More than one reissue application has been filed for the reissue of U.S. Pat. No. 5,715,009. The reissue applications are Ser. No. 10/497,026 and the instant continuation application and three applications filed concurrently herewith.
RELATED APPLICATION
This is a continuation-in-part of U.S. patent application Ser. No. 08/219,472, filed Mar. 29, 1994, now U.S. Pat. No. 5,473,380.
BACKGROUND OF THE INVENTION
The present invention relates to coding and decoding of a picture signal for transmission, and, more particularly, is directed to matching the type of predictive coding applied to pictures of the picture signal.
In, for example, a teleconferencing system or a video telephone system, moving picture signals are compressed and encoded by taking advantage of intra-frame and inter-frame correlation so that they can be more efficiently transmitted over a communication channel to a remote location.
Intra-frame correlation can be utilized by an orthogonal transformation, such as a discrete cosine transformation (DCT).
Inter-frame correlation can be utilized by predictive encoding between successive pictures. As used herein, a picture generally refers to an image represented by a frame. When the fields of a frame are coded in a non-interlaced manner, that is, separately, each field may be referred to as a picture.
As shown in FIG. 1A, for example, frame pictures PC1, PC2 and PC3 are generated at time points t1, t2 and t3. As shown by shading in FIG. 1B, the difference between the frame pictures PC1 and PC2 is obtained as difference picture data PC12, and the difference between the frame pictures PC2 and PC3 is obtained as difference picture data PC23. Since there is a fairly small change between signals of temporally neighboring frames, transmission of only the difference picture data utilizes the transmission channel more efficiently than transmission of the original pictures. That is, using the difference picture data as encoded picture signals reduces the amount of data to be transmitted.
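As a concrete illustration of the data reduction just described, the following sketch (an assumed NumPy example, not part of the specification) forms difference picture data between two frames and restores the later frame from the reference frame and the difference.

```python
# Minimal sketch of difference picture data: most entries of the difference are
# zero for temporally neighboring frames, so far less data needs to be sent,
# but the original can only be restored if a non-predicted reference is available.
import numpy as np

pc1 = np.random.randint(0, 256, (480, 720), dtype=np.int16)   # frame at time t1
pc2 = pc1.copy()
pc2[100:110, 200:210] += 5                                     # small change at time t2

pc12 = pc2 - pc1                 # difference picture data PC12
restored = pc1 + pc12            # the receiver restores PC2 from PC1 and PC12
assert np.array_equal(restored, pc2)
```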
However, if only the difference signals are transmitted, the original picture cannot be restored. It is therefore advantageous to occasionally transmit a picture which is not predictively encoded, both so that it can serve as a reference for difference picture data and because doing so is sometimes more efficient than transmitting the picture as a predictively encoded picture.
Pictures which are encoded utilizing only intra-frame correlation and not inter-frame correlation, are referred to herein as intra-pictures or I-pictures.
Pictures which are encoded with predictive encoding relative to one previously encoded picture are referred to herein as predictive pictures or P-pictures. The previously encoded picture may be an I-picture or a P-picture, and temporally precedes the P-picture.
Pictures which are encoded with predictive encoding relative to at most two pictures, a temporally preceding and a temporally succeeding picture, are referred to herein as bi-directionally predictive coded pictures or B-pictures. The two pictures may each be an I-picture or a P-picture. When both are used, the mean value of the two pictures is obtained and used as a reference picture for the picture to be encoded.
A series of pictures may be considered as groups of pictures having a predetermined number of frames such as F1 . . . F17. The luminance and chrominance picture signals of the leading frame F1 are encoded as an I-picture, the picture signals of the second frame F2 are encoded as a B-picture, and the picture signals of the third frame F3 are encoded as a P-picture. The fourth and the following frames F4 to F17 are encoded alternately as B-pictures and P-pictures. FIG. 2A shows the reference pictures used for encoding P-pictures, while FIG. 2B shows the reference pictures used for encoding B-pictures.
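A minimal sketch of the picture-type sequence just described follows: I for the leading frame, then alternating B- and P-pictures. The group length of 17 frames simply mirrors the F1 to F17 example; the function is illustrative and is not part of the encoder.

```python
# Minimal sketch: assigning picture types over a group of pictures in display order.
def gop_picture_types(n_frames=17):
    types = []
    for k in range(1, n_frames + 1):
        if k == 1:
            types.append("I")      # leading frame F1 is an I-picture
        elif k % 2 == 0:
            types.append("B")      # even positions F2, F4, ... are B-pictures
        else:
            types.append("P")      # odd positions F3, F5, ... are P-pictures
    return types

print(gop_picture_types())         # ['I', 'B', 'P', 'B', 'P', ...]
```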
As shown in FIGS. 3A and 3B, there are four methods for encoding the macro-blocks (discussed below) of a picture. When multiple methods are suitable, the method which will give the smallest amount of encoded data is employed on a macro-block by macro-block basis. Blocks F1 to F5 in FIG. 3A represent data for frames of moving picture signals, whereas blocks F1X to F5X in FIG. 3B represent data for encoded frames. The solid line arrows in FIG. 3A show the frames to which motion vectors x1 . . . x6 relate.
The first method, shown as SP1, is to not use predictive encoding, that is, to use only intra-frame correlation. This is suitable for any macro-blocks of an I-picture, a P-picture and a B-picture. In other words, if less encoded data is produced without predictive encoding, then this method is selected.
The second method, shown as SP2, is to predictively encode relative to a picture which temporally succeeds the current picture, referred to as backward predictive encoding. The third method, shown as SP3, is to predictively encode relative to a picture which temporally precedes the current picture, referred to as forward predictive encoding. The second method is suitable for macro-blocks of only B-pictures. The third method is suitable for macro-blocks of P-pictures and B-pictures.
The fourth method, shown as SP4, is to predictively encode relative to the mean value of two pictures, one temporally preceding and one temporally succeeding the current picture. This method is suitable for macro-blocks of only B-pictures.
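The selection among the four methods can be pictured as follows. This sketch is an assumption: the cost values stand in for the "amount of encoded data" each method would produce, and the method names are illustrative labels rather than terms from the specification.

```python
# Minimal sketch: choose, per macro-block, the cheapest coding method among
# those permitted for the current picture type.
ALLOWED = {
    "I": ["intra"],
    "P": ["intra", "forward"],
    "B": ["intra", "forward", "backward", "bidirectional"],
}

def choose_method(picture_type, costs):
    """costs: dict mapping method name -> estimated amount of encoded data."""
    candidates = {m: costs[m] for m in ALLOWED[picture_type] if m in costs}
    return min(candidates, key=candidates.get)

# Example: for a B-picture macro-block, pick whichever permitted method is cheapest.
print(choose_method("B", {"intra": 900, "forward": 400,
                          "backward": 450, "bidirectional": 350}))
```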
The encoding sequence will now be described.
The first frame F1 is encoded as an I-picture using the first method SP1 so that it is directly transmitted over a transmission channel as encoded data F1X.
The third frame F3 is encoded as a P-picture. When the third method SP3, forward predictive coding, is used for a macro-block, difference signals from the temporally preceding frame F1, used as the reference picture as indicated by a broken-line arrow SP3, and a motion vector x3 between the reference picture F1 and the current picture F3 are calculated and encoded as data F3X for that macro-block. Alternatively, if the first method SP1 produces a smaller amount of encoded data for a macro-block of the P-picture being encoded, the data of the original frame F3 are directly utilized as the transmission data F3X for that macro-block.
The second frame F2 is encoded as a B-picture.
When the fourth method SP4 is used to encode a macro-block of the frame F2, the difference between the current frame F2 and the mean value of the temporally preceding frame F1 and the temporally succeeding frame F3 is calculated on a pixel by pixel basis. The difference data and the motion vectors x1 and x2 are encoded as data F2X.
When the first processing method SP1 is used to encode a macro-block of the frame F2, the data of the original frame F2 forms the encoded data F2X.
When one of the second or third methods SP2, SP3 is used to encode a macro-block of the frame F2, one of the difference between the temporally succeeding frame F3 and the current frame F2, and the difference between the temporally preceding frame F1 and the current frame F2 is calculated. The difference data and one of the motion vectors x1, x2 are encoded as the data F2X.
The frame F4 for the B-picture and the frame F5 for the P-picture are processed in a similar manner as described above to generate transmitted data F4X and F5X.
FIG. 4 illustrates an arrangement for encoding and decoding moving picture signals in accordance with the above-described predictive encoding scheme. As shown in FIG. 4, an encoding device 1 encodes input picture signals and transmits the encoded signals to a recording medium 3 as a transmission channel for recording. A decoding device 2 reproduces the signals recorded on the recording medium 3 and decodes these as output signals.
The encoding device 1 includes an input terminal 10, a pre-processing circuit 11, A/D converters 12 and 13, a frame memory 14 including a luminance signal frame memory 15 and a color difference signal frame memory 16, a format converting circuit 17 and an encoder 18.
Input terminal 10 is adapted to receive a video signal VD and to supply the signal VD to pre-processing circuit 11 which functions to separate the video signal VD into luminance signals and color signals, herein chrominance or color difference signals, that are applied to analog-to-digital (A/D) converters 12 and 13, respectively. The video signals, digitized by analog-to-digital conversion by the A/D converters 12 and 13, are supplied to frame memory 14 having memories 15, 16 which function to store the luminance signals and the color difference signals, respectively, and to read out the signals stored therein to format converting circuit 17.
The converter 17 is operative to convert frame format signals stored in the frame memory section 14 into block format signals. As shown in FIG. 5A, pictures are stored in the frame memory section 14 as frame-format data having V lines each consisting of H dots. The converting circuit 17 divides each frame into N slices, each slice comprising a multiple of 16 lines. As shown in FIG. 5B, the converter 17 divides each slice into M macro-blocks. As shown in FIG. 5C, each macro-block represents luminance signals Y corresponding to 16×16 pixels or dots, and associated chrominance Cr, Cb signals. These luminance signals are sub-divided into blocks Y1 to Y4, each consisting of 8×8 dots. The 16×16 dot luminance signals are associated with 8×8 dot Cb signals and 8×8 dot Cr signals. The converter 17 is also operative to supply the block format signals to the encoder 18, which is described in detail below with reference to FIG. 6.
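The following sketch illustrates the luminance side of this frame-to-block conversion: a V by H frame is cut into 16-line slices and 16×16 macro-blocks, each split into four 8×8 blocks Y1 to Y4. Chrominance handling and the exact memory layout of circuit 17 are omitted; the dimensions and function name are illustrative assumptions.

```python
# Minimal sketch of frame-format to block-format conversion for luminance data.
import numpy as np

def to_macroblocks(luma):                     # luma: (V, H) array, V and H multiples of 16
    v, h = luma.shape
    macroblocks = []
    for top in range(0, v, 16):               # one slice per 16 lines
        for left in range(0, h, 16):
            mb = luma[top:top + 16, left:left + 16]
            y1, y2 = mb[:8, :8], mb[:8, 8:]   # four 8x8 luminance blocks
            y3, y4 = mb[8:, :8], mb[8:, 8:]
            macroblocks.append((y1, y2, y3, y4))
    return macroblocks

frame = np.zeros((480, 720), dtype=np.uint8)
print(len(to_macroblocks(frame)))             # 30 slices x 45 macro-blocks = 1350
```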
The encoder 18 operates to encode the block format signals and to supply the encoded signals as a bitstream over a transmission channel for recording on the recording medium 3.
The decoding device 2 includes a decoder 31, a format converting circuit 32, a frame memory section 33 including a luminance signal frame memory 34 and a color difference signal frame memory 35, digital-to-analog converters 36 and 37, a post-processing circuit 38 and an output terminal 30.
The decoder 31 is operative to reproduce encoded data from the recording medium 3 and to decode the encoded data, as described in detail below with reference to FIG. 9, and to supply decoded data signals to format converting circuit 32 which is operative to convert the decoded data signals into frame format data signals and to supply the frame format data signals as luminance signals and color difference signals to the memory 33. The memories 34, 35 of the memory 33 function to store the luminance and chrominance signals, respectively, and to apply these signals to D/A converters 36 and 37, respectively. The analog signals from converters 36, 37 are synthesized by a post-processing circuit 38 which functions to form output picture signals and to output them to output terminal 30, and thence to a display unit, such as a CRT, not shown, for display.
FIG. 6 illustrates the encoder 18 shown in FIG. 4.
Generally, the encoder 18 stores three pictures, the current picture and the pictures temporally preceding and succeeding the current picture. Based on the sequential position of the current picture in the group of pictures, the picture coding type (I, P or B) is selected for each picture. The picture type sequence is determined by a user using picture type input device 65, independent of the pictures applied to an input terminal 49.
The encoder 18 also chooses one of frame-based and field-based predictive encoding as will be explained with reference to FIG. 7, and further chooses one of frame-based and field-based DCT encoding as will be explained with reference to FIG. 8. For each picture, appropriate motion vectors are obtained and the picture is predictively encoded relative to zero, one or two previously encoded pictures which have been locally decoded and which are referred to as reference pictures to form a difference data signal. The difference data signal is orthogonally transformed into blocks of coefficient data which are quantized, variable length encoded and transmitted as encoded data.
At the encoder 18, the quantized data are dequantized, inverse orthogonally transformed, and stored as the reference pictures. The predictive encoding applies the motion vector(s) obtained for the current picture to the reference picture(s) to produce a prediction picture which is subtracted from the current picture to yield the difference data.
The elements of the encoder 18 will now be explained in detail.
Picture data for encoding is supplied macro-block by macro-block to the input terminal 49 and thence to a motion vector detection circuit 50 which is operative to process the picture data of respective frames as I-pictures, P-pictures or as B-pictures, in accordance with a predetermined sequence for each group of pictures, as shown for example, in FIGS. 2A, 2B. The circuit 50 applies the picture data of the current frame to a frame memory 51 having frame memories 51a, 51b, 51c used for storing a temporally preceding picture, the current picture and a temporally succeeding picture, respectively.
More specifically, the frames F1, F2, F3 are stored in the memories 51a, 51b, 51c, respectively. Then the picture stored in memory 51c is transferred to memory 51a. The frames F4, F5 are stored in the memories 51b, 51c, respectively. The operations of transferring the picture in memory 51c to memory 51a and storing the next two pictures in memories 51b, 51c are repeated for the remaining pictures in the group of pictures.
After the pictures are read into the memory and temporarily stored, they are read out and supplied to a prediction mode changeover circuit 52 which is adapted to process the current picture for one of frame based and field based predictive encoding. After processing the first frame picture data in a group of pictures as an I-picture and before processing the second frame picture as a B-picture, the motion vector detection circuit 50 processes the third frame P-picture. The processing sequence is different from the sequence in which the pictures are supplied because the B-picture may involve backward prediction, so subsequent decoding may require that the P-picture temporally succeeding the B-picture have been previously decoded.
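The reordering just described can be pictured with the following sketch: a B-picture can only be decoded after the temporally succeeding reference picture, so the coder processes that reference first. The I/B/P pattern follows the F1 to F5 example; the list-based implementation is illustrative and is not the circuitry of FIG. 6.

```python
# Minimal sketch: convert display order to coding order by holding B-pictures
# back until the reference picture they depend on has been emitted.
def coding_order(frames, types):
    """frames: display-order list; types: matching list of 'I', 'P', 'B'."""
    order, pending_b = [], []
    for frame, t in zip(frames, types):
        if t == "B":
            pending_b.append(frame)        # hold B-pictures back
        else:
            order.append(frame)            # emit the reference first...
            order.extend(pending_b)        # ...then the B-pictures that depend on it
            pending_b = []
    return order + pending_b

print(coding_order(["F1", "F2", "F3", "F4", "F5"], ["I", "B", "P", "B", "P"]))
# -> ['F1', 'F3', 'F2', 'F5', 'F4']
```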
For each macro-block, the motion vector detection circuit 50 calculates the sum of absolute values of prediction errors for the frame prediction mode and the sum of absolute values of prediction errors for the field prediction mode, and supplies these sums to the prediction decision circuit 54, which compares them, selects the frame prediction mode or the field prediction mode in accordance with the smaller value, and provides the selected mode to the prediction mode changeover circuit 52.
If the frame prediction mode is selected, the prediction mode changeover circuit 52 outputs the four luminance blocks Y1 to Y4 and the two chrominance or color difference blocks Cb, Cr of each macro-block received from the motion vector detection circuit 50 without processing. As shown in FIG. 7A, odd or first field line data, indicated by solid lines, and even or second field line data, indicated by dashed lines, alternate in each luminance and color difference block as received from the motion vector detection circuit 50. In FIG. 7A, the label a indicates the unit for motion compensation. In the frame prediction mode, motion compensation is performed with the four luminance blocks of a macro-block as a unit, and a single motion vector is associated with the four luminance blocks Y1 to Y4.
If the field prediction mode is selected, the prediction mode changeover circuit 52 processes the signals received from the motion vector detection circuit 50 so that each of the four luminance blocks comprises data from a single field and the two color difference blocks have non-interlaced odd and even field data. Specifically, as shown in FIG. 7B, the luminance blocks Y1 and Y2 have odd-field data and the luminance blocks Y3 and Y4 have even-field data, while the upper halves of the color difference blocks Cb, Cr represent odd field color difference data for the luminance blocks Y1 and Y2 and the lower halves of the color difference blocks Cb, Cr represent even field color difference data for the luminance blocks Y3 and Y4. In FIG. 7B, the label b indicates the units for motion compensation. In the field prediction mode, motion compensation is performed separately for the odd-field blocks and even-field blocks so that one motion vector is associated with the two luminance blocks Y1 and Y2 and another motion vector is associated with the two luminance blocks Y3 and Y4.
The prediction mode changeover circuit 52 supplies the current picture, as processed for frame based or field based predictive encoding, to arithmetic unit 53 of FIG. 6. The arithmetic unit 53 functions to perform one of intra-picture prediction, forward prediction, backward prediction or bi-directional prediction. A prediction decision circuit 54 is adapted to select the best type of prediction in dependence upon the prediction error signals associated with the current picture signals.
The motion vector detection circuit 50 calculates, for the current picture, the sum of absolute values of the differences between each Aij and the average value of the Aij in each macro-block Σ|Aij−(average of Aij)| and supplies the sum as an estimated value for intra-coding to the prediction decision circuit 54.
The motion vector detection circuit 50 calculates the sum of absolute values (or sum of squares) of the difference (Aij−Bij) between signals Aij of the macro-blocks of the current picture, and signals Bij of the macro-blocks of the prediction picture Σ|Aij−Bij| in each of frame prediction mode and field prediction mode. As explained above, the motion vector(s) for the current picture are applied to the reference picture(s) to generate the prediction picture. When the reference picture temporally precedes the current picture, the quantity Σ|Aij−Bij| is referred to as a forward prediction error signal, and when the reference picture temporally succeeds the current picture, the quantity Σ|Aij−Bij| is referred to as a backward prediction error signal. When the prediction picture is the mean of a temporally preceding and a temporally succeeding reference picture, as motion-compensated, the quantity Σ|Aij−Bij| is referred to as a bi-directional prediction error signal.
The circuit 50 supplies the forward frame prediction, the forward field prediction, the backward frame prediction, the backward field prediction, the bi-directional frame prediction and the bi-directional field prediction error signals to the prediction decision circuit 54.
The prediction decision circuit 54 selects one of intra-coding, forward inter-picture prediction, backward inter-picture prediction or bi-directional inter-picture prediction and one of frame and field prediction mode in accordance with the smallest of the estimated value for intra-coding and the forward frame, the forward field, the backward frame, the backward field, the bi-directional frame and the bi-directional field prediction error signals. The arithmetic unit 53 predictively encodes the current picture, as processed by the frame or field changeover circuit 52, in accordance with the prediction mode selected by the prediction decision circuit 54.
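The decision rule of circuit 54 can be summarized by the following sketch: the smallest of the intra-coding estimate and the six prediction error sums determines both the prediction type and the frame/field prediction mode. The error values and dictionary keys are illustrative assumptions, and no tie-breaking behavior is implied by the text.

```python
# Minimal sketch of the prediction decision: pick the mode with the smallest
# sum of absolute errors.
def decide_prediction(intra_estimate, errors):
    """errors: dict keyed by (prediction, 'frame'|'field') -> sum of absolute errors."""
    best_key, best_val = ("intra", None), intra_estimate
    for key, val in errors.items():
        if val < best_val:
            best_key, best_val = key, val
    return best_key

errors = {("forward", "frame"): 5200, ("forward", "field"): 4800,
          ("backward", "frame"): 6100, ("backward", "field"): 5900,
          ("bidirectional", "frame"): 4300, ("bidirectional", "field"): 4500}
print(decide_prediction(7000, errors))   # -> ('bidirectional', 'frame')
```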
The motion vector detection circuit 50 serves to calculate and supply the motion vector(s) associated with the selected prediction mode to a variable length encoding circuit 58 and a motion compensation circuit 64, explained later.
The sums of the absolute values of the inter-frame differences (prediction errors) on the macro-block basis are supplied from the motion vector detection circuit 50 to the prediction mode changeover circuit 52 and to the prediction decision circuit 54, in the manner as described above.
The arithmetic unit 53 supplies predictively encoded data, also referred to as difference data, for the current picture to a DCT mode changeover circuit 55 which is adapted to process the current picture for one of frame based and field based orthogonal transformation.
The DCT changeover circuit 55 functions to compare the encoding efficiency when the DCT operations for the macro-blocks in a picture are performed with the odd field data alternating with the even field data, that is, for frame based orthogonal transformation, as shown in FIG. 8A, with the encoding efficiency when the DCT operations for the macro-blocks in a picture are performed with the odd field data separated from the even field data, that is, for field based orthogonal transformation, as shown in FIG. 8B. The circuit functions to select the mode with the higher encoding efficiency.
To evaluate the encoding efficiency for frame based orthogonal transformation, the DCT mode changeover circuit 55 places the luminance macro-block data into interlaced form, as shown in FIG. 8A, and calculates the differences between the odd field line signals and even field line signals vertically adjacent to each other, and finds the sum of absolute values of the differences EFM, or the sum of squared values of the differences.
EFM = Σ_{j=1}^{16} Σ_{i=1}^{16} |o(i,j) − e(i,j)| + Σ_{j=1}^{16} Σ_{i=1}^{16} |e(i,j) − o(i+1,j)|    (Eq. 1)
To evaluate the encoding efficiency for field based orthogonal transformation, the DCT mode changeover circuit 55 places the luminance macro-block data into non-interlaced form, as shown in FIG. 8B, and calculates the differences between vertically adjacent odd field line signals and the differences between vertically adjacent even field line signals, and finds the sum of absolute values of the differences EFD, or the sum of squared values of the differences.
EFD = Σ_{j=1}^{16} Σ_{i=1}^{15} ( |o(i,j) − o(i+1,j)| + |e(i,j) − e(i+1,j)| )    (Eq. 2)
The DCT changeover circuit 55 compares the difference between the frame based and field based sums of the absolute values with a predetermined threshold and selects frame based DCT transformation if the difference EFM−EFD is less than the predetermined threshold.
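A compact sketch of this comparison follows. It treats o and e as the eight odd and eight even field lines of a 16×16 luminance macro-block, and the threshold value is an assumed placeholder; the sketch illustrates the EFM versus EFD comparison rather than reproducing circuit 55.

```python
# Minimal sketch of the frame/field DCT decision: EFM measures adjacent-line
# differences in the interlaced (frame) arrangement, EFD in the field-separated
# arrangement, and frame DCT is chosen when EFM - EFD is below a threshold.
import numpy as np

def dct_mode(mb, threshold=50):                          # mb: (16, 16) interlaced macro-block
    o, e = mb[0::2].astype(int), mb[1::2].astype(int)    # odd / even field lines
    efm = np.abs(o - e).sum() + np.abs(e[:-1] - o[1:]).sum()             # cf. Eq. 1
    efd = np.abs(o[:-1] - o[1:]).sum() + np.abs(e[:-1] - e[1:]).sum()    # cf. Eq. 2
    return "frame DCT" if efm - efd < threshold else "field DCT"

mb = np.random.randint(0, 256, (16, 16), dtype=np.uint8)
print(dct_mode(mb))
```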
If the frame prediction mode is selected in the prediction mode changeover circuit 52, the probability is high that the frame DCT mode will be selected in the DCT mode changeover circuit 55. If the field prediction mode is selected in the prediction mode changeover circuit 52, the probability is high that the field DCT mode will be selected in the DCT mode changeover circuit 55. However, since this is not necessarily the case, the prediction mode changeover circuit 52 sets the mode which will give the least value of the sum of the absolute values of prediction errors, while the DCT mode changeover circuit 55 sets the mode which will give the optimum orthogonal transformation encoding efficiency.
If frame based orthogonal transformation mode, also referred to as frame DCT mode, is selected, the DCT mode changeover circuit 55 functions to ensure that the four luminance blocks Y1 to Y4 and two color difference blocks Cb, Cr represent alternating or interlaced odd and even field lines, as shown in FIG. 8A.
If field based orthogonal transformation mode, also referred to as field DCT mode, is selected, the DCT mode changeover circuit 55 functions to ensure that each of the luminance blocks represents only one field, and that each of the color difference blocks has segregated or non-interlaced odd and even field lines, as shown in FIG. 8B.
The DCT mode changeover circuit 55 functions to output the data having the configuration associated with the selected DCT mode, and to output a DCT flag indicating the selected DCT mode to the variable length encoding circuit 58 and the motion compensation circuit 64.
The DCT mode changeover circuit 55 supplies appropriately configured difference picture data to a DCT circuit 56 shown in FIG. 6 which is operative to orthogonally transform it using a discrete cosine transformation into DCT coefficients, and to supply the DCT coefficient data to a quantization circuit 57 that functions to quantize the coefficient data with quantization steps selected in accordance with the volume of data stored in a transmission buffer 59 and to supply quantized data to a variable length encoding circuit 58.
The variable length encoding circuit 58 is also supplied with the quantization step or scale data from the quantization circuit 57, prediction mode data from the prediction decision circuit 54, that is, data indicating which of the intra-picture prediction, forward prediction, backward prediction or bi-directional prediction is used, and motion vector data from the motion vector detection circuit 50. The encoding circuit 58 also receives prediction flag data from the prediction decision circuit 54 comprising a flag indicating which of the frame prediction mode or the field prediction mode is used, and DCT flag data from the DCT mode changeover circuit 55 comprising a flag indicating which of the frame DCT mode or the field DCT mode is used. This information is placed into the header portion of the encoded data stream.
The variable length encoding circuit 58 serves to encode the quantized data and the header information using a variable length code such as a Huffman code, in accordance with the quantization step data supplied from the quantization circuit 57, and to output the resulting data to a transmission buffer 59.
The quantized data and quantization step are also supplied to a dequantization circuit 60 which serves to dequantize the quantized data using the quantization step, and to supply the recovered DCT coefficient data to an inverse DCT circuit 61 that functions to inverse transform the DCT coefficient data to produce recovered difference data and to supply the recovered difference data to an arithmetic unit 62.
The arithmetic unit 62 combines the recovered difference data with a previously encoded and decoded reference picture, as motion compensated, to produce decoded data for a reconstructed picture which will be used as a reference picture and which is read into one of two frame memories 63a, 63b. The memories 63a, 63b are adapted to read out the reference picture data stored therein to a motion compensation circuit 64 that uses the motion vectors from the motion vector detection circuit 50 to produce a prediction picture from the reference picture. Specifically, the motion compensation circuit 64 uses the motion vector to alter the readout address of the reference picture from the memory 63a or 63b.
For a group of pictures, after the first frame I-picture data and the third frame P-picture data are stored in the forward and backward prediction picture memories or units 63a, 63b, respectively, the second frame B-picture data is processed by the motion vector detection circuit 50. The prediction decision circuit 54 selects the frame or field prediction mode, while setting the prediction mode to one of intra-frame prediction mode, forward prediction mode, backward prediction mode and bi-directional prediction mode in correspondence with the sum of absolute values of predictive errors by macro-block.
Since a reconstructed B-picture is not used as a reference picture for other pictures, it is not stored in the frame memory 63.
It will be appreciated that the frame memory 63 has its forward and backward prediction picture units 63a, 63b bank-exchanged as needed so that a picture stored in one of the units 63a or 63b can be outputted as either a forward or a backward prediction picture.
The motion compensation circuit 64 functions to supply the motion compensated data as a prediction picture to the arithmetic unit 62 and to the arithmetic unit 53 which subtracts the prediction picture from the P-picture or the B-picture currently being predictively encoded.
More specifically, when the motion vector detection circuit 50 receives picture data for an I-picture from the forward original picture unit 51a, the prediction decision circuit 54 selects the intra-frame prediction mode and sets a switch 53d of the arithmetic unit 53 to an input contact a. This causes the I-picture data to be inputted directly to the DCT mode changeover circuit 55. In this case, no prediction picture is expected from the motion compensation circuit 64. The I-picture data is also supplied to the forward prediction picture unit 63a.
When the forward prediction mode is selected by the prediction decision circuit 54, the circuit 54 also sets the switch 53d to an input contact b which causes the arithmetic unit 53a to subtract the prediction picture, produced by the motion compensation circuit 64, from the picture read out from the memory 51, for each macro-block on a pixel by pixel basis, to produce difference data. The P-picture, after encoding and local decoding, is supplied to one of the units 63a, 63b. For example, if the P-picture immediately follows an I-picture, then the P-picture is stored in the backward prediction picture unit 63b.
For forward predictive encoding, the prediction picture is a reference I-picture or P-picture read out from the forward prediction picture unit 63a of the frame memory 63 and motion-compensated by the motion compensation circuit 64 in accordance with the motion vector outputted from the motion vector detection circuit 50. More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the forward prediction picture unit 63a in an amount corresponding to the motion vector currently output by the motion vector detection circuit 50.
When the backward prediction mode is selected by the prediction decision circuit 54, the circuit 54 also sets the switch 53d to an input contact c which causes the arithmetic unit 53b to subtract the prediction picture, produced by the motion compensation circuit 64, from the picture read out from the memory 51, on a pixel by pixel basis, to produce difference data.
For backward predictive encoding, the prediction picture is a P-picture read out from the backward prediction picture unit 63b of the frame memory 63 and motion-compensated by the motion compensation circuit 64 in accordance with the motion vector outputted from the motion vector detection circuit 50. More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the backward prediction picture unit 63b in an amount corresponding to the motion vector currently output by the motion vector detection circuit 50.
When the bi-directional prediction mode is selected by the prediction decision circuit 54, the circuit 54 sets the switch 53d to an input contact d which causes the arithmetic unit 53c to subtract a prediction picture from the picture read out from the memory 51, on a pixel by pixel basis, to produce difference data. The prediction picture is the mean value of a forward prediction picture and a backward prediction picture.
In the case of bi-directional prediction, the picture stored in the forward prediction picture unit 63a, and the picture stored in the backward prediction picture unit 63b, are read out and motion-compensated by the motion compensation circuit 64 in dependence upon the motion vectors outputted from the motion vector detection circuit 50. More specifically, for each macro-block, the motion compensation circuit 64 shifts the readout address of the forward and backward prediction picture units 63a, 63b in an amount corresponding to the appropriate one of the motion vectors currently output by the motion vector detection circuit 50.
The transmission buffer 59 temporarily stores the data supplied thereto, generates control data indicating the volume of data stored therein and supplies the control data to the quantization circuit 57. When the volume of data stored in the transmission buffer 59 reaches a predetermined upper limit value, the control data from the transmission buffer 59 causes the quantization scale of the quantization circuit 57 to increase so as to decrease the volume of the quantized data. Similarly, when the volume of data stored in the transmission buffer 59 reaches a predetermined lower limit value, the control data from the transmission buffer 59 causes the quantization scale of the quantization circuit 57 to decrease so as to increase the volume of the quantized data. In this manner, the transmission buffer 59 prevents the data supplied thereto from overflowing or underflowing its capacity. The data stored in the transmission buffer 59 are read out at a predetermined timing to an output terminal 69 and thence to a transmission channel for recording on, for example, the recording medium 3.
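The feedback just described amounts to the following control rule: as buffer occupancy approaches its upper limit, the quantization scale is raised so less data is produced, and near the lower limit it is lowered. The occupancy limits, step size and scale range in this sketch are illustrative assumptions, not values taken from the text.

```python
# Minimal sketch of buffer-occupancy feedback on the quantization scale.
def adjust_quantization_scale(scale, occupancy, capacity,
                              upper=0.8, lower=0.2, step=2):
    if occupancy >= upper * capacity:
        scale = min(scale + step, 31)    # coarser quantization -> less data
    elif occupancy <= lower * capacity:
        scale = max(scale - step, 1)     # finer quantization -> more data
    return scale

print(adjust_quantization_scale(16, occupancy=900_000, capacity=1_000_000))  # -> 18
```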
Although the foregoing description has been made with reference mainly to the luminance blocks, the color difference blocks are similarly processed and transmitted using the motion vector which corresponds to the motion vector of the luminance block halved in both the vertical and horizontal directions.
FIG. 9 illustrates the decoder 31 shown in FIG. 4.
The reproduced encoded picture data transmitted from the recording medium 3 is applied to a reception circuit, not shown, or to an input terminal 80 which applies the encoded picture data to a reception buffer 81 that serves to temporarily store the encoded picture data and to supply this data to a variable length decoding circuit 82 of a decoding circuit 90.
The variable length decoding circuit 82 functions to variable length decode the encoded data, to output the recovered motion vector, prediction mode data, prediction flags and DCT flags to the motion compensation circuit 87, and to output the quantization step data and variable length decoded picture-data, including the predictive mode, the motion vector, the predictive flag, the DCT flag and the quantized picture data for each macro-block, to an inverse quantization circuit 83.
The inverse quantization circuit 83 is adapted to dequantize the picture data supplied from the variable length decoding circuit 82 in accordance with the quantization step data supplied from the variable length decoding circuit 82 and to output the thus recovered coefficient data to an inverse transformation IDCT circuit 84.
The IDCT circuit 84 is adapted to perform an inverse transformation on the recovered coefficient data to produce recovered difference data, and to supply the recovered difference data to an arithmetic unit 85.
If the recovered difference data supplied from the IDCT circuit 84 represents an I-picture, the arithmetic unit 85 does not process the data and simply supplies it through an output terminal 91 to the format converting circuit 32 shown in FIG. 4, and to a forward prediction picture unit 86a of a frame memory 86.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a P-picture produced in the forward prediction mode, then the reference picture data of the preceding frame, as stored in the forward prediction picture memory 86a of the frame memory 86, is read and motion-compensated by a motion compensation circuit 87 in dependence upon the motion vector outputted from the variable length decoding circuit 82 to generate a prediction picture. Specifically, the motion compensation circuit 87 uses the motion vector to alter the read out address supplied to the memory 86a. The arithmetic unit 85 adds the prediction picture to the recovered difference data to produce a decoded or reconstructed picture which is stored in a backward prediction picture memory 86b of the frame memory 86. The decoded P-picture is retained in the decoder 31, and output after the next B-picture is decoded and output, so as to restore the pictures to the order in which they were supplied to the encoder 18 of FIG. 4.
Even if the macro-block of the P-picture was encoded as intra-coded mode data, the decoded P-picture is directly stored in the backward prediction picture unit 86b, without being output to the output terminal 91 by the arithmetic unit 85.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the intra-coding mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87, a prediction picture is not generated.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the forward prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87, the data stored in the forward prediction picture unit 86a of the frame memory 86 is read out and motion compensated by the motion compensation circuit 87 using the motion vector supplied from the variable length decoding circuit 82 to form the prediction picture. The arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the backward prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87, the data stored in the backward prediction picture unit 86b is read out and motion compensated by the motion compensation circuit 87 using the motion vector supplied from the variable length decoding circuit 82 to form the prediction picture. The arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
If the recovered difference data supplied from the IDCT circuit 84 represents a macro-block of a B-picture encoded in the bi-directional prediction mode, as determined from the prediction mode supplied from the variable length decoding circuit 82 to the motion compensation circuit 87, the data stored in both the forward and backward prediction picture memories 86a, 86b are read out and respectively motion compensated by the motion compensation circuit 87 using the motion vectors supplied from the variable length decoding circuit 82, then averaged to form the prediction picture. The arithmetic unit 85 sums the recovered difference data with the prediction picture to form the recovered B-picture.
The recovered B-picture is supplied via the output terminal 91 to the format converting circuit 32. However, since the B-picture is not utilized for generating a prediction picture for other pictures, it is not stored in the frame memory 86.
After outputting of the B-picture, picture data of the P-picture stored in the backward prediction picture unit 86b is read and supplied via the motion compensation circuit 87 to the arithmetic unit 85. Motion compensation is not performed at this time.
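To summarize the reconstruction paths described above, the following sketch adds the recovered difference data to a prediction built from the forward reference, the backward reference, or their average, according to the prediction mode. Reducing motion compensation to an integer shift (np.roll) is a deliberate simplification and is not the address-shifting mechanism of circuit 87.

```python
# Minimal sketch of decoder-side reconstruction by prediction mode.
import numpy as np

def reconstruct(diff, mode, fwd_ref=None, bwd_ref=None, mv_fwd=(0, 0), mv_bwd=(0, 0)):
    if mode == "intra":
        return diff                                   # no prediction picture is used
    if mode == "forward":
        pred = np.roll(fwd_ref, mv_fwd, axis=(0, 1))
    elif mode == "backward":
        pred = np.roll(bwd_ref, mv_bwd, axis=(0, 1))
    else:                                             # bidirectional: mean of both references
        pred = (np.roll(fwd_ref, mv_fwd, axis=(0, 1)).astype(int) +
                np.roll(bwd_ref, mv_bwd, axis=(0, 1)).astype(int)) // 2
    return diff + pred

fwd = np.full((16, 16), 100, dtype=np.int16)
bwd = np.full((16, 16), 120, dtype=np.int16)
diff = np.zeros((16, 16), dtype=np.int16)
print(reconstruct(diff, "bidirectional", fwd, bwd)[0, 0])   # -> 110
```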
The counterpart circuits to the prediction mode changeover circuit 52 and the DCT mode changeover circuit 55 in the encoder 18 of FIG. 6 are not shown in the decoder 31. The processing to be performed by these circuits, that is, the processing for restoring the configuration in which odd-field line signals and even-field line signals are separated from each other to the configuration in which odd and even-field line signals alternate with each other, is performed by the motion compensation circuit 87.
The processing of the luminance signals has been explained in the foregoing. As will be appreciated by one of ordinary skill in the art, the processing of the color difference signals is carried out in a similar manner. However, the motion vector employed in such case is the motion vector for luminance signals which is halved in both the vertical and horizontal directions.
FIG. 10 shows the signal to noise ratio (SNR) for pictures transmitted using the above-described technique. As can be seen, the best quality transmission is obtained for I-pictures, good quality transmission is obtained for P-pictures, and the poorest quality transmission is obtained for B-pictures. Thus, if the transmission path has adequate capacity, it is preferable to transmit a picture as an I-picture.
If all pictures cannot be transmitted as I-pictures, it is better to transmit a series of pictures as shown in FIG. 10, rather than in a format in which one average picture quality is used for all pictures. The technique shown in FIG. 10 takes advantage of the characteristic of human vision that, for a given transmission rate, a series of pictures of varying quality is perceived as being of higher quality than a series of pictures of uniform, unchanging quality.
Accordingly, in the configuration of FIG. 6, transmission rate control is carried out by the quantizer 57 in order to attain the picture quality perceived as better.
To dub pictures, two coder-decoder (codec) units are used in series. However, the picture quality obtained from the second codec is substantially worse than the picture quality obtained from the first codec, as explained below.
FIG. 11 shows a configuration representing two codecs connected by an analog connection, namely, coder 201, decoder 202, coder 203 and decoder 204, connected in series.
In FIG. 11, an analog video signal is supplied to an input terminal 200 as an input signal a. The input terminal 200 functions to apply the analog video signal to an A/D converter 211 of coder 201. The converter 211 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to coding circuit 212 that serves to encode this signal as previously described to produce a coded digital video signal.
The coded digital video signal from coding circuit 212 of coder 201 is supplied to a decoding circuit 213 of decoder 202 which is adapted to decode the coded digital video signal and to apply the decoded video signal to D/A converter 214.
The analog video signal produced by the D/A converter 214 of decoder 202 is supplied as an output signal b to the coder 203, which functions in a similar manner as the coder 201.
The digital video signal produced by the coder 203 is supplied to decoder 204 which functions in a similar manner as the decoder 202. The decoder 204 delivers an analog video signal as an output signal c to a terminal 205, which may transmit the signal c to another coder (not shown) and so on.
FIG. 12 shows the SNR of the output signals b, c shown in FIG. 11. The SNR of the output signal c is seen to be substantially worse than the SNR of the output signal b.
The deterioration in picture quality results from a mismatch between the picture type applied in the first codec and the picture type applied in the second codec. Namely, if a picture coded as a B-picture in the first coder/decoder pair is coded as, for example, a P-picture in the second coder/decoder pair, a great deterioration of picture quality results because the picture quality changes as a function of the picture type.
Since the deterioration in picture quality results from the mismatch between picture types of respective stages of codecs, such deterioration similarly takes place when digital connections are used between respective codecs.
FIG. 13 shows a configuration representing two codecs connected by a digital connection, namely, coder 302, decoder 303, coder 304 and decoder 305, connected in series.
An analog video signal is supplied to terminal 300, which supplies the analog video signal as an input signal a to A/D converter 301 that serves to digitize the signal a, and to apply the digital signal to a digital interface 311 of coder 302. The digital interface 311 applies the signal supplied thereto to a coding circuit 312 which encodes or compresses the digital video data to an encoded digital video bit stream.
The encoded digital video signal from the coding circuit 312 is supplied to decoding circuit 313 of decoder 303 that decodes the signal supplied thereto, and applies the decoded signal to digital interface 314. The interface 314 functions to output the decoded signal as an output signal b.
The output signal b is supplied to coder 304 which functions in a similar manner as coder 302 to produce a coded signal that is applied to decoder 305 which functions in a similar manner as decoder 303. The digital signal output from the decoder 305 is supplied to a D/A converter that serves to convert the signal supplied thereto to an analog video signal and to supply the analog video signal as an output signal c to output terminal 307.
FIG. 12 also generally represents the SNR of the output signals b, c shown in FIG. 13.
OBJECTS AND SUMMARY OF THE INVENTION
Therefore, an object of the present invention is to provide a method and apparatus for encoding and decoding picture signals which avoid the aforementioned disadvantages of the prior art.
Another object of the present invention is to provide a method and apparatus for transmitting and receiving picture signals in serial stages which minimizes the deterioration in picture quality at each stage.
Yet another object of the present invention is to match the type of coding applied to pictures of a picture signal in serial processing stages, each stage comprising coding and decoding.
In accordance with one embodiment of the present invention, apparatus and method for processing a digital picture signal operate by receiving a digital picture signal which has picture type data included in a data identification area of the digital picture signal and which indicates one of intrapicture coding, predictive coding and bidirectionally predictive coding for respective pictures represented by the digital picture signal. The picture signal is encoded as a function of the picture type data to produce an encoded picture signal.
In accordance with another embodiment of the present invention, apparatus and method for processing an encoded digital picture signal operate by decoding the encoded digital picture signal so as to produce picture type data which represents the type of encoding of the encoded digital picture signal and to produce a decoded digital picture signal. The picture type data is added to a data identification area of the decoded digital picture signal to produce an output signal.
The above, and other objects, features and advantages of the present invention will be apparent from the following detailed description of the preferred embodiments of the present invention when read in conjunction with the accompanying drawings in which corresponding parts are identified by the same reference numeral.
BRIEF DESCRIPTION OF THE DRAWINGS
FIGS. 1A and 1B are pictures illustrating inter-frame correlation;
FIGS. 2A and 2B are diagrams illustrating types of pictures used in predictive encoding;
FIG. 3 is a diagram illustrating how picture signals are converted into encoded data for transmission;
FIG. 4 is a block diagram showing a conventional device for encoding and decoding picture signals;
FIG. 5 is a diagram referred to in explaining the operation of the format converting circuit shown in FIG. 4;
FIG. 6 is a block diagram showing the encoder of the device shown in FIG. 4;
FIG. 7 is a chart referred to in explaining the predictive encoding operation of the encoder shown in FIG. 6;
FIG. 8 is a chart referred to in explaining the orthogonal transformation operation of the encoder shown in FIG. 6;
FIG. 9 is a block diagram showing the decoder of the device shown in FIG. 4;
FIG. 10 is a graph showing picture quality as a function of picture type in a transmitted signal;
FIG. 11 is a block diagram showing two conventional video codecs connected in series using an analog connection;
FIG. 12 is a graph showing picture quality of the signals output by the codecs of FIG. 11;
FIG. 13 is a block diagram showing two conventional video codecs connected in series using a digital connection;
FIG. 14 is a block diagram showing two video codecs according to the present invention connected in series using an analog connection;
FIG. 15 is a graph showing picture quality of the signals output by the codecs of FIG. 14;
FIG. 16 is a block diagram showing two video codecs according to the present invention connected in series using a digital connection;
FIGS. 17A, 17B and 17C illustrate the data structure of a decoded digital video signal;
FIGS. 18A, 18B and 18C are diagrams illustrating various encoding structures of groups of pictures;
FIG. 19 is a block diagram showing a coder circuit according to the present invention; and
FIG. 20 is a block diagram showing a decoding circuit according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The present invention comprises a picture type identifier which is included with the picture signal at all times, namely, when the signal is encoded and when the signal is decoded. The picture type indicates one of intra-picture coding (an I-picture), predictive coding (a P-picture) and bi-directionally predictive coding (a B-picture). An I-picture includes macro-blocks encoded by only intra-coding. A P-picture includes macro-blocks encoded by intra-coding and/or macro-blocks encoded by forward predictive coding. A B-picture includes macro-blocks encoded by intra-coding and/or macro-blocks encoded by forward predictive coding and/or macro-blocks encoded by backward predictive coding and/or macro-blocks encoded by bi-directionally predictive coding.
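For illustration only, the correspondence between picture types and the macro-block coding modes they may contain can be sketched as follows (a minimal Python sketch; the identifiers are ours, not the patent's):

    from enum import Enum

    class PictureType(Enum):
        I = "intra-picture coding"
        P = "predictive coding"
        B = "bi-directionally predictive coding"

    # Macro-block coding modes permitted in each picture type, per the description above.
    ALLOWED_MACROBLOCK_MODES = {
        PictureType.I: {"intra"},
        PictureType.P: {"intra", "forward"},
        PictureType.B: {"intra", "forward", "backward", "bidirectional"},
    }

    def mode_allowed(picture_type, macroblock_mode):
        """Return True if the given macro-block coding mode may appear in a picture of this type."""
        return macroblock_mode in ALLOWED_MACROBLOCK_MODES[picture_type]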
Initially, each picture of a picture signal is encoded as a function of a respective picture type, then appropriately decoded. The decoded picture signal includes the respective picture types, preferably in the vertical blanking interval of each decoded picture. The decoded picture signal may be further processed by, for example, dubbing.
When the pictures of the decoded picture signal are again encoded, the re-encoding is a function of the picture type included in the decoded picture signal. The re-encoded signal includes the picture type. Subsequent decoding of the re-encoded picture is a function of the picture type. Each picture of the re-decoded picture signal includes its respective picture type.
Thus, the present invention matches the type of predictive coding applied to pictures in a picture signal by serially arranged coders which process the picture signal.
The present invention promotes optimum picture quality. A picture, previously encoded as an I-picture, P-picture or B-picture is again encoded as an I-picture, P-picture or B-picture, respectively. Also, encoding of a picture, previously encoded as a B-picture, as an I-picture or a P-picture is prevented. Thus, deterioration in signal quality after plural coding and decoding operations is minimized.
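The matching behaviour described above can be pictured as a simple pipeline in which each decoding stage hands the picture types it recovers to the next encoding stage. The following is an illustrative Python sketch only; encode and decode stand in for the coding and decoding circuits:

    def encode(pictures, picture_types):
        # Stand-in for a coding circuit: each picture is encoded according to its
        # assigned picture type, and the type is carried with the encoded picture.
        return [{"data": p, "picture_type": t} for p, t in zip(pictures, picture_types)]

    def decode(encoded_stream):
        # Stand-in for a decoding circuit: recovers the pictures together with the
        # picture types that were used to encode them.
        pictures = [e["data"] for e in encoded_stream]
        picture_types = [e["picture_type"] for e in encoded_stream]
        return pictures, picture_types

    # First codec: encode with an externally supplied picture type sequence, then decode.
    types_in = ["B", "B", "I", "B", "B", "P"]
    decoded, types_out = decode(encode(["frame%d" % n for n in range(6)], types_in))

    # Second codec: re-encode using the picture types recovered by the first decoder,
    # so each picture keeps the same coding type through every generation.
    re_encoded = encode(decoded, types_out)
    assert [e["picture_type"] for e in re_encoded] == types_in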
Referring now to the drawings, and in particular to FIG. 14, there are illustrated coding and decoding units (codecs) according to the present invention having a serial analog connection therebetween. A first codec comprises coder 120 and decoder 121, while a second codec comprises coder 122 and decoder 123. It will be appreciated by one of ordinary skill in the art that additional codecs may be serially connected to those shown in FIG. 14.
In FIG. 14, an analog video signal is supplied to an input terminal 100 as an input signal a, and a picture type signal is supplied to an input terminal 108. The picture type indicates one of intra-picture coding (an I-picture), predictive coding (a P-picture) and bi-directionally predictive coding (a B-picture).
The input terminals 100, 108 apply the analog video signal and the picture type signal to an A/D converter 101 and a coding circuit 102, respectively, of coder 120. The converter 101 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to the coding circuit 102.
The coding circuit 102 serves to encode the digital video signal as a function of the picture type signal to produce a coded digital video signal which includes, for each encoded picture, its picture type as identified by the picture type signal. More specifically, if the picture type for a picture indicates intra-picture coding, then the coding circuit 102 codes the picture as an I-picture. If the picture type for a picture indicates predictive coding, then the coding circuit 102 codes the picture as a P-picture. If the picture type for a picture indicates bi-directionally predictive coding, then the coding circuit 102 codes the picture as a B-picture.
The coding unit 120 may alternatively have the structure shown in FIG. 6, in which coding is performed without reference to an externally supplied picture type. As used herein, an externally supplied picture type means a picture type supplied from generally the same source that supplies the digital video signal, rather than from a separate source such as the picture type input device 65 shown in FIG. 6.
The coded digital video signal from coding circuit 102 of FIG. 14 is supplied to a decoding circuit 103 of decoder 121 which is adapted to decode the coded digital video signal as a function of the picture type included in the encoded signal and to apply the decoded video signal to D/A converter 104. The decoding circuit 103 is further adapted to apply the picture type decoded from the coded digital video signal to a multiplexer 105.
The multiplexer 105 is operative to multiplex the picture type information with the decoded video signal to produce a multiplexed analog video signal as an output signal b in which the picture type information is contained in the decoded video signal. Preferably, the multiplexer 105 inserts the picture type for a picture of the decoded video signal into the vertical blanking interval of the picture. As mentioned, a picture may be either a frame or a field of the video signal.
The output signal b is supplied from the multiplexer 105 to a separating circuit 106 of the coder 122. The separating circuit is operative to separate or demultiplex the analog video signal and the picture type information from the output signal b, to supply the separated analog video signal to an A/D converter 107, and to supply the separated picture type information to a coding circuit 108. The converter 107 is adapted to convert the separated analog video signal to a digital video signal, and to apply the digital video signal to the coding circuit 108.
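A compact sketch of this multiplex/separate round trip is given below (illustrative Python only; the field names are ours, and the patent requires only that the picture type accompany each decoded picture, preferably in its vertical blanking interval):

    def multiplex_picture_type(decoded_picture, picture_type):
        # Illustrative multiplexer 105/111: attach the picture type to the picture's
        # vertical blanking interval so that it travels with the decoded signal.
        picture = dict(decoded_picture)
        picture.setdefault("vbi", {})["picture_type"] = picture_type
        return picture

    def separate_picture_type(multiplexed_picture):
        # Illustrative separating circuit 106: recover the picture type and pass the
        # video signal on for re-encoding.
        return multiplexed_picture, multiplexed_picture["vbi"]["picture_type"]

    signal_b = multiplex_picture_type({"active_video": "..."}, "B")
    _, recovered_type = separate_picture_type(signal_b)
    assert recovered_type == "B"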
The coding circuit 108 serves to encode the digital video signal as a function of the separated picture type to produce a re-coded digital video signal which includes, for each re-encoded picture, its picture type as identified by the separated picture type signal.
The re-coded digital video signal from the coding circuit 108 is supplied to the decoder 123, which operates in a similar manner as the decoder 121.
Decoding circuit 110 of decoder 123 decodes the re-coded digital video signal to produce a re-decoded digital video signal and a corresponding picture type signal. The re-decoded digital video signal is converted to an analog signal by D/A converter 109, and applied to a multiplexer 111 which multiplexes the analog video signal with the picture type signal from decoding circuit 110 to produce a multiplexed analog video signal as an output signal c. Preferably, the multiplexer 111 inserts the picture type for a picture into the vertical blanking interval of the picture. The multiplexer 111 applies its output signal c to an output terminal 119.
Due to the inclusion of the picture type identifier in the signals b and c, the codecs of FIG. 14 process respective pictures of the video signals b and c in the same manner, that is, as the same one of an I-picture, a P-picture or a B-picture.
FIG. 15 shows the SNR of the output signals b, c shown in FIG. 14. The SNR of the output signal c is seen to be only slightly worse than the SNR of the output signal b.
That is, since the type of predictive coding applied to each picture is the same in each of the serially arranged codecs, the deterioration in picture quality at each codec is minimized even when the picture quality changes from picture to picture due to the type of predictive coding employed from picture to picture.
FIG. 16 shows codecs according to the present invention having a serial digital connection therebetween. A first codec comprises coder 142 and decoder 143, while a second codec comprises coder 144 and decoder 145.
In FIG. 16, an analog video signal is supplied to an input terminal 140 as an input signal a, and a picture type signal is supplied to an input terminal 148. The input terminals 140, 148 apply the analog video signal and the picture type signal to an A/D converter 141 and a coding circuit 152 of coder 142, respectively. The converter 141 is adapted to convert the analog video signal to a digital video signal, and to apply the digital video signal to a digital interface 151 of coder 142.
The coding circuit 152 serves to encode the digital video signal as a function of the picture type signal to produce a coded digital video signal which includes, for each encoded picture, its picture type as identified by the picture type signal. More specifically, if the picture type for a picture indicates intra-picture coding, then the coding circuit 152 codes the picture as an I-picture. If the picture type for a picture indicates predictive coding, then the coding circuit 152 codes the picture as a P-picture. If the picture type for a picture indicates bi-directionally predictive coding, then the coding circuit 152 codes the picture as a B-picture.
The coding unit 142 may alternatively have the structure shown in FIG. 6, in which coding is performed without reference to an externally supplied picture type.
The coded digital video signal from coding circuit 152 is supplied to a decoding circuit 153 of decoder 143 which is adapted to decode the coded digital video signal as a function of the picture type included in the encoded signal and to apply the decoded video signal to a digital interface 154. The decoding circuit 153 is further adapted to apply the picture type decoded from the coded digital video signal to a multiplexer 155.
The multiplexer 155 is operative to multiplex the picture type information with the decoded video signal to produce a multiplexed digital video signal as an output signal b in which the picture type information is contained in the decoded video signal. Preferably, multiplexer 155 multiplexes the picture type for a picture of the decoded video signal as a flag in the respective picture.
In a preferred embodiment of the present invention, the picture type information is inserted (or multiplexed) into the decoded video signal at a location therein which precedes the actual video data that represents the video field or frame. FIGS. 17A, 17B and 17C illustrate the data structure of the decoded (MPEG) video signal. FIG. 17A illustrates the data structure of a decoded video signal having a serial digital interface format as specified in the SMPTE-259 standard (Society of Motion Picture & Television Engineers). As shown, a frame consists of a first vertical blanking area VBK1, a first optional blanking area OBK1 and a first active video area ACV1, which together constitute the first field, followed by a second vertical blanking area VBK2, a second optional blanking area OBK2 and a second active video area ACV2, which together constitute the second field. In the preferred embodiment, each of the vertical blanking areas consists of 9 horizontal scanning lines, each of the optional blanking areas consists of 10 horizontal scanning lines, the first active video area consists of 244 horizontal scanning lines and the second active video area consists of 243 horizontal scanning lines, for a total of 525 horizontal scanning lines for a single frame.
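The line budget described above can be expressed with a few constants (a simple check using only the counts given in the text):

    # Horizontal scanning line counts for one 525-line frame of the serial digital
    # signal described above (field 1 followed by field 2).
    FRAME_LAYOUT = [
        ("VBK1", 9),    # first vertical blanking area
        ("OBK1", 10),   # first optional blanking area
        ("ACV1", 244),  # first active video area
        ("VBK2", 9),    # second vertical blanking area
        ("OBK2", 10),   # second optional blanking area
        ("ACV2", 243),  # second active video area
    ]

    assert sum(lines for _, lines in FRAME_LAYOUT) == 525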
FIG. 17B illustrates the data structure of a horizontal scanning line. As shown, a horizontal scanning line includes an end of active video (EAV) area followed by an ancillary (ANC) area, a start of active video (SAV) area and a video area. The ANC area, as shown in FIG. 17C, includes an ancillary data flag (ADF) area followed by a data identification (DID) area, a data block number (DBN) area, a data count (DC) area, an ancillary data (ANC DATA) area, and a check sum (CS) area. Since the above-noted areas of a horizontal scanning line of digital data are well-known in the art, their descriptions are omitted herein except where necessary for an understanding of the present invention.
In accordance with the present invention, the picture type information is inserted into the DID area of the ANC area of each of the horizontal scanning lines in the first and second vertical blanking areas. However, the picture type information also may be inserted into the DID area of the ANC area of other horizontal scanning lines of the decoded video signal, although these DID areas may be used for transmitting other types of data. For example, the DID areas in horizontal lines of non-vertical blanking areas may include other formatting information.
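An illustrative sketch of carrying the picture type in the ancillary area of the vertical blanking lines is shown below. The field order follows FIG. 17C (ADF, DID, DBN, DC, ANC DATA, CS); the particular code values and the simple check sum are assumptions for illustration and are not taken from the patent or from the SMPTE standards:

    PICTURE_TYPE_CODES = {"I": 0x01, "P": 0x02, "B": 0x03}  # assumed flag values

    def build_anc_packet(picture_type, block_number=0, payload=()):
        # Build an ANC-style packet whose DID area carries the picture type flag,
        # following the field order described above.
        did = PICTURE_TYPE_CODES[picture_type]   # picture type inserted into the DID area
        data = list(payload)
        packet = {
            "ADF": [0x000, 0x3FF, 0x3FF],        # ancillary data flag preamble
            "DID": did,                          # data identification area
            "DBN": block_number,                 # data block number
            "DC": len(data),                     # data count
            "ANC_DATA": data,                    # optional further ancillary data
        }
        packet["CS"] = (did + block_number + len(data) + sum(data)) & 0x1FF  # simple check sum
        return packet

    def insert_into_vertical_blanking(frame_lines, picture_type):
        # Attach the picture-type packet to every horizontal scanning line lying in a
        # vertical blanking area; frame_lines maps line number -> area name (e.g. "VBK1").
        return {line: build_anc_packet(picture_type)
                for line, area in frame_lines.items()
                if area in ("VBK1", "VBK2")}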
The picture type information (or picture type data) may identify the type of encoding of the picture (e.g., intra-picture coding, predictive coding, and bi-directionally predictive coding) in various ways. FIGS. 18A to 18C illustrate one method in which the type of encoding is identified by the structure of the group of pictures (GOP). As shown, the structure of a group of pictures (GOP) may be identified by the minimum number of frames “M” between I and P pictures, between P and P pictures, and between I and I pictures, and the total number of frames “N” (pictures) in the group of pictures. For example, FIG. 18A illustrates groups of pictures having an encoding structure of M=3 and N=9 in which there are 9 frames in each group and wherein there are 3 frames from each I or P frame to the respectively succeeding I or P frame. Similarly, FIG. 18B illustrates groups of pictures having an encoding structure of M=2 and N=2, and FIG. 18C illustrates groups of pictures having an encoding structure of M=1 and N=2.
When the picture type data identifies the “M” and “N” numbers, the type of encoding for each picture can be determined by the location of a respective picture within the group of pictures, and the location of a respective picture may be identified in the picture type data either by identifying each picture's location within the group of pictures or by identifying only the first picture within the group of pictures. For example, when M=3 and N=9 for a group of frames in the decoded video signal (FIG. 18A), the third frame in that group is identified as a decoded I-frame.
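One way to recover each picture's type from the “M” and “N” numbers and its position in the group is sketched below. The rule encoded here, inferred from FIGS. 18A to 18C and the example above, is that the picture at position M of the group is the I-picture, every M-th picture after it is a P-picture, and the remaining pictures are B-pictures:

    def picture_type_in_gop(position, m, n):
        # Infer the coding type of a picture from its 1-based position within a group
        # of pictures described by M and N (assumed pattern; see the note above).
        if not 1 <= position <= n:
            raise ValueError("position must lie within the group of pictures")
        if position == m:
            return "I"
        if position > m and (position - m) % m == 0:
            return "P"
        return "B"

    # For M=3, N=9 (FIG. 18A) the third frame of the group is the decoded I-frame:
    assert picture_type_in_gop(3, m=3, n=9) == "I"
    assert [picture_type_in_gop(i, 3, 9) for i in range(1, 10)] == [
        "B", "B", "I", "B", "B", "P", "B", "B", "P"]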
Returning to FIG. 16, the output signal b (the multiplexed decoded video signal) is supplied from the multiplexer 155 to a separating circuit 156 of the coder 144. The separating circuit is operative to separate or demultiplex the digital video signal and the picture type data from the output signal b, to supply the separated digital video signal to a digital interface 157, and to supply the separated picture type data to a coding circuit 158. The interface 157 is adapted to apply the separated digital video signal to the coding circuit 158.
The coding circuit 158 serves to encode the separated digital video signal as a function of the separated picture type to produce a re-coded digital video signal which includes, for each re-encoded picture, its picture type as identified by the separated picture type signal.
The re-coded digital video signal from the coding circuit 158 is supplied to the decoder 145, which operates in a similar manner as the decoder 143.
Decoding circuit 160 of decoder 145 decodes the re-coded digital video signal to produce a re-decoded digital video signal and a corresponding picture type signal. The re-decoded digital video signal is supplied to a digital interface 159 and thence to a multiplexer 161 which multiplexes the re-decoded digital video signal with the picture type signal from decoding circuit 160 to produce a multiplexed digital video signal. The multiplexer 161 multiplexes the picture type signal as a flag (e.g., in the DID area of the ANC area) in the re-decoded digital video signal.
The multiplexer 161 supplies the multiplexed digital video signal to a D/A converter 146 which serves to convert the multiplexed digital video signal to an analog video signal, also referred to as output signal c. The converter 146 applies the output signal c to an output terminal 147.
Due to the inclusion of the picture type identifier in the signals b and c, the codecs of FIG. 16 process respective pictures of the video signals b and c in the same manner, that is, as the same one of an I-picture, a P-picture or a B-picture. Consequently, the deterioration in picture quality at each codec is minimized even when the picture quality changes from picture to picture due to the type of predictive coding employed from picture to picture.
FIG. 19 shows the coding circuits of FIGS. 14 and 16 in more detail. In FIG. 19, elements similar to those in FIG. 6 are indicated by the same reference numerals, and detailed explanations thereof are omitted.
In FIG. 19, a picture type signal is supplied to input terminal 70 which serves to supply the picture type signal to motion vector detector 450, predictive judging circuit 454 and variable length coding circuit 458. The processing performed by elements 450, 454 and 458 is similar to the processing performed by elements 50, 54 and 58 of FIG. 6, except that the elements of FIG. 19 perform in accordance with the picture type identified in the external picture type signal which indicates the picture type used in previous coding. The variable length coding circuit 458 includes the picture type based on the external picture type signal as part of the header information.
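In MPEG-style syntax the picture type carried in the header information corresponds to the picture_coding_type field of each picture header. A minimal sketch of that header-writing step follows (the header layout is simplified for illustration and is not taken from the patent):

    PICTURE_CODING_TYPE = {"I": 1, "P": 2, "B": 3}  # MPEG picture_coding_type codes

    def write_picture_header(picture_type, temporal_reference):
        # Sketch of the header-writing step of variable length coding circuit 458:
        # the externally supplied picture type is carried as picture_coding_type so a
        # downstream decoder (and any later re-encoder) can recover it.
        return {
            "temporal_reference": temporal_reference,
            "picture_coding_type": PICTURE_CODING_TYPE[picture_type],
        }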
FIG. 20 shows the decoding circuits of FIGS. 14 and 16 in more detail. In FIG. 20, elements similar to those in FIG. 9 are indicated by the same reference numerals, and detailed explanations thereof are omitted.
Variable length decoding circuit 482 of FIG. 20 is similar to variable length decoding circuit 82 of FIG. 9, except that circuit 482 applies the picture type separated from the encoded signal not only to motion compensator 487, but also to output terminal 92.
Although an illustrative embodiment of the present invention, and various modifications thereof, have been described in detail herein with reference to the accompanying drawings, it is to be understood that the invention is not limited to this precise embodiment and the described modifications, and that various changes and further modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the invention as defined in the appended claims.

Claims (53)

What is claimed is:
1. An apparatus for processing a digital picture signal, comprising:
means for receiving a digital picture signal having picture type data included in a data identification area of said digital picture signal indicating one of intrapicture coding, predictive coding and bidirectionally predictive coding for respective pictures represented by said digital picture signal, said picture type data identifying an encoding structure of a group of pictures represented by said digital picture signal and further identifying each respective picture within said group of pictures so as to identify the type of encoding of said digital picture signal for each said picture; and
coding means for encoding said digital picture signal as a function of said picture type data to produce an encoded digital picture signal.
2. The apparatus of claim 1, wherein said picture type data identifies previous types of coding for said respective pictures represented by said digital picture signal.
3. The apparatus of claim 1, wherein said means for receiving includes means for extracting said picture type data from said digital picture signal.
4. The apparatus of claim 1, wherein said picture type data is included in a data identification area of at least a vertical blanking interval of said digital picture signal.
5. The apparatus of claim 1, further comprising means for decoding said encoded digital picture signal as a function of said picture type data.
6. The apparatus of claim 1, wherein said coding means includes motion vector detection means for detecting motion vectors between said pictures represented by said digital picture signal as a function of said picture type data; predictive judging means for choosing one of intra-coding, forward predictive coding, backward predictive coding and bi-directionally predictive coding said digital picture signal as a function of said picture type data; and variable length coding means for encoding said picture type data in said encoded digital picture signal.
7. The apparatus of claim 1, wherein said digital picture signal includes an ancillary area in which said picture type data is included followed by a video area in which picture data representing a picture of said digital picture signal is included.
8. The apparatus of claim 1, wherein said picture type data identifies a minimum number of frames between two frames encoded either by intrapicture or predictive coding and identifies a total number of frames in said group of pictures represented by said digital picture signal.
9. An apparatus for processing an encoded digital picture signal, comprising:
means for decoding said encoded digital picture signal to produce picture type data representing a type of encoding of said encoded digital picture signal and to produce a decoded digital picture signal, said picture type data identifying a previous encoding structure of a group of pictures represented by said encoded digital picture signal and further identifying each respective picture within said group of pictures represented by said decoded digital picture signal so as to identify the previous type of encoding for each picture represented by said decoded digital picture signal; and
means for including said picture type data in a data identification area of said decoded digital picture signal to produce an output signal.
10. The apparatus of claim 9, wherein said means for including is operable to include said picture type data in a data identification area of at least a vertical blanking interval of said decoded digital picture signal to produce said output signal.
11. The apparatus of claim 9, further comprising means for encoding said decoded digital picture signal as a function of said picture type data.
12. The apparatus of claim 9, wherein said means for decoding includes variable length decoding means for separating said picture type data from said encoded digital picture signal, and wherein said means for including is operative to include the separated picture type data in said decoded digital picture signal.
13. The apparatus of claim 9, wherein said digital picture signal includes an ancillary area in which said picture type data is included followed by a video area in which picture data representing a picture of said digital picture signal is included.
14. The apparatus of claim 9, wherein said picture type data identifies a minimum number of frames between two frames encoded either by intrapicture or predictive coding and identifies a total number of frames in said group of pictures represented by said digital picture signal.
15. A method of processing a digital picture signal, comprising the steps of:
receiving a digital picture signal having picture type data included in a data identification area of said digital picture signal indicating one of intrapicture coding, predictive coding and bidirectionally predictive coding for respective pictures represented by said digital picture signal, said picture type data identifying an encoding structure of a group of pictures represented by said digital picture signal and further identifying each respective picture within said group of pictures so as to identify the type of encoding of said digital picture signal for each said picture; and
encoding said digital picture signal as a function of said picture type data to produce an encoded digital picture signal.
16. The method of claim 15, wherein said picture type data identifies previous types of coding for said respective pictures represented by said digital picture signal.
17. The method of claim 15, wherein said step of receiving is carried out by extracting said picture type data from said digital picture signal.
18. The method of claim 15, wherein said picture type data is included in a data identification area of at least a vertical blanking interval of said digital picture signal.
19. The method of claim 15, further comprising the step of decoding said encoded digital picture signal as a function of said picture type data.
20. The method of claim 15, wherein said step of encoding is carried out by detecting motion vectors between said pictures represented by said digital picture signal as a function of said picture type data; choosing one of intra-coding, forward predictive coding, backward predictive coding and bi-directionally predictive coding said digital picture signal as a function of said picture type data; and variable length encoding said picture type data in said encoded digital picture signal.
21. The method of claim 15, wherein said digital picture signal includes an ancillary area in which said picture type data is included followed by a video area in which picture data representing a picture of said digital picture signal is included.
22. The method of claim 15, wherein said picture type data identifies a minimum number of frames between two frames encoded either by intrapicture or predictive coding and identifies a total number of frames in said group of pictures represented by said digital picture signal.
23. A method of processing an encoded digital picture signal, comprising the steps of:
decoding said encoded digital picture signal to produce picture type data representing a type of encoding of said encoded digital picture signal and to produce a decoded digital picture signal, said picture type data identifying a previous encoding structure of a group of pictures represented by said encoded digital picture signal and further identifying each respective picture within said group of pictures represented by said decoded digital picture signal so as to identify the previous type of encoding for each picture represented by said decoded digital picture signal; and
including said picture type data in a data identification area of said decoded digital picture signal to produce an output signal.
24. The method of claim 23, wherein said step of including is carried out by including said picture type data in a data identification area of at least a vertical blanking interval of said decoded digital picture signal to produce said output signal.
25. The method of claim 23, further comprising the step of encoding said decoded digital picture signal as a function of said picture type data.
26. The method of claim 23, wherein said digital picture signal includes an ancillary area in which said picture type data is included followed by a video area in which picture data representing a picture of said digital picture signal is included.
27. The method of claim 23, wherein said picture type data identifies a minimum number of frames between two frames encoded either by intrapicture or predictive coding and identifies a total number of frames in said group of pictures represented by said digital picture signal.
28. Re-encoding apparatus for re-encoding a decoded video signal obtained by decoding a previously encoded video signal, comprising:
first receiving means for receiving the previously encoded video signal and picture type information that indicates picture type in the previously encoded video signal;
decoding means for decoding the received previously encoded video signal on the basis of said received picture type information to generate said decoded video signal;
transferring means for transferring the decoded video signal and the picture type information;
second receiving means for receiving the transferred decoded video signal and picture type information;
encoding means for re-encoding the received decoded video signal utilizing the received picture type information to generate a re-encoded video signal; and
controlling means for controlling said encoding means so as to match the picture type in said re-encoded video signal with the picture type in said previously encoded video signal.
29. The re-encoding apparatus according to claim 28, wherein
said transferring means multiplexes the picture type information into the generated decoded video signal.
30. The re-encoding apparatus according to claim 29, wherein
said transferring means multiplexes the picture type information into a vertical blanking interval of the generated decoded video signal.
31. The re-encoding apparatus according to claim 28, wherein
the previously encoded video signal is encoded in accordance with an MPEG standard.
32. The re-encoding apparatus according to claim 31, wherein
the picture type information indicates an intra-picture I, a predictive picture P and a bi-directionally predictive picture B.
33. The re-encoding apparatus according to claim 32, wherein
the picture type information indicates intra-picture, predictive picture and bi-directionally predictive picture as flag information.
34. The re-encoding apparatus according to claim 28, further comprising
output means for outputting the generated re-encoded video signal and the picture type information utilized by said encoding means.
35. The re-encoding apparatus according to claim 28, wherein
said encoding means re-encodes said received decoded video signal in accordance with an MPEG standard.
36. The re-encoding apparatus according to claim 35, wherein
said encoding means re-encodes said decoded video signal macro-block by macro-block utilizing the picture type information.
37. The re-encoding apparatus according to claim 35, wherein
the picture type information indicates an intra-picture I, a predictive picture P and a bi-directionally predictive picture B.
38. The re-encoding apparatus according to claim 37, wherein
the picture type information indicates the intra-picture, the predictive picture and the bi-directionally predictive picture as flag information.
39. A method of re-encoding a decoded video signal obtained by decoding a previously encoded video signal, comprising the steps of:
receiving the previously encoded video signal and picture type information that indicates picture type in the previously encoded video signal;
decoding the received previously encoded video signal on the basis of said received picture type information to generate said decoded video signal;
transferring the decoded video signal and the picture type information;
receiving the transferred decoded video signal and picture type information;
re-encoding the received decoded video signal utilizing the received picture type information to generate a re-encoded video signal; and
controlling said re-encoding so as to match the picture type in said re-encoded video signal with the picture type in said previously encoded video signal.
40. Re-encoding apparatus for re-encoding a decoded video signal obtained by decoding a previously encoded video signal, comprising:
a first receiver for receiving the previously encoded video signal and picture type information that indicates picture type in the previously encoded video signal;
a decoder for decoding the received previously encoded video signal on the basis of said received picture type information to generate said decoded video signal;
a transfer unit for transferring the decoded video signal and the picture type information;
a second receiver for receiving the transferred decoded video signal and picture type information;
an encoder for re-encoding the received decoded video signal utilizing the received picture type information to generate a re-encoded video signal; and
a controller for controlling said encoder so as to match the picture type in said re-encoded video signal with the picture type in said previously encoded video signal.
41. Re-encoding apparatus for re-encoding a decoded video signal obtained by decoding a previously encoded video signal, comprising:
generating means for generating picture type information that indicates picture type in the previously encoded video signal;
decoding means for decoding the previously encoded video signal on the basis of the generated picture type information to generate the decoded video signal;
transferring means for transferring the decoded video signal and the generated picture type information;
receiving means for receiving the decoded video signal and the transferred picture type information;
encoding means for re-encoding the received decoded video signal utilizing the received picture type information to generate a re-encoded video signal; and
controlling means for controlling said encoding means so as to match the picture type in said re-encoded video signal with the picture type in said previously encoded video signal.
42. A method of re-encoding a decoded video signal obtained by decoding a previously encoded video signal, comprising the steps of:
generating picture type information that indicates picture type in the previously encoded video signal;
decoding the previously encoded video signal on the basis of the generated picture type information to generate the decoded video signal;
transferring the decoded video signal and the generated picture type information;
receiving the decoded video signal and the transferred picture type information;
re-encoding the received decoded video signal utilizing the received picture type information to generate a re-encoded video signal; and
controlling the re-encoding so as to match the picture type in said re-encoded video signal with the picture type in said previously encoded video signal.
43. Re-encoding apparatus for re-encoding a decoded video signal obtained by decoding a previously encoded video signal, comprising:
a generator for generating picture type information that indicates picture type in the previously encoded video signal;
a decoder for decoding the previously encoded video signal on the basis of the generated picture type information to generate the decoded video signal;
a transfer unit for transferring the decoded video signal and the generated picture type information;
a receiver for receiving the decoded video signal and the transferred picture type information;
an encoder for re-encoding the received decoded video signal utilizing the received picture type information to generate a re-encoded video signal; and
a controller for controlling said encoder so as to match the picture type in said re-encoded video signal with the picture type in said previously encoded video signal.
44. An encoding apparatus for encoding a video signal obtained by decoding a previously encoded video signal, comprising:
receiving means for receiving the decoded video signal and picture type information that indicates picture type in the previously encoded video signal;
encoding means for encoding the video signal received by said receiving means utilizing the received picture type information to generate a re-encoded video signal; and
controlling means for controlling said encoding means so as to match picture type in said re-encoded video signal with picture type in said previously encoded video signal.
45. The encoding apparatus according to claim 44, further comprising
output means for outputting the re-encoded video signal and the picture type information utilized by said encoding means.
46. The encoding apparatus according to claim 44, wherein
said encoding means encodes the video signal in accordance with an MPEG standard.
47. The encoding apparatus according to claim 46, wherein
said encoding means encodes said video signal macro-block by macro-block utilizing the picture type information.
48. The encoding apparatus according to claim 46, wherein
the picture type information indicates an intra-picture I, a predictive picture P and a bi-directionally predictive picture B.
49. The encoding apparatus according to claim 48, wherein
the picture type information indicates the intra-picture, the predictive picture and the bi-directionally predictive picture as flag information.
50. The encoding apparatus according to claim 44, wherein
the picture type information is multiplexed into the video signal, and
said receiving means acquires the picture type information from the video signal.
51. The encoding apparatus according to claim 50, wherein
the picture type information is multiplexed into the vertical blanking interval of the video signal.
52. A method of encoding a video signal obtained by decoding a previously encoded video signal, comprising the steps of:
receiving the decoded video signal and picture type information that indicates picture type in the previously encoded video signal;
encoding the received video signal utilizing the received picture type information to generate a re-encoded video signal; and
controlling said encoding so as to match picture type in said re-encoded video signal with picture type in said previously encoded video signal.
53. An encoding apparatus for encoding a video signal obtained by decoding a previously encoded video signal, comprising:
a receiver for receiving the decoded video signal and picture type information that indicates picture type in the previously encoded video signal;
an encoder for encoding the video signal received by said receiver utilizing the received picture type information to generate a re-encoded video signal; and
a controller for controlling said encoder so as to match picture type in said re-encoded video signal with picture type in said previously encoded video signal.
US12/152,290 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus Expired - Lifetime USRE43238E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/152,290 USRE43238E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US08/219,472 US5473380A (en) 1993-03-29 1994-03-29 Picture signal transmitting method and apparatus
JP13065194A JP3653745B2 (en) 1994-06-13 1994-06-13 Encoding apparatus and method, and encoding / decoding apparatus and method
JP06-130651 1994-06-13
US08/477,855 US5715009A (en) 1994-03-29 1995-06-07 Picture signal transmitting method and apparatus
US09/497,026 USRE40415E1 (en) 1994-03-29 2000-02-02 Picture signal transmitting method and apparatus
US12/152,290 USRE43238E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/477,855 Reissue US5715009A (en) 1994-03-29 1995-06-07 Picture signal transmitting method and apparatus

Publications (1)

Publication Number Publication Date
USRE43238E1 true USRE43238E1 (en) 2012-03-13

Family

ID=46251532

Family Applications (6)

Application Number Title Priority Date Filing Date
US08/477,855 Ceased US5715009A (en) 1994-03-29 1995-06-07 Picture signal transmitting method and apparatus
US09/497,026 Expired - Lifetime USRE40415E1 (en) 1994-03-29 2000-02-02 Picture signal transmitting method and apparatus
US12/152,286 Expired - Lifetime USRE43043E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus
US12/152,289 Expired - Lifetime USRE43021E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus
US12/152,290 Expired - Lifetime USRE43238E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus
US12/152,287 Expired - Lifetime USRE43111E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus

Family Applications Before (4)

Application Number Title Priority Date Filing Date
US08/477,855 Ceased US5715009A (en) 1994-03-29 1995-06-07 Picture signal transmitting method and apparatus
US09/497,026 Expired - Lifetime USRE40415E1 (en) 1994-03-29 2000-02-02 Picture signal transmitting method and apparatus
US12/152,286 Expired - Lifetime USRE43043E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus
US12/152,289 Expired - Lifetime USRE43021E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/152,287 Expired - Lifetime USRE43111E1 (en) 1994-03-29 2008-05-12 Picture signal transmitting method and apparatus

Country Status (1)

Country Link
US (6) US5715009A (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5715009A (en) 1994-03-29 1998-02-03 Sony Corporation Picture signal transmitting method and apparatus
JPH08214265A (en) * 1995-01-31 1996-08-20 Sony Corp Method and device for reproducing encoded data
US6049517A (en) * 1996-04-30 2000-04-11 Sony Corporation Dual format audio signal compression
EP0901283A4 (en) * 1996-05-17 1999-06-30 Matsushita Electric Ind Co Ltd Data multiplexing method, method and device for reproducing multiplexed data, and recording medium containing the data multiplexed by said method
SG65064A1 (en) * 1997-04-09 1999-05-25 Matsushita Electric Ind Co Ltd Image predictive decoding method image predictive decoding apparatus image predictive coding method image predictive coding apparatus and data storage media
US6594311B1 (en) * 1997-10-20 2003-07-15 Hitachi America, Ltd. Methods for reduced cost insertion of video subwindows into compressed video
US6028639A (en) * 1997-12-19 2000-02-22 Thomson Consumer Electronics, Inc. Process and apparatus for converting an MPEG-2 bitstream into SMPTE-259 compatible bitstream
JP3716328B2 (en) 1998-01-19 2005-11-16 ソニー株式会社 Editing apparatus and method, re-encoding apparatus and method
JP4038858B2 (en) * 1998-01-28 2008-01-30 ソニー株式会社 Signal processing apparatus and method
CA2265089C (en) * 1998-03-10 2007-07-10 Sony Corporation Transcoding system using encoding history information
US7551672B1 (en) 1999-02-05 2009-06-23 Sony Corporation Encoding system and method, decoding system and method, multiplexing apparatus and method, and display system and method
WO2000046989A1 (en) 1999-02-05 2000-08-10 Sony Corporation Encoding device, encoding method, decoding device, decoding method, coding system and coding method
JP3672185B2 (en) * 1999-02-09 2005-07-13 ソニー株式会社 CODING SYSTEM, DEVICE USING THE SAME, AND RECORDING MEDIUM
JP3694888B2 (en) 1999-12-03 2005-09-14 ソニー株式会社 Decoding device and method, encoding device and method, information processing device and method, and recording medium
US7054362B1 (en) * 2001-05-29 2006-05-30 Cisco Technology, Inc. Methods and apparatus for updating a reduction ratio
AU2003285850A1 (en) * 2002-04-23 2004-04-30 Nokia Corporation Method and device for indicating quantizer parameters in a video coding system
KR100604032B1 (en) * 2003-01-08 2006-07-24 엘지전자 주식회사 Apparatus for supporting plural codec and Method thereof
JP4196726B2 (en) * 2003-05-14 2008-12-17 ソニー株式会社 Image processing apparatus, image processing method, recording medium, and program
EP1628484B1 (en) * 2004-08-18 2019-04-03 STMicroelectronics Srl Method for transcoding compressed video signals, related apparatus and computer program product therefor
TWI266539B (en) * 2005-01-13 2006-11-11 Via Tech Inc Decoding device with multi-buffers
US20060222251A1 (en) * 2005-04-01 2006-10-05 Bo Zhang Method and system for frame/field coding
US20060256854A1 (en) * 2005-05-16 2006-11-16 Hong Jiang Parallel execution of media encoding using multi-threaded single instruction multiple data processing
JP2008109624A (en) * 2006-09-25 2008-05-08 Victor Co Of Japan Ltd Moving image encoding apparatus, moving image encoding method, and moving image encoding program
ITMI20080741A1 (en) * 2008-04-23 2009-10-24 Clay Paky Spa EFFECT WHEEL ASSEMBLY FOR A LUMINOUS HEADLAMP, IN PARTICULAR A HEADLAMP SCREEN
WO2011027506A1 (en) * 2009-09-03 2011-03-10 日本電気株式会社 Video encoding device, video encoding method and video encoding program
GB2500613A (en) * 2012-03-26 2013-10-02 Sony Corp Audio/visual receiver that can receive non-viewing information via a transmission channel that is not currently in use.
GB2500615B (en) * 2012-03-26 2019-10-23 Saturn Licensing Llc Selecting data packets from a packetized data stream comprising audio/video programme data packets and identification data
US10958987B1 (en) 2018-05-01 2021-03-23 Amazon Technologies, Inc. Matching based on video data
US10630748B1 (en) * 2018-05-01 2020-04-21 Amazon Technologies, Inc. Video-based encoder alignment
US10630990B1 (en) 2018-05-01 2020-04-21 Amazon Technologies, Inc. Encoder output responsive to quality metric information

Citations (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4292651A (en) 1978-12-08 1981-09-29 Francis Kretz Expansion and compression of television signals by use of differential coding
US4825285A (en) 1986-04-19 1989-04-25 United States Of America As Represented By The Director Of The National Security Agency Hybrid encoder
US4893317A (en) 1986-08-07 1990-01-09 International Mobile Machines Corporation Digital signals and frequency correction in a digital wireless system
EP0349847A2 (en) 1988-06-30 1990-01-10 Dainippon Screen Mfg. Co., Ltd. Method of and apparatus for compressing image data
US4941040A (en) * 1985-04-29 1990-07-10 Cableshare, Inc. Cable television system selectively distributing pre-recorded video and audio messages
US5148272A (en) 1991-02-27 1992-09-15 Rca Thomson Licensing Corporation Apparatus for recombining prioritized video data
US5212549A (en) 1991-04-29 1993-05-18 Rca Thomson Licensing Corporation Error concealment apparatus for a compressed video signal processing system
US5227878A (en) 1991-11-15 1993-07-13 At&T Bell Laboratories Adaptive coding and decoding of frames and fields of video
US5260783A (en) 1991-02-21 1993-11-09 Gte Laboratories Incorporated Layered DCT video coder for packet switched ATM networks
US5289190A (en) 1990-12-27 1994-02-22 Kabushiki Kaisha Toshiba Recording/reproducing apparatus including control signal indicating high-efficiency coding
US5291484A (en) 1989-09-04 1994-03-01 Fujitsu Limited Relay and exchange system for time division multiplex data
JPH0670178A (en) 1992-08-21 1994-03-11 Fuji Xerox Co Ltd Encoding device for picture signal
US5327520A (en) 1992-06-04 1994-07-05 At&T Bell Laboratories Method of use of voice message coder/decoder
EP0618734A2 (en) 1993-03-29 1994-10-05 Sony Corporation Picture signal processing
JPH07107461A (en) 1993-09-29 1995-04-21 Sharp Corp Picture encoding device
US5452006A (en) 1993-10-25 1995-09-19 Lsi Logic Corporation Two-part synchronization scheme for digital video decoders
JPH07288804A (en) 1994-04-18 1995-10-31 Kokusai Denshin Denwa Co Ltd <Kdd> Re-coding device for picture signal
US5479212A (en) 1993-07-30 1995-12-26 Matsushita Electric Industrial Co., Ltd. Picture data coding apparatus
WO1995035628A1 (en) 1994-06-17 1995-12-28 Snell & Wilcox Limited Video compression
WO1996001023A1 (en) 1994-06-28 1996-01-11 National Transcommunications Limited Methods for the synchronisation of successive digital video
JPH0865663A (en) 1994-08-19 1996-03-08 Canon Inc Digital image information processor
US5500678A (en) 1994-03-18 1996-03-19 At&T Corp. Optimized scanning of transform coefficients in video coding
JPH0898181A (en) 1994-09-22 1996-04-12 Canon Inc Method and device for picture conversion
JPH08111870A (en) 1994-10-12 1996-04-30 Kokusai Denshin Denwa Co Ltd <Kdd> Method and device for re-coding image information
US5512953A (en) 1994-08-09 1996-04-30 At&T Corp. Method and apparatus for conversion of compressed bit stream representation of video signal
EP0710030A1 (en) 1994-10-31 1996-05-01 Mitsubishi Denki Kabushiki Kaisha Image coded data re-encoding apparatus
US5532746A (en) 1993-11-09 1996-07-02 Korea Telecommunication Authority Bit allocation method for controlling transmission rate of video encoder
US5534937A (en) 1994-04-14 1996-07-09 Motorola, Inc. Minimum-delay jitter smoothing device and method for packet video communications
US5537440A (en) 1994-01-07 1996-07-16 Motorola, Inc. Efficient transcoding device and method
US5541852A (en) 1994-04-14 1996-07-30 Motorola, Inc. Device, method and system for variable bit-rate packet video communications
US5544266A (en) 1993-08-04 1996-08-06 Koninklijke Ptt Nederland N.V. Transcoding device
US5543847A (en) 1992-12-14 1996-08-06 Sony Corporation Picture coding and decoding method for random accessing
US5563593A (en) 1994-03-18 1996-10-08 Lucent Technologies Inc. Video coding with optimized low complexity variable length codes
EP0740474A2 (en) 1995-04-27 1996-10-30 Oki Electric Industry Co., Ltd. Video coding and decoding method and system with teletext capability
US5657086A (en) 1993-03-31 1997-08-12 Sony Corporation High efficiency encoding of picture signals
US5657416A (en) 1993-06-01 1997-08-12 Matsushita Electric Industrial Co., Ltd. Apparatus for further compressing and recording encoded digital video data streams
JPH1032830A (en) 1996-07-16 1998-02-03 Kokusai Denshin Denwa Co Ltd <Kdd> Re-encoding method and device for image information
US5715009A (en) 1994-03-29 1998-02-03 Sony Corporation Picture signal transmitting method and apparatus
JPH1032829A (en) 1996-07-16 1998-02-03 Kokusai Denshin Denwa Co Ltd <Kdd> Re-encoding method and device for image information
US5729293A (en) 1994-06-30 1998-03-17 U.S. Philips Corporation Method and device for transcoding a sequence of coded digital signals
GB2318246A (en) 1996-10-09 1998-04-15 Sony Uk Ltd Processing MPEG encoded video signals
US5754235A (en) 1994-03-25 1998-05-19 Sanyo Electric Co., Ltd. Bit-rate conversion circuit for a compressed motion video bitstream
JPH10136386A (en) 1996-10-09 1998-05-22 Sony United Kingdom Ltd Coded video signal processing system
US5757421A (en) 1994-08-31 1998-05-26 Sony Corporation Method and apparatus for encoding moving picture signals and recording moving picture signals
US5774206A (en) 1995-05-10 1998-06-30 Cagent Technologies, Inc. Process for controlling an MPEG decoder
US5805224A (en) 1995-02-15 1998-09-08 U.S. Philips Corporation Method and device for transcoding video signals
US5812194A (en) 1995-06-06 1998-09-22 Sony Corporation Motion compensated video processing
US5831668A (en) 1992-02-25 1998-11-03 Imatran Voima Oy Assembly for combustion chamber monitoring camera
EP0889650A2 (en) 1997-06-30 1999-01-07 Hewlett-Packard Company Method and apparatus for processing a bitstream
US5870146A (en) 1997-01-21 1999-02-09 Multilink, Incorporated Device and method for digital video transcoding
US5889561A (en) 1994-11-04 1999-03-30 Rca Thomson Licensing Corporation Method and apparatus for scaling a compressed video bitstream
US5912709A (en) 1993-12-01 1999-06-15 Matsushita Electric Industrial Co., Ltd. Method and apparatus for editing or mixing compressed pictures
GB2333656A (en) 1998-01-22 1999-07-28 British Broadcasting Corp Video compression
US5940130A (en) 1994-04-21 1999-08-17 British Telecommunications Public Limited Company Video transcoder with by-pass transfer of extracted motion compensation data
EP0942605A2 (en) 1998-03-10 1999-09-15 Sony Corporation Transcoding system
US5991452A (en) 1993-04-02 1999-11-23 Fujitsu Limited Image and code data compression apparatus
US6012091A (en) 1997-06-30 2000-01-04 At&T Corporation Video telecommunications server and method of providing video fast forward and reverse
JP2000059784A (en) 1998-03-10 2000-02-25 Sony Corp Encoding device and method therefor, and providing medium
US6081295A (en) 1994-05-13 2000-06-27 Deutsche Thomson-Brandt Gmbh Method and apparatus for transcoding bit streams with video data
US6088393A (en) 1995-01-30 2000-07-11 Snell & Wilcox Limited Video signal processing
US6100940A (en) 1998-01-21 2000-08-08 Sarnoff Corporation Apparatus and method for using side information to improve a coding system
JP2000341686A (en) 1999-05-31 2000-12-08 Toshiba Corp Transcoder device
US6163573A (en) 1996-12-12 2000-12-19 Sony Corporation Equipment and method for compressing picture data
US6167084A (en) 1998-08-27 2000-12-26 Motorola, Inc. Dynamic bit allocation for statistical multiplexing of compressed and uncompressed digital video signals
US6201844B1 (en) 1998-04-30 2001-03-13 U.S. Philips Corporation Transcoding of a data stream
JP2001169292A (en) 1999-12-03 2001-06-22 Sony Corp Device and method for processing information, and storage medium
FR2809573A1 (en) 2000-05-26 2001-11-30 Thomson Broadcast Systems METHOD FOR ENCODING A VIDEO IMAGE STREAM
US6369722B1 (en) 2000-03-17 2002-04-09 Matra Nortel Communications Coding, decoding and transcoding methods
US6418167B1 (en) 1996-09-10 2002-07-09 Sony Corporation Moving picture compression/expansion apparatus
US6442207B1 (en) 1996-08-05 2002-08-27 Mitsubishi Denki Kabushiki Kaisha Image coded data re-encoding apparatus without once decoding the original image coded data
US20030018766A1 (en) 2001-06-28 2003-01-23 Sreeram Duvvuru Differentiated quality of service context assignment and propagation
US6567128B1 (en) 1999-09-03 2003-05-20 Sony United Kingdom Limited Video signal processing in which first video information and additional video information are combined to produce frames that are encoded with different picture qualities for edge and non-edge regions
US6574274B2 (en) 1998-02-27 2003-06-03 Sony Corporation Picture signal processing system, decoder, picture signal processing method, and decoding method
US6704363B1 (en) 1999-06-02 2004-03-09 Lg Electronics Inc. Apparatus and method for concealing error in moving picture decompression system
US6839384B2 (en) 2000-11-13 2005-01-04 Nec Corporation Method and apparatus for decoding compressed video signals
US6856650B1 (en) 1996-07-16 2005-02-15 Kokusai Denshin Denwa Co., Ltd. Method and apparatus for second or later generation coding of video signal
US6870886B2 (en) 1993-12-15 2005-03-22 Koninklijke Philips Electronics N.V. Method and apparatus for transcoding a digitally compressed high definition television bitstream to a standard definition television bitstream
JP2005304065A (en) 2005-05-16 2005-10-27 Sony Corp Decoding device and method, coding device and method, information processing device and method, and recording medium
US6987535B1 (en) 1998-11-09 2006-01-17 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US7236526B1 (en) 1999-02-09 2007-06-26 Sony Corporation Coding system and its method, coding device and its method, decoding device and its method, recording device and its method, and reproducing device and its method
US7257264B2 (en) 2001-08-29 2007-08-14 Canon Kabushiki Kaisha Image processing apparatus and method for compression-encoding image area information

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1365593A3 (en) 1996-07-15 2004-11-17 SNELL & WILCOX LIMITED Video signal compression
AU7566898A (en) 1997-05-09 1998-11-27 Neomedia Technologies, Inc. Method for embedding links to a networked resource in a transmission medium

Patent Citations (100)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4292651A (en) 1978-12-08 1981-09-29 Francis Kretz Expansion and compression of television signals by use of differential coding
US4941040A (en) * 1985-04-29 1990-07-10 Cableshare, Inc. Cable television system selectively distributing pre-recorded video and audio messages
US4825285A (en) 1986-04-19 1989-04-25 United States Of America As Represented By The Director Of The National Security Agency Hybrid encoder
US4893317A (en) 1986-08-07 1990-01-09 International Mobile Machines Corporation Digital signals and frequency correction in a digital wireless system
EP0349847A2 (en) 1988-06-30 1990-01-10 Dainippon Screen Mfg. Co., Ltd. Method of and apparatus for compressing image data
US5291484A (en) 1989-09-04 1994-03-01 Fujitsu Limited Relay and exchange system for time division multiplex data
US5289190A (en) 1990-12-27 1994-02-22 Kabushiki Kaisha Toshiba Recording/reproducing apparatus including control signal indicating high-efficiency coding
US5260783A (en) 1991-02-21 1993-11-09 Gte Laboratories Incorporated Layered DCT video coder for packet switched ATM networks
US5148272A (en) 1991-02-27 1992-09-15 Rca Thomson Licensing Corporation Apparatus for recombining prioritized video data
US5212549A (en) 1991-04-29 1993-05-18 Rca Thomson Licensing Corporation Error concealment apparatus for a compressed video signal processing system
US5227878A (en) 1991-11-15 1993-07-13 At&T Bell Laboratories Adaptive coding and decoding of frames and fields of video
US5831668A (en) 1992-02-25 1998-11-03 Imatran Voima Oy Assembly for combustion chamber monitoring camera
US5327520A (en) 1992-06-04 1994-07-05 At&T Bell Laboratories Method of use of voice message coder/decoder
US5754698A (en) 1992-08-21 1998-05-19 Fuji Xerox Co., Ltd. Image signal encoding device having first and second encoding means
JPH0670178A (en) 1992-08-21 1994-03-11 Fuji Xerox Co Ltd Encoding device for picture signal
US5543847A (en) 1992-12-14 1996-08-06 Sony Corporation Picture coding and decoding method for random accessing
US5473380A (en) 1993-03-29 1995-12-05 Sony Corporation Picture signal transmitting method and apparatus
EP0618734A2 (en) 1993-03-29 1994-10-05 Sony Corporation Picture signal processing
US5657086A (en) 1993-03-31 1997-08-12 Sony Corporation High efficiency encoding of picture signals
US5991452A (en) 1993-04-02 1999-11-23 Fujitsu Limited Image and code data compression apparatus
US5657416A (en) 1993-06-01 1997-08-12 Matsushita Electric Industrial Co., Ltd. Apparatus for further compressing and recording encoded digital video data streams
US5479212A (en) 1993-07-30 1995-12-26 Matsushita Electric Industrial Co., Ltd. Picture data coding apparatus
US5544266A (en) 1993-08-04 1996-08-06 Koninklijke Ptt Nederland N.V. Transcoding device
JPH07107461A (en) 1993-09-29 1995-04-21 Sharp Corp Picture encoding device
US5452006A (en) 1993-10-25 1995-09-19 Lsi Logic Corporation Two-part synchronization scheme for digital video decoders
US5532746A (en) 1993-11-09 1996-07-02 Korea Telecommunication Authority Bit allocation method for controlling transmission rate of video encoder
US5912709A (en) 1993-12-01 1999-06-15 Matsushita Electric Industrial Co., Ltd. Method and apparatus for editing or mixing compressed pictures
US6870886B2 (en) 1993-12-15 2005-03-22 Koninklijke Philips Electronics N.V. Method and apparatus for transcoding a digitally compressed high definition television bitstream to a standard definition television bitstream
US5537440A (en) 1994-01-07 1996-07-16 Motorola, Inc. Efficient transcoding device and method
US5500678A (en) 1994-03-18 1996-03-19 At&T Corp. Optimized scanning of transform coefficients in video coding
US5563593A (en) 1994-03-18 1996-10-08 Lucent Technologies Inc. Video coding with optimized low complexity variable length codes
US5754235A (en) 1994-03-25 1998-05-19 Sanyo Electric Co., Ltd. Bit-rate conversion circuit for a compressed motion video bitstream
US5715009A (en) 1994-03-29 1998-02-03 Sony Corporation Picture signal transmitting method and apparatus
US5534937A (en) 1994-04-14 1996-07-09 Motorola, Inc. Minimum-delay jitter smoothing device and method for packet video communications
US5541852A (en) 1994-04-14 1996-07-30 Motorola, Inc. Device, method and system for variable bit-rate packet video communications
JPH07288804A (en) 1994-04-18 1995-10-31 Kokusai Denshin Denwa Co Ltd <Kdd> Re-coding device for picture signal
US5940130A (en) 1994-04-21 1999-08-17 British Telecommunications Public Limited Company Video transcoder with by-pass transfer of extracted motion compensation data
US6081295A (en) 1994-05-13 2000-06-27 Deutsche Thomson-Brandt Gmbh Method and apparatus for transcoding bit streams with video data
WO1995035628A1 (en) 1994-06-17 1995-12-28 Snell & Wilcox Limited Video compression
US6285716B1 (en) 1994-06-17 2001-09-04 Snell & Wilcox Limited Video compression
JPH10503895A (en) 1994-06-17 1998-04-07 スネル アンド ウィルコックス リミテッド Video compression
WO1996001023A1 (en) 1994-06-28 1996-01-11 National Transcommunications Limited Methods for the synchronisation of successive digital video
US5729293A (en) 1994-06-30 1998-03-17 U.S. Philips Corporation Method and device for transcoding a sequence of coded digital signals
US5512953A (en) 1994-08-09 1996-04-30 At&T Corp. Method and apparatus for conversion of compressed bit stream representation of video signal
JPH0865663A (en) 1994-08-19 1996-03-08 Canon Inc Digital image information processor
US5757421A (en) 1994-08-31 1998-05-26 Sony Corporation Method and apparatus for encoding moving picture signals and recording moving picture signals
JPH0898181A (en) 1994-09-22 1996-04-12 Canon Inc Method and device for picture conversion
JPH08111870A (en) 1994-10-12 1996-04-30 Kokusai Denshin Denwa Co Ltd <Kdd> Method and device for re-coding image information
EP0710030A1 (en) 1994-10-31 1996-05-01 Mitsubishi Denki Kabushiki Kaisha Image coded data re-encoding apparatus
US5831688A (en) 1994-10-31 1998-11-03 Mitsubishi Denki Kabushiki Kaisha Image coded data re-encoding apparatus
JPH08130743A (en) 1994-10-31 1996-05-21 Mitsubishi Electric Corp Picture encoded data re-encoding device
US5889561A (en) 1994-11-04 1999-03-30 Rca Thomson Licensing Corporation Method and apparatus for scaling a compressed video bitstream
US6088393A (en) 1995-01-30 2000-07-11 Snell & Wilcox Limited Video signal processing
US5805224A (en) 1995-02-15 1998-09-08 U.S. Philips Corporation Method and device for transcoding video signals
EP0740474A2 (en) 1995-04-27 1996-10-30 Oki Electric Industry Co., Ltd. Video coding and decoding method and system with teletext capability
US5774206A (en) 1995-05-10 1998-06-30 Cagent Technologies, Inc. Process for controlling an MPEG decoder
US5812194A (en) 1995-06-06 1998-09-22 Sony Corporation Motion compensated video processing
JPH1032829A (en) 1996-07-16 1998-02-03 Kokusai Denshin Denwa Co Ltd <Kdd> Re-encoding method and device for image information
JPH1032830A (en) 1996-07-16 1998-02-03 Kokusai Denshin Denwa Co Ltd <Kdd> Re-encoding method and device for image information
US6856650B1 (en) 1996-07-16 2005-02-15 Kokusai Denshin Denwa Co., Ltd. Method and apparatus for second or later generation coding of video signal
US6442207B1 (en) 1996-08-05 2002-08-27 Mitsubishi Denki Kabushiki Kaisha Image coded data re-encoding apparatus without once decoding the original image coded data
US6661843B2 (en) 1996-09-10 2003-12-09 Sony Corporation Moving picture compression/expansion apparatus
US6418167B1 (en) 1996-09-10 2002-07-09 Sony Corporation Moving picture compression/expansion apparatus
GB2318246A (en) 1996-10-09 1998-04-15 Sony Uk Ltd Processing MPEG encoded video signals
US6160844A (en) 1996-10-09 2000-12-12 Sony Corporation Processing digitally encoded signals
JPH10145798A (en) 1996-10-09 1998-05-29 Sony United Kingdom Ltd System for processing digital coding signal
JPH10136386A (en) 1996-10-09 1998-05-22 Sony United Kingdom Ltd Coded video signal processing system
US6125140A (en) 1996-10-09 2000-09-26 Sony Corporation Processing encoded signals
US6493384B1 (en) 1996-12-12 2002-12-10 Sony Corporation Video data compression apparatus and method of same
US6163573A (en) 1996-12-12 2000-12-19 Sony Corporation Equipment and method for compressing picture data
US5870146A (en) 1997-01-21 1999-02-09 Multilink, Incorporated Device and method for digital video transcoding
JPH1174798A (en) 1997-06-30 1999-03-16 Hewlett Packard Co <Hp> Compressed input bit stream processor
US6012091A (en) 1997-06-30 2000-01-04 At&T Corporation Video telecommunications server and method of providing video fast forward and reverse
US5907374A (en) 1997-06-30 1999-05-25 Hewlett-Packard Company Method and apparatus for processing a compressed input bitstream representing an information signal
EP0889650A2 (en) 1997-06-30 1999-01-07 Hewlett-Packard Company Method and apparatus for processing a bitstream
US6100940A (en) 1998-01-21 2000-08-08 Sarnoff Corporation Apparatus and method for using side information to improve a coding system
GB2333656A (en) 1998-01-22 1999-07-28 British Broadcasting Corp Video compression
US6574274B2 (en) 1998-02-27 2003-06-03 Sony Corporation Picture signal processing system, decoder, picture signal processing method, and decoding method
EP0942605A2 (en) 1998-03-10 1999-09-15 Sony Corporation Transcoding system
US6560282B2 (en) 1998-03-10 2003-05-06 Sony Corporation Transcoding system using encoding history information
US20030128766A1 (en) 1998-03-10 2003-07-10 Sony Corporation Transcoding system using encoding history information
US20030016755A1 (en) 1998-03-10 2003-01-23 Katsumi Tahara Transcoding system using encoding history information
JP2000059784A (en) 1998-03-10 2000-02-25 Sony Corp Encoding device and method therefor, and providing medium
US6201844B1 (en) 1998-04-30 2001-03-13 U.S. Philips Corporation Transcoding of a data stream
US6167084A (en) 1998-08-27 2000-12-26 Motorola, Inc. Dynamic bit allocation for statistical multiplexing of compressed and uncompressed digital video signals
US6987535B1 (en) 1998-11-09 2006-01-17 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and storage medium
US7236526B1 (en) 1999-02-09 2007-06-26 Sony Corporation Coding system and its method, coding device and its method, decoding device and its method, recording device and its method, and reproducing device and its method
JP2000341686A (en) 1999-05-31 2000-12-08 Toshiba Corp Transcoder device
US6704363B1 (en) 1999-06-02 2004-03-09 Lg Electronics Inc. Apparatus and method for concealing error in moving picture decompression system
US6567128B1 (en) 1999-09-03 2003-05-20 Sony United Kingdom Limited Video signal processing in which first video information and additional video information are combined to produce frames that are encoded with different picture qualities for edge and non-edge regions
JP2001169292A (en) 1999-12-03 2001-06-22 Sony Corp Device and method for processing information, and storage medium
US6556627B2 (en) 1999-12-03 2003-04-29 Sony Corporation Information processing apparatus, information processing method and recording medium
JP3694888B2 (en) 1999-12-03 2005-09-14 ソニー株式会社 Decoding device and method, encoding device and method, information processing device and method, and recording medium
US6369722B1 (en) 2000-03-17 2002-04-09 Matra Nortel Communications Coding, decoding and transcoding methods
US20040101041A1 (en) 2000-05-26 2004-05-27 Patrice Alexandre Method for coding a video image flux
FR2809573A1 (en) 2000-05-26 2001-11-30 Thomson Broadcast Systems METHOD FOR ENCODING A VIDEO IMAGE STREAM
US6839384B2 (en) 2000-11-13 2005-01-04 Nec Corporation Method and apparatus for decoding compressed video signals
US20030018766A1 (en) 2001-06-28 2003-01-23 Sreeram Duvvuru Differentiated quality of service context assignment and propagation
US7257264B2 (en) 2001-08-29 2007-08-14 Canon Kabushiki Kaisha Image processing apparatus and method for compression-encoding image area information
JP2005304065A (en) 2005-05-16 2005-10-27 Sony Corp Decoding device and method, coding device and method, information processing device and method, and recording medium

Non-Patent Citations (9)

* Cited by examiner, † Cited by third party
Title
Aravind et al., "Image and Video Coding Standards," AT&T Technical Journal, vol. 72, No. 1, Feb. 1993, New York, pp. 67-89.
D. G. Morrison et al., "Reduction of the bit-rate of compressed video while in its coded form," 6th International Workshop on Packet Video, 1994, pp. D17.1-D17.4.
Fleischman, R. M. et al., "Digital Video System Image Enhancement Broadcast Compatible Encoding Process," IP.com Journal, IP.com Inc., West Henrietta, NY, US, Mar. 1, 1992, XP013095952, ISSN: 1533-0001.
Gertjan Keesman et al., "Transcoding of MPEG bitstreams," Signal Processing: Image Communication, Elsevier Science B.V., Sep. 1996, vol. 8, No. 6, pp. 481-500.
Keiichi Hibi, "A Study on Coding Control Scheme for Video Transcoding," PCSJ93, Oct. 4, 1993, pp. 27-28.
Oliver Morgan (SMPTE), "SMPTE 328M Video Elementary Stream Editing Information," Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q6), No. M5924, Mar. 14, 2000, XP030035101.
P. N. Tudor et al., "Real-Time Transcoding of MPEG-2 Video Bit Streams," BBC R&D, IBC '97, Amsterdam <http://www.bbc.co.uk/rd/pubs/papers/pdffiles/ibc97ptw.pdf>.
Björk, N. and Christopoulos, C., "Transcoder Architectures for Video Coding," Ericsson Telecom AB, Compression Lab, HF/ETX/PN/XML, S-12625 Stockholm, Sweden, IEEE Transactions on Consumer Electronics, vol. 44, No. 1, pp. 88-98, Feb. 1998.

Also Published As

Publication number Publication date
USRE43021E1 (en) 2011-12-13
USRE40415E1 (en) 2008-07-01
USRE43043E1 (en) 2011-12-27
US5715009A (en) 1998-02-03
USRE43111E1 (en) 2012-01-17

Similar Documents

Publication Publication Date Title
USRE43238E1 (en) Picture signal transmitting method and apparatus
US5473380A (en) Picture signal transmitting method and apparatus
US5805225A (en) Method and apparatus for variable resolution video picture coding/decoding
US5825421A (en) Video coding method and decoding method and devices thereof
KR100252108B1 (en) Apparatus and method for digital recording and reproducing using mpeg compression codec
KR100571687B1 (en) Coding system and its method, coding device and its method, decoding device and its method, recording device and its method, and reproducing device and its method
KR100766740B1 (en) Apparatus and method for converting data using encoding history information
EP1793614A2 (en) Picture encoding method and apparatus and picture decoding method and apparatus
KR20010020147A (en) Dual Compressed Video Bitstream Camera for Universal Serial Bus Connection
GB2318472A (en) Generating inter-frame encoded video signals
US6256349B1 (en) Picture signal encoding method and apparatus, picture signal transmitting method, picture signal decoding method and apparatus and recording medium
JPH08251582A (en) Encoded data editing device
US6754272B2 (en) Picture signal processing method and apparatus, picture signal recording method and apparatus and recording medium
JPH06276481A (en) Picture signal coding and decoding method and recording medium
AU707033B2 (en) Picture signal transmitting method and apparatus
JP3891198B2 (en) Decoding apparatus and method, and transmission apparatus and method
JP4543321B2 (en) Playback apparatus and method
JP4482811B2 (en) Recording apparatus and method
JP3516622B2 (en) Decoding apparatus and method
JP3500340B2 (en) Coding apparatus and method
JP3516621B2 (en) Encoding device and method
JP3307379B2 (en) Decoding apparatus and method
JPH04306095A (en) Moving picture packet coding decoding system
JP2000165870A (en) Coding device and its method
JP2000165890A (en) Coder and coding method