WO1994023535A1 - Method and apparatus for coding video signal, and method and apparatus for decoding video signal - Google Patents
- Publication number
- WO1994023535A1 (PCT/JP1994/000499)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image signal
- signal
- pixels
- resolution
- resolution image
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
Definitions
- The present invention relates to a system that records a moving image signal on a recording medium such as a magneto-optical disk or a magnetic tape, reproduces the moving image signal, and displays it on a display or the like.
- It also relates to systems in which video signals are transmitted from the transmitting side to the receiving side via a transmission line, as in a video conference system, videophone system, or broadcasting equipment, and are received and displayed on the receiving side.
- TECHNICAL FIELD: The present invention relates to an image signal encoding method and image signal encoding device, and to an image signal decoding method and image signal decoding device, suitable for such systems. BACKGROUND ART: First, an encoding method and a decoding method that do not perform hierarchical encoding will be described; the encoding procedure when hierarchical encoding is performed will be described afterwards.
- In such systems, the image signal is compressed and encoded by exploiting the line correlation or the inter-frame correlation of the image signal in order to use the transmission path efficiently.
- Using line correlation, the image signal can be compressed, for example, by DCT (discrete cosine transform) processing.
- For example, the difference between the frame images PC2 and PC3 in FIG. 22A is calculated to generate the image PC23 in FIG. 22B.
- Images of temporally adjacent frames usually do not change greatly, so when the difference between them is calculated, the difference signal has small values. In the image PC12 shown in FIG. 22B, a small difference signal is obtained between the image signals of the frame images PC1 and PC2 of FIG. 22A; likewise, in the image PC23 shown in FIG. 22B, a small difference signal is obtained between the image signals of the frame images PC2 and PC3.
- The result is the signal indicated by the hatched portions in FIG. 22B. Therefore, if this difference signal is encoded instead of the full image, the code amount can be compressed.
- To exploit this, the image of each frame is encoded as one of three picture types, an I picture (intra-coded picture), a P picture (predictive-coded picture), or a B picture (bidirectionally-coded picture), and the image signal is compressed and encoded accordingly.
- Image signals of 17 frames from F1 to F17 are set as one unit of processing, a group of pictures. The image signal of the first frame F1 is encoded as an I picture, the second frame F2 is processed as a B picture, and the third frame F3 is processed as a P picture.
- The fourth and subsequent frames F4 to F17 are alternately processed as B pictures and P pictures.
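The picture-type ordering described above can be sketched as a short routine. This is only an illustration of the stated sequence (I, B, P, B, P, ..., B, P over 17 frames); the function name and the 17-frame default are assumptions, not part of the patent.

```python
def assign_picture_types(gop_size=17):
    """Assign I/B/P picture types over one group of pictures.

    Sketch of the ordering described above: the first frame is an
    I picture, and the remaining frames alternate B, P, B, P, ...
    """
    types = []
    for n in range(1, gop_size + 1):
        if n == 1:
            types.append("I")       # F1 is the I picture
        elif n % 2 == 0:
            types.append("B")       # F2, F4, ... are B pictures
        else:
            types.append("P")       # F3, F5, ... are P pictures
    return types

# Frames F1..F17 -> I, B, P, B, P, ..., B, P
print("".join(assign_picture_types()))
```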
- FIGS. 24A and B show the principle of the method of encoding a moving image signal in this manner.
- A of FIG. 24 schematically shows frame data of a moving image video signal
- B of FIG. 24 schematically shows frame data to be transmitted.
- Since the first frame F1 is processed as an I picture, that is, a non-interpolated frame, it is transmitted as it is to the transmission path as transmission data F1X (transmitted non-interpolated frame data).
- Since the second frame F2 is processed as a B picture, that is, an interpolated frame, the difference between it and the average of the temporally preceding frame F1 and the temporally succeeding frame F3 (both non-interpolated frames of inter-frame coding) is calculated, and the difference is transmitted as transmission data (transmitted interpolation frame data) F2X.
- More precisely, there are four kinds of processing for a B picture. The first is to transmit the data of the original frame F2 as it is as the transmission data F2X, as shown by the broken-line arrow SP1 in the figure (intra coding mode); this is the same processing as for an I picture.
- The second is to calculate the difference from the temporally succeeding frame F3 and transmit it, as indicated by the broken-line arrow SP2 in the figure (backward prediction mode).
- The third is to transmit the difference from the temporally preceding frame F1, as indicated by the broken-line arrow SP3 in the figure (forward prediction mode).
- The fourth is to generate the difference between frame F2 and the average of the temporally preceding frame F1 and the succeeding frame F3, as indicated by the broken-line arrow SP4 in the figure, and transmit it as the transmission data F2X (bidirectional prediction mode).
- Of these four methods, the one that minimizes the transmission data is adopted for each macroblock.
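The four candidate signals and the per-macroblock selection described above can be sketched as follows. This is a deliberate simplification (block-wise differences without motion compensation, and hypothetical function names); the patent's actual circuits operate on motion-compensated predicted images.

```python
import numpy as np

def candidate_residuals(cur, prev, nxt):
    """Compute the four candidate signals for a B-picture macroblock.

    Simplified sketch: intra transmits the block itself; the other
    modes transmit a difference from the preceding frame, the
    succeeding frame, or their average.
    """
    return {
        "intra":    cur,                       # SP1: original data
        "backward": cur - nxt,                 # SP2: difference from later frame
        "forward":  cur - prev,                # SP3: difference from earlier frame
        "bidir":    cur - (prev + nxt) / 2.0,  # SP4: difference from average
    }

def choose_mode(cands):
    """Pick the mode whose residual needs the least data (sum of |.|)."""
    return min(cands, key=lambda m: np.abs(cands[m]).sum())

prev = np.full((16, 16), 100.0)
nxt  = np.full((16, 16), 104.0)
cur  = np.full((16, 16), 102.0)   # exactly the average of its neighbours
print(choose_mode(candidate_residuals(cur, prev, nxt)))  # -> bidir
```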
- When a difference is transmitted, the motion vector between the reference frame and the frame whose image serves as the predicted image is transmitted together with it: the motion vector x1 between frames F1 and F2 in the case of forward prediction, the motion vector x2 between frames F3 and F2 in the case of backward prediction, or both motion vectors x1 and x2 in the case of bidirectional prediction.
- For the frame F3 of the P picture (a non-interpolated frame of inter-frame coding), the temporally preceding frame F1 is used as the predicted image, and the difference signal from frame F1 (indicated by the broken-line arrow SP3) and the motion vector x3 are calculated and transmitted as transmission data F3X (forward prediction mode).
- Alternatively, the data of the original frame F3 is transmitted as it is as the transmission data F3X (indicated by the broken-line arrow SP1) (intra coding mode).
- Which method is used is selected, as for the B picture, in units of macroblocks, choosing the one that transmits less data.
- The frame F4 of the B picture and the frame F5 of the P picture are processed in the same manner as described above, and transmission data F4X, F5X, motion vectors x4, x5, x6, and so on are obtained.
- FIG. 25 shows a configuration example of a device that encodes and transmits a moving image signal based on the above-described principle and decodes the encoded signal.
- the encoding device 1 encodes an input video signal, transmits the encoded video signal to a recording medium 3 as a transmission path, and records it.
- the decoding device 2 reproduces the signal recorded on the recording medium 3, and decodes and outputs the signal.
- the video signal VD input via the input terminal 10 is input to the preprocessing circuit 11, where the luminance signal and the color signal (in this case, the color difference signal) are separated.
- These are then A/D converted by the A/D converters 12 and 13, respectively.
- The video signal converted into a digital signal by the A/D converters 12 and 13 is supplied to and stored in the frame memory 14.
- the luminance signal is stored in the luminance signal frame memory
- the color difference signal is stored in the color difference signal frame memory 16.
- The format conversion circuit 17 converts the frame-format signal stored in the frame memory 14 into a block-format signal. That is, as shown in FIG. 26(A), the video signal stored in the frame memory 14 is frame-format data consisting of V lines of H dots each.
- The format conversion circuit 17 divides the signal of one frame into N slices in units of 16 lines, and each slice is divided into macroblocks, as shown in FIG. 26(B).
- Each macroblock is composed of a luminance signal corresponding to 16 × 16 pixels (dots), as shown in FIG. 26(C).
- This luminance signal is further divided into blocks Y[1] to Y[4] in units of 8 × 8 dots, also shown in FIG. 26(C).
- The 16 × 16-dot luminance signal is associated with an 8 × 8-dot Cb signal and an 8 × 8-dot Cr signal.
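The frame-to-block format conversion described above can be sketched for the luminance plane. This is an illustrative decomposition (function name assumed), not the circuit itself; slice grouping is omitted.

```python
import numpy as np

def to_macroblocks(lum):
    """Split a frame-format luminance plane (V lines x H dots) into
    16x16 macroblocks, each further divided into four 8x8 blocks
    Y[1]..Y[4] as in the block format described above."""
    V, H = lum.shape
    assert V % 16 == 0 and H % 16 == 0
    mbs = []
    for r in range(0, V, 16):
        for c in range(0, H, 16):
            mb = lum[r:r+16, c:c+16]
            # Y[1]=top-left, Y[2]=top-right, Y[3]=bottom-left, Y[4]=bottom-right
            blocks = [mb[:8, :8], mb[:8, 8:], mb[8:, :8], mb[8:, 8:]]
            mbs.append(blocks)
    return mbs

lum = np.arange(32 * 32, dtype=float).reshape(32, 32)
mbs = to_macroblocks(lum)
print(len(mbs), mbs[0][0].shape)  # 4 macroblocks, each with 8x8 blocks
```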
- The data converted into the block format is supplied from the format conversion circuit 17 to the encoder 18, where it is encoded.
- the signal encoded by the encoder 18 is output to the transmission path as a bit stream, and is recorded on the recording medium 3, for example.
- The data reproduced from the recording medium 3 is supplied to the decoder 31 of the decoding device 2 and decoded. The decoder 31 is described in detail later with reference to FIG. 30.
- the data decoded by the decoder 31 is input to a format conversion circuit 32, and is converted from a block format to a frame format.
- the luminance signal of the frame format is supplied to the luminance signal frame memory 34 of the frame memory 33 and recorded.
- the color difference signal is supplied to and stored in the color difference signal frame memory 35.
- The luminance signal and the color difference signal read from the luminance signal frame memory 34 and the color difference signal frame memory 35 are D/A converted by the D/A converters 36 and 37, respectively, supplied to the post-processing circuit 38, and combined there.
- This output video signal is output from an output terminal 30 to a display such as a CRT (not shown) and displayed.
- the image data to be coded supplied via the input terminal 49 is input to the motion vector detection circuit 50 on a macroblock basis.
- The motion vector detection circuit 50 processes the image data of each frame as an I picture, a P picture, or a B picture according to a preset sequence; it is determined in advance as which of the I, P, and B pictures the sequentially input image of each frame is processed. For example, as shown in FIG. 23, the group of pictures composed of frames F1 to F17 is processed as I, B, P, B, P, ..., B, P.
- Image data of a frame processed as an I picture (for example, frame F1) is transferred from the motion vector detection circuit 50 to the front original image section 51a of the frame memory 51 and stored.
- The image data of a frame processed as a B picture (for example, frame F2) is transferred to and stored in the reference original image section 51b, and the image data of a frame processed as a P picture (for example, frame F3) is transferred to and stored in the rear original image section 51c.
- When the image of a frame to be processed as the next B picture (for example, frame F4) or the next P picture (frame F5) is input, the image data of the first P picture (frame F3), previously stored in the rear original image section 51c, is transferred to the front original image section 51a; the image data of the next B picture (frame F4) is stored (overwritten) in the reference original image section 51b, and the image data of the next P picture (frame F5) is stored (overwritten) in the rear original image section 51c.
- Such operations are sequentially repeated.
- the signal of each picture stored in the frame memory 51 is read therefrom, and the prediction mode switching circuit 52 performs frame prediction mode processing or field prediction mode processing.
- The calculation unit 53 performs a calculation in the intra coding mode, the forward prediction mode, the backward prediction mode, or the bidirectional prediction mode. Which of these processes is performed is determined on a macroblock basis according to the prediction error signal (the difference between the reference image being processed and the predicted image corresponding to it). For this purpose, the motion vector detection circuit 50 generates, on a macroblock basis, the sum of absolute values (or the sum of squares) of the prediction error signal used for this determination, and the evaluation value of the intra coding mode corresponding to it.
- When the frame prediction mode is set, the prediction mode switching circuit 52 outputs the four luminance blocks Y[1] to Y[4] supplied from the motion vector detection circuit 50 as they are to the operation unit 53 in the subsequent stage. In this case, as described above, each luminance block contains a mixture of the data of odd-field lines and even-field lines. In FIG. 28, the solid lines in each macroblock represent the data of odd-field lines (lines of the first field), and the broken lines represent the data of even-field lines (lines of the second field).
- A and B in FIG. 28 indicate the units of motion compensation.
- In the frame prediction mode, prediction is performed in units of four luminance blocks (one macroblock), and one motion vector corresponds to the four luminance blocks.
- In the field prediction mode, the prediction mode switching circuit 52 rearranges the signal input from the motion vector detection circuit 50 into the configuration shown in FIG. 28(B) before output.
- That is, the luminance blocks Y[1] and Y[2] are composed only of the dots of odd-field lines, the other two luminance blocks Y[3] and Y[4] are composed of the data of even-field lines, and the result is output to the arithmetic unit 53.
- In this case, one motion vector corresponds to the two luminance blocks Y[1] and Y[2], and another motion vector corresponds to the other two luminance blocks Y[3] and Y[4].
- As for the chrominance signal, in the frame prediction mode it is supplied to the arithmetic unit 53 with the data of odd-field lines and even-field lines mixed, as shown in FIG. 28(A). In the field prediction mode, as shown in FIG. 28(B), the upper half (4 lines) of each chrominance block Cb, Cr is the odd-field color difference signal corresponding to the luminance blocks Y[1] and Y[2], and the lower half (4 lines) is the even-field color difference signal corresponding to the luminance blocks Y[3] and Y[4].
- The motion vector detection circuit 50 also generates, for each macroblock, the data used by the prediction determination circuit 54 to decide whether the macroblock is coded in the intra coding mode, the forward prediction mode, the backward prediction mode, or the bidirectional prediction mode, and whether the frame prediction mode or the field prediction mode is used, as follows.
- The evaluation value of the intra coding mode and the sum of absolute values of each prediction error are generated for each macroblock. That is, the sum of absolute values Σ|Aij − (average of Aij)| of the differences between the signal Aij of the macroblock of the reference image to be encoded and its average value is determined as the evaluation value of the intra coding mode.
- As the sum of absolute values of the prediction errors of forward prediction, the sum Σ|Aij − Bij| of the absolute values of the differences (Aij − Bij) between the signal Aij of the macroblock of the reference image and the signal Bij of the macroblock of the predicted image is calculated in each of the frame prediction mode and the field prediction mode.
- The sums of absolute values of the prediction errors of backward prediction and bidirectional prediction are calculated in the same manner as for forward prediction (changing the predicted image to the one corresponding to each mode), for each of the frame prediction mode and the field prediction mode.
- The prediction determination circuit 54 selects the smallest of the sums of absolute values of the prediction errors of forward prediction, backward prediction, and bidirectional prediction, in each of the frame prediction mode and the field prediction mode, as the sum of absolute values of the prediction error of inter prediction. Furthermore, it compares this sum with the evaluation value of the intra coding mode, selects the smaller of the two, and selects the mode corresponding to the selected value as the prediction mode and the frame/field prediction mode. That is, if the evaluation value of the intra coding mode is smaller, the intra coding mode is set; if the sum of absolute values of the prediction errors of inter prediction is smaller, the mode with the smallest sum among the forward, backward, and bidirectional prediction modes is set as the prediction mode, together with the corresponding frame or field prediction mode.
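The decision rule described above can be sketched directly from the two evaluation values. This is an illustrative sketch (function name assumed, frame/field distinction folded into the mode names supplied by the caller), not the circuit itself.

```python
import numpy as np

def decide_mode(ref_mb, predictions):
    """Decide intra vs. inter coding for one macroblock.

    Sketch of the decision described above: the intra evaluation value
    is sum(|Aij - mean(Aij)|); each inter mode is evaluated by
    sum(|Aij - Bij|); the smallest value wins.  `predictions` maps a
    mode name to its predicted-image macroblock Bij.
    """
    intra_eval = np.abs(ref_mb - ref_mb.mean()).sum()
    inter_evals = {m: np.abs(ref_mb - b).sum() for m, b in predictions.items()}
    best_inter = min(inter_evals, key=inter_evals.get)
    if intra_eval <= inter_evals[best_inter]:
        return "intra"
    return best_inter

# A flat block is cheapest to code as-is; a well-predicted block goes inter.
flat = np.full((16, 16), 5.0)
print(decide_mode(flat, {"forward": flat + 1}))  # -> intra
```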
- The prediction mode switching circuit 52 outputs the macroblock signal of the reference image in the configuration corresponding to the mode, frame prediction mode or field prediction mode, selected by the prediction determination circuit 54.
- The motion vector detection circuit 50 outputs the motion vector between the predicted image corresponding to the prediction mode selected by the prediction determination circuit 54 and the reference image, and supplies it to the variable length coding circuit 58 and the motion compensation circuit 64, described later. As this motion vector, the one that minimizes the corresponding sum of absolute values of the prediction errors is selected.
- When the motion vector detection circuit 50 is reading the image data of an I picture, the prediction determination circuit 54 sets the intra coding mode as the prediction mode, and the switch 53d of the operation unit 53 is switched to the contact a. As a result, the image data of the I picture is input to the DCT mode switching circuit 55.
- The DCT mode switching circuit 55 outputs the data of the four luminance blocks to the DCT circuit 56 either in a state where odd-field lines and even-field lines are mixed (frame DCT mode) or in a state where they are separated (field DCT mode).
- That is, the DCT mode switching circuit 55 compares the coding efficiency obtained when DCT processing is performed with the data of the odd and even fields mixed against the efficiency obtained when they are separated, and selects the mode with the better coding efficiency.
- For the comparison, the input signal is first arranged with the odd-field and even-field lines mixed, the differences between the signals of vertically adjacent lines (odd-field lines adjacent to even-field lines) are calculated, and the sum of their absolute values (or the sum of squares) is obtained.
- The input signal is then arranged with the odd-field and even-field lines separated, the differences between the signals of vertically adjacent odd-field lines and between the signals of vertically adjacent even-field lines are calculated, and the sums of the absolute values (or sums of squares) are obtained. The two sums of absolute values are compared, and the DCT mode corresponding to the smaller value is set: if the former is smaller, the frame DCT mode is set; if the latter is smaller, the field DCT mode is set.
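The comparison described above can be sketched for one 16 × 16 luminance macroblock. The function name is an assumption; the sum-of-absolute-differences criterion follows the text (the sum-of-squares variant would substitute squaring for the absolute value).

```python
import numpy as np

def choose_dct_mode(mb):
    """Choose frame vs. field DCT for a 16x16 luminance macroblock.

    Sketch of the comparison described above: sum of |difference|
    between vertically adjacent lines with the fields mixed (frame DCT)
    versus the same sum computed within each field separately (field
    DCT); the smaller sum selects the mode.
    """
    # frame DCT: lines interleaved as stored
    frame_cost = np.abs(np.diff(mb, axis=0)).sum()
    odd, even = mb[0::2], mb[1::2]    # first-field / second-field lines
    field_cost = (np.abs(np.diff(odd, axis=0)).sum()
                  + np.abs(np.diff(even, axis=0)).sum())
    return "frame" if frame_cost <= field_cost else "field"

# With motion, the two fields differ strongly and field DCT wins.
interlaced = np.zeros((16, 16))
interlaced[1::2] = 10.0
print(choose_dct_mode(interlaced))  # -> field
```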
- the data having the configuration corresponding to the selected DCT mode is output to the DCT circuit 56, and the DCT flag indicating the selected DCT mode is output to the variable length encoding circuit 58.
- If the prediction mode switching circuit 52 selects the frame prediction mode (the mode in which odd lines and even lines are mixed), the DCT mode switching circuit 55 is also likely to select the frame DCT mode (the mode in which odd lines and even lines are mixed); if the prediction mode switching circuit 52 selects the field prediction mode (the mode in which the data of the odd field and the even field are separated), the DCT mode switching circuit 55 is likely to select the field DCT mode (the mode in which the odd field and the even field are separated).
- This is not always the case, however: in the prediction mode switching circuit 52 the mode is determined so that the sum of absolute values of the prediction errors becomes small, whereas in the DCT mode switching circuit 55 the mode is determined so that the coding efficiency is good.
- The I-picture image data output from the DCT mode switching circuit 55 is input to the DCT circuit 56, subjected to DCT (discrete cosine transform) processing, and converted into DCT coefficients.
- The DCT coefficients are input to the quantization circuit 57, quantized with a quantization step corresponding to the amount of data stored in the transmission buffer 59 (the buffer storage amount), and then input to the variable length coding circuit 58.
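The transform-and-quantize step can be sketched as follows. This uses an illustrative orthonormal 8 × 8 DCT and a single uniform quantization step; the patent does not specify the quantizer matrix, so the function names and the step value are assumptions.

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of an 8x8 block (orthonormal), built from the 1-D
    DCT basis matrix.  Illustrative sketch of the transform step."""
    N = 8
    k = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def quantize(coeffs, step):
    """Uniform quantization with a step set by buffer fullness."""
    return np.round(coeffs / step).astype(int)

block = np.full((8, 8), 100.0)       # flat block: only the DC term survives
q = quantize(dct2(block), step=8)
print(q[0, 0], np.count_nonzero(q))  # -> 100 1
```

The concentration of energy into few nonzero coefficients is what makes the subsequent variable length coding effective.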
- The variable length coding circuit 58 converts the image data supplied from the quantization circuit 57 (in this case, I-picture data) into a variable length code such as a Huffman code in accordance with the quantization step (scale) supplied from the quantization circuit 57, and outputs it to the transmission buffer 59.
- The variable length coding circuit 58 also receives the quantization step (scale) from the quantization circuit 57, the prediction mode (a flag indicating which of the intra coding mode, forward prediction mode, backward prediction mode, or bidirectional prediction mode has been set) from the prediction determination circuit 54, the motion vector from the motion vector detection circuit 50, the prediction flag (a flag indicating whether the frame prediction mode or the field prediction mode has been set) from the prediction determination circuit 54, and the DCT flag (a flag indicating whether the frame DCT mode or the field DCT mode has been set) output by the DCT mode switching circuit 55; these are also converted into variable length codes.
- The transmission buffer 59 temporarily stores the input data and outputs to the quantization circuit 57 a quantization control signal corresponding to the amount of data stored.
- When the amount of remaining data increases to the permissible upper limit, the transmission buffer 59 increases the quantization scale of the quantization circuit 57 via the quantization control signal, thereby reducing the amount of quantized data. Conversely, when the amount of remaining data decreases to the permissible lower limit, the transmission buffer 59 decreases the quantization scale of the quantization circuit 57 via the quantization control signal, thereby increasing the amount of quantized data. In this way, overflow and underflow of the transmission buffer 59 are prevented.
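The feedback described above can be sketched as a simple controller. The function name, the step size, and the threshold handling are illustrative assumptions; the patent only specifies the direction of adjustment at each limit.

```python
def update_quantizer_scale(scale, buffer_fill, upper, lower, step=2):
    """Feedback sketch: near the buffer's upper limit, coarsen
    quantization to shrink the data; near the lower limit, refine it
    to generate more data; otherwise leave the scale unchanged."""
    if buffer_fill >= upper:
        return scale + step          # coarser -> less data
    if buffer_fill <= lower:
        return max(1, scale - step)  # finer -> more data
    return scale

print(update_quantizer_scale(8, buffer_fill=950, upper=900, lower=100))  # -> 10
print(update_quantizer_scale(8, buffer_fill=50,  upper=900, lower=100))  # -> 6
```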
- the data stored in the transmission buffer 59 is read at a predetermined timing, output to the transmission path via the output terminal 69, and recorded on the recording medium 3, for example.
- Meanwhile, the I-picture data output from the quantization circuit 57 is also input to the inverse quantization circuit 60 and inversely quantized in accordance with the quantization step supplied from the quantization circuit 57.
- The output of the inverse quantization circuit 60 is input to the IDCT (inverse DCT) circuit 61, subjected to inverse DCT processing, and then supplied via the arithmetic unit 62 to the forward prediction image section 63a of the frame memory 63 and stored there.
- When processing the sequentially input image data of each frame as, for example, I, B, P, B, P, B, ... pictures, the motion vector detection circuit 50 processes the image data of the first input frame as an I picture and then, before processing the image of the next input frame as a B picture, processes the image data of the frame input after that as a P picture. This is because a B picture may involve backward prediction or bidirectional prediction and cannot be decoded unless the P picture serving as its backward predicted image has been prepared beforehand.
- After the processing of the I picture, the motion vector detection circuit 50 therefore starts processing the image data of the P picture stored in the rear original image section 51c. Then, as described above, the evaluation value of the intra coding mode and the sum of absolute values of the inter-frame differences (prediction errors) in macroblock units are supplied from the motion vector detection circuit 50 to the prediction determination circuit 54.
- In accordance with the evaluation value of the intra coding mode and the sums of absolute values of the prediction errors of the macroblocks of the P picture, the prediction determination circuit 54 sets, for each macroblock, either the frame prediction mode or the field prediction mode, and either the intra coding mode or the forward prediction mode as the prediction mode.
- When the intra coding mode is set, the operation unit 53 switches the switch 53d to the contact a as described above. This data is therefore transmitted to the transmission path via the DCT mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable length coding circuit 58, and the transmission buffer 59, in the same way as the I-picture data. The same data is also supplied, via the inverse quantization circuit 60, the IDCT circuit 61, and the arithmetic unit 62, to the backward prediction image section 63b of the frame memory 63 and stored there.
- When the forward prediction mode is set, the switch 53d is switched to the contact b, and the image data stored in the forward prediction image section 63a of the frame memory 63 (in this case, the image of the I picture) is read out and motion-compensated by the motion compensation circuit 64 in accordance with the motion vector output by the motion vector detection circuit 50. That is, when the prediction determination circuit 54 instructs the setting of the forward prediction mode, the motion compensation circuit 64 shifts the read address of the forward prediction image section 63a from the position corresponding to the macroblock position currently output by the motion vector detection circuit 50 by an amount corresponding to the motion vector, reads the data, and generates the predicted image data.
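The address-shifted read described above can be sketched as an array lookup. This is an illustrative sketch (function name assumed, integer-pel vectors only, no edge handling), not the memory circuit itself.

```python
import numpy as np

def motion_compensate(ref_frame, mb_row, mb_col, mv):
    """Read a 16x16 predicted-image macroblock from the reference
    frame, offset from the current macroblock position by the motion
    vector (dy, dx): the address-shifted read described above."""
    dy, dx = mv
    r, c = mb_row * 16 + dy, mb_col * 16 + dx
    return ref_frame[r:r+16, c:c+16]

ref = np.arange(64 * 64).reshape(64, 64)
pred = motion_compensate(ref, mb_row=1, mb_col=1, mv=(2, 3))
print(pred.shape, pred[0, 0])  # 16x16 block starting at ref[18, 19]
```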
- The predicted image data output from the motion compensation circuit 64 is supplied to the arithmetic unit 53a.
- The arithmetic unit 53a subtracts the predicted image data corresponding to the macroblock, supplied from the motion compensation circuit 64, from the macroblock data of the reference image supplied from the prediction mode switching circuit 52, and outputs the difference (prediction error).
- the difference data is transmitted to the transmission path via the DCT mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable length coding circuit 58, and the transmission buffer 59.
- the difference data is locally decoded by the inverse quantization circuit 60 and the IDCT circuit 61, and is input to the arithmetic unit 62.
- The same data as the predicted image data supplied to the arithmetic unit 53a is supplied to the arithmetic unit 62.
- the arithmetic unit 62 adds the prediction image data output from the motion compensation circuit 64 to the difference data output from the IDCT circuit 61. As a result, the original (decoded) P picture image data is obtained.
- the image data of the P picture is supplied to and stored in the backward prediction image part 63 b of the frame memory 63.
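The local decoding performed by the inverse quantization circuit 60, the IDCT circuit 61, and the arithmetic unit 62 can be sketched as follows. The orthonormal inverse DCT and the function names are illustrative assumptions; the point is that the encoder stores the same reconstruction the decoder will obtain, not the original input.

```python
import numpy as np

def idct2(coeffs):
    """Inverse of the orthonormal 8x8 2-D DCT (illustrative sketch)."""
    N = 8
    k = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C.T @ coeffs @ C

def local_decode(q_coeffs, step, prediction):
    """Inverse quantize, inverse DCT, then add the motion-compensated
    prediction: the reconstruction stored as the next reference."""
    return prediction + idct2(q_coeffs * step)

pred = np.full((8, 8), 100.0)
q = np.zeros((8, 8))
q[0, 0] = 4                             # quantized DC of the residual
rec = local_decode(q, step=8, prediction=pred)
print(round(rec[0, 0], 1))              # flat residual of 4 -> 104.0
```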
- Strictly speaking, when the frame/field prediction mode and the frame/field DCT mode differ, a circuit for rearranging the data accordingly is necessary, but it is omitted here for simplicity.
- For a B picture as well, the prediction determination circuit 54 sets the frame/field prediction mode in accordance with the evaluation value of the intra coding mode in macroblock units and the magnitude of the sums of absolute values of the inter-frame differences, and sets the prediction mode to one of the intra coding mode, the forward prediction mode, the backward prediction mode, and the bidirectional prediction mode.
- In the intra coding mode or the forward prediction mode, the switch 53d is switched to the contact a or the contact b, respectively. At this time, the same processing as for a P picture is performed, and the data is transmitted.
- In the backward prediction mode or the bidirectional prediction mode, the switch 53d is switched to the contact c or the contact d, respectively.
- In the backward prediction mode, with the switch 53d switched to the contact c, the image data stored in the backward prediction image section 63b (in this case, the image of the P picture) is read out and motion-compensated by the motion compensation circuit 64 in accordance with the motion vector output from the motion vector detection circuit 50. That is, when the prediction determination circuit 54 instructs the setting of the backward prediction mode, the motion compensation circuit 64 shifts the read address of the backward prediction image section 63b from the position corresponding to the macroblock position currently being output by the motion vector detection circuit 50 by an amount corresponding to the motion vector, reads the data, and generates the predicted image data. The predicted image data output from the motion compensation circuit 64 is supplied to the arithmetic unit 53b.
- The arithmetic unit 53b subtracts the predicted image data supplied from the motion compensation circuit 64 from the macroblock data of the reference image supplied from the prediction mode switching circuit 52, and outputs the difference. This difference data is transmitted to the transmission path via the DCT mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable length coding circuit 58, and the transmission buffer 59.
- In the bidirectional prediction mode, with the switch 53d switched to the contact d, the image data stored in the forward prediction image section 63a (in this case, the image of the I picture) and the image data stored in the backward prediction image section 63b (in this case, the image of the P picture) are read out and motion-compensated by the motion compensation circuit 64 in accordance with the motion vectors output by the motion vector detection circuit 50. That is, when the prediction determination circuit 54 instructs the setting of the bidirectional prediction mode, the motion compensation circuit 64 shifts the read addresses of the forward prediction image section 63a and the backward prediction image section 63b from the positions corresponding to the macroblock position currently being output by the motion vector detection circuit 50 by amounts corresponding to the forward and backward motion vectors, reads the data, and generates the predicted image data.
- the predicted image data output from the motion compensation circuit 64 is supplied to a calculator 53c.
- the arithmetic unit 53c subtracts the average value of the predicted image data supplied from the motion compensation circuit 64 from the macroblock data of the reference image supplied from the motion vector detection circuit 50, and outputs the difference.
- This difference data is transmitted to the transmission line via the DCT mode switching circuit 55, the DCT circuit 56, the quantization circuit 57, the variable length coding circuit 58, and the transmission buffer 59.
- the image data of the B picture is not stored in the frame memory 63, because it is not used as a predicted image for other pictures.
- the banks of the forward prediction image section 63a and the backward prediction image section 63b are switched as necessary, so that the image stored in one or the other can be output, for a given reference image, as the forward prediction image or the backward prediction image.
- FIG. 30 is a block diagram showing a configuration of an example of the decoder 31 of FIG. 25.
- the encoded image data transmitted via the transmission path (recording medium 3) is received by a receiving circuit (not shown) or reproduced by a reproducing device, and is transmitted to a receiving buffer 81 via an input terminal 80. After being temporarily stored, it is supplied to the variable length decoding circuit 82 of the decoding circuit 90.
- the variable length decoding circuit 82 performs variable length decoding on the data supplied from the reception buffer 81, outputs the motion vector, prediction mode, prediction flag, and DCT flag to the motion compensation circuit 87, and outputs the quantization step and the decoded image data to the inverse quantization circuit 83.
- the inverse quantization circuit 83 inversely quantizes the image data supplied from the variable-length decoding circuit 82 in accordance with the quantization step likewise supplied from the variable-length decoding circuit 82, and outputs the result to the IDCT circuit 84.
- the data (DCT coefficient) output from the inverse quantization circuit 83 is subjected to inverse DCT processing in the IDCT circuit 84 and supplied to the arithmetic unit 85.
- when the data supplied from the IDCT circuit 84 is I-picture data, the data is output from the arithmetic unit 85 and is supplied to and stored in the forward prediction image section 86a of the frame memory 86 in order to generate predicted image data for image data (P- or B-picture data) to be input to the arithmetic unit 85 later. This data is also output to the format conversion circuit 32 (FIG. 25).
- when the image data supplied from the IDCT circuit 84 is P-picture data whose predicted image data is the image data one frame before, and is macroblock data encoded in the forward prediction mode, the image data one frame before (the I-picture data) stored in the forward prediction image section 86a of the frame memory 86 is read out, and the motion compensation circuit 87 performs motion compensation on it corresponding to the motion vector output from the variable length decoding circuit 82.
- the arithmetic unit 85 then adds the image data (difference data) supplied from the IDCT circuit 84 to this predicted image data and outputs the result.
- the added data, that is, the decoded P-picture data, is supplied to and stored in the backward prediction image section 86b of the frame memory 86 in order to generate predicted image data for image data (B- or P-picture data) to be input to the arithmetic unit 85 later.
- when the data supplied from the IDCT circuit 84 is B-picture data, then in accordance with the prediction mode supplied from the variable length decoding circuit 82, the I-picture image data stored in the forward prediction image section 86a of the frame memory 86 (in the case of the forward prediction mode), the P-picture image data stored in the backward prediction image section 86b (in the case of the backward prediction mode), or both sets of image data (in the case of the bidirectional prediction mode) are read out, and the motion compensation circuit 87 performs motion compensation corresponding to the motion vector output from the variable length decoding circuit 82, thereby generating a predicted image.
- when motion compensation is not required (in the case of the intra-coding mode), no predicted image is generated.
- the data subjected to the motion compensation by the motion compensation circuit 87 in this way is added to the output of the IDCT circuit 84 in the arithmetic unit 85. This addition output is output to the format conversion circuit 32 via the output terminal 91.
- since this added output is B-picture image data and is not used for generating predicted images of other pictures, it is not stored in the frame memory 86.
- the image data of the P picture stored in the backward prediction image section 86b is then read out and output as a reproduced image via the motion compensation circuit 87 and the arithmetic unit 85. At this time, however, motion compensation and addition are not performed.
- the motion compensation circuit 87 also performs, as necessary, the process of returning the arrangement in which the lines of the odd field and the even field are separated to the original interleaved arrangement.
- in the above, the processing of the luminance signal has been described; the chrominance signal is processed in the same manner. In that case, however, the motion vector used is the motion vector for the luminance signal halved (multiplied by 1/2) in both the vertical and horizontal directions.
- incidentally, there is hierarchical coding, by which both a higher-resolution image and a lower-resolution image can be obtained.
- in this case, a high-resolution input image is down-sampled by a circuit as shown in FIG. 31. That is, the signal of the high-resolution input image is input to the low-pass filter 91, which removes unnecessary high-frequency components. The signal limited to a predetermined frequency band by the low-pass filter 91 is input to the thinning circuit 92, which thins out the pixels at a rate of one out of every two. As a result, a signal having 1/4 the resolution is obtained.
- the image signal having 1/4 resolution is encoded and transmitted as described above. The high-resolution image signal is also encoded and transmitted together with the 1/4 resolution image signal.
- at that time, a predicted image signal of the high-resolution image signal is generated from the signal obtained by decoding the encoded 1/4 resolution image signal. For this purpose, an interpolation circuit 95 as shown in FIG. 32 is used. That is, the signal obtained by decoding the encoded 1/4 resolution image signal is input to the interpolation circuit 95.
- the interpolation circuit 95 interpolates (up-samples) this signal as shown in FIG. 33. That is, the luminance data of a line having no luminance data is generated by multiplying the luminance data on the lines above and below it by 1/2 and adding the results (i.e., averaging them). Since the band was limited by the down-sampling circuit shown in FIG. 31, this up-sampling does not extend the spatial frequency content, but it doubles the number of lines (the resolution).
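The line interpolation just described can be sketched as follows (an illustrative Python sketch; the function name and the edge handling for the last line are assumptions not given in the text):

```python
def upsample_lines_2x(lines):
    """Double the number of lines: each missing line is the average
    (1/2 + 1/2 weighting) of the decoded lines above and below it."""
    out = []
    for i, line in enumerate(lines):
        out.append(line)  # existing line is kept as-is
        if i + 1 < len(lines):
            out.append((line + lines[i + 1]) / 2.0)  # interpolated line
        else:
            out.append(line)  # bottom edge: repeat the last line (assumed)
    return out

print(upsample_lines_2x([10.0, 20.0, 30.0]))  # [10.0, 15.0, 20.0, 25.0, 30.0, 30.0]
```

As noted above, the averaging adds no new spatial frequencies; it only restores the original line count.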
- a high-resolution image signal is encoded and transmitted based on the predicted image signal thus generated.
- the decoder decodes the 1/4 resolution image signal in the same manner as described above, generates the predicted image signal of the high-resolution image signal from it, and decodes the high-resolution image signal using that predicted image signal. When the predicted image signal is generated on the decoder side as well, an interpolation circuit as shown in FIG. 32 is used.
- in this way, the high-resolution image signal is down-sampled at a ratio of 2:1 to generate the low-resolution image signal.
- conventionally, a high-definition television signal typified by Hi-Vision is used as the high-resolution image signal, and the low-resolution image signal obtained by down-sampling it at a ratio of 2:1 is encoded and transmitted. Since the aspect ratio remains 16:9, there was the problem that the 1/4 resolution image signal could not be monitored on an NTSC television receiver having an aspect ratio of 4:3.
- the present invention has been made in view of such circumstances, and makes it possible to monitor a low-resolution image signal, obtained by down-sampling a high-resolution image signal having an aspect ratio of 16:9, on a conventional NTSC receiver, for example.
- the image signal encoding method of the present invention decomposes an image signal into a low-resolution image signal and a high-resolution image signal, generates a predicted image signal of the high-resolution image signal from the signal obtained by decoding the encoded low-resolution image signal, and encodes the high-resolution image signal using the generated predicted image signal. In this method, the high-resolution image signal is thinned out using spatial filtering at a predetermined resolution so as to have a different aspect ratio, thereby generating the low-resolution image signal; the signal obtained by decoding the encoded low-resolution image signal is interpolated using spatial filtering at a predetermined resolution so as to restore the original aspect ratio, thereby generating the predicted image signal of the high-resolution image signal; and the high-resolution image signal is encoded using the generated predicted image signal.
- for example, the high-resolution image signal can be converted into a squeeze-type low-resolution image signal by scaling it by 1/2 vertically and 3/8 horizontally. In that case, the predicted image signal of the high-resolution image signal can be generated by scaling the signal obtained by decoding the encoded low-resolution image signal by 2 vertically and 8/3 horizontally.
- this method can be applied, for example, to a high-resolution image signal having 1920 pixels in the horizontal direction and 960 pixels in the vertical direction, or 1920 pixels in the horizontal direction and 1152 pixels in the vertical direction.
- alternatively, the high-resolution image signal can be scaled by 7/15 vertically and 3/8 horizontally. This method can be applied to a high-resolution image signal having 1920 pixels in the horizontal direction and 1024 pixels in the vertical direction.
- another image signal encoding method of the present invention decomposes an image signal into a low-resolution image signal and a high-resolution image signal, generates a predicted image signal of the high-resolution image signal from the signal obtained by decoding the encoded low-resolution image signal, and encodes the high-resolution image signal using the generated predicted image signal. In this method, the high-resolution image signal is thinned out using spatial filtering at a resolution of 3/8 vertically and 3/8 horizontally to generate a letterbox-type low-resolution image signal, a predicted image signal of the high-resolution image signal is generated from the signal obtained by decoding the encoded low-resolution image signal, and the high-resolution image signal is encoded using the generated predicted image signal.
- this method can be applied to a high-resolution image signal having 1920 pixels in the horizontal direction and 960 pixels in the vertical direction, or 1920 pixels in the horizontal direction and 1152 pixels in the vertical direction.
- alternatively, the predicted image signal of the high-resolution image signal can be generated using spatial filtering at a resolution of 20/7 vertically and 8/3 horizontally. This method can be applied to a high-resolution image signal having 1920 pixels in the horizontal direction and 1024 pixels in the vertical direction.
- the high-resolution image signal can also be scaled by 1/3 vertically and 3/8 horizontally.
- image signal decoding methods corresponding to these image signal encoding methods can also be realized. Further, by applying these methods, an encoding apparatus and a decoding apparatus can be realized.
- FIG. 1 is a block diagram showing a configuration of an embodiment of an image signal encoding device according to the present invention.
- FIG. 2 is a block diagram showing a configuration example of the downsampling circuit 301 of FIG.
- FIG. 3 is a block diagram showing a configuration example of the upsampling circuit 302 of FIG.
- FIG. 4 is a block diagram showing a configuration of an embodiment of an image signal decoding device according to the present invention.
- FIG. 5 is a diagram illustrating a first processing example in the downsampling circuit 301 of FIG.
- FIG. 6 is a diagram illustrating horizontal downsampling in the embodiment of FIG. 5.
- FIG. 7 is a diagram illustrating vertical downsampling in the embodiment of FIG. 5.
- FIG. 8 is a diagram illustrating an upsampling process corresponding to the downsampling of FIG. 6.
- FIG. 9 is a diagram illustrating an upsampling process corresponding to the downsampling of FIG. 7.
- FIG. 10 is a view for explaining a second processing example in the downsampling circuit 301 of FIG.
- FIG. 11 is a view for explaining downsampling in the vertical direction in the embodiment of FIG.
- FIG. 12 is a diagram for explaining an upsampling process corresponding to the downsampling of FIG.
- FIG. 13 is a view for explaining a third processing example in the downsampling circuit 301 of FIG.
- FIG. 14 is a diagram illustrating the downsampling in the vertical direction in the embodiment of FIG.
- FIG. 15 is a diagram for explaining upsampling corresponding to the downsampling of FIG.
- FIG. 16 is a diagram illustrating a fourth processing example in the downsampling circuit 301 of FIG.
- FIG. 17 is a diagram illustrating the downsampling in the vertical direction in the embodiment of FIG.
- FIG. 18 is a view for explaining upsampling corresponding to the downsampling of FIG.
- FIG. 19 is a diagram illustrating another processing example of the downsampling in the vertical direction in the embodiment of FIG. 16.
- FIG. 20 is a diagram illustrating upsampling corresponding to the downsampling of FIG.
- FIG. 21 is a diagram for explaining a method of manufacturing a disk on which data encoded by the image signal encoding method according to the present invention is recorded.
- FIG. 22 is a diagram illustrating the principle of high-efficiency coding.
- FIG. 23 is a diagram illustrating a picture type for encoding an image.
- FIG. 24 is a diagram for explaining the principle of encoding a continuous moving image signal.
- FIG. 25 is a block diagram showing a configuration of a conventional video encoding device and a conventional decoding device.
- FIG. 26 is a diagram illustrating the structure of image data.
- FIG. 27 is a block diagram showing a configuration example of the encoder 18 in FIG. 25.
- FIG. 28 is a diagram for explaining the operation of the prediction mode switching circuit 52 of FIG.
- FIG. 29 is a diagram illustrating the operation of the DCT mode switching circuit 55 of FIG.
- FIG. 30 is a block diagram showing a configuration example of the decoder 31 in FIG. 25.
- FIG. 31 is a block diagram showing a configuration example of a conventional downsampling circuit.
- FIG. 32 is a block diagram showing a configuration example of a conventional upsampling circuit.
- FIG. 33 is a diagram for explaining the operation of the interpolation circuit 95 of FIG. 32.

BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a block diagram showing the configuration of an embodiment of an image signal encoding device (encoder) of the present invention. In this embodiment, hierarchical coding is performed. In the figure, blocks denoted by reference numerals in the 100s are blocks that process the low-resolution image signal, and blocks denoted by reference numerals in the 200s are blocks that process the high-resolution image signal. Each block that performs the processing of each layer has basically the same configuration as the encoder shown in FIG. 27, and the last two digits of the 100s and 200s reference numerals match the two-digit reference numerals of the corresponding functional blocks in FIG. 27.
- although the prediction mode switching circuit 52 and the DCT mode switching circuit 55 of FIG. 27 are omitted from FIG. 1 for simplicity, corresponding circuits are also provided in the embodiment of FIG. 1.
- likewise, although the prediction determination circuit 54 of FIG. 27 is not shown in FIG. 1 for simplicity, the arithmetic unit 153 in FIG. 1 also includes this prediction determination circuit 54.
- first, a high-resolution image 201 is prepared as the input image. This is converted into a 1/4 resolution image 101 via the downsampling circuit 301 for hierarchical coding. As shown in FIG. 2, the downsampling circuit 301 includes a low-pass filter 901 for band limitation and a thinning circuit 902 for thinning out data. The operation of the downsampling circuit 301 will be described later in detail.
- the processing of the 1/4 resolution image 101 is basically the same as that shown in FIG. 27, and will be described briefly.
- the 1/4 resolution image 101 is input to the motion vector detection circuit 150. The input image is read from the frame memory 151 in macroblock units according to a preset picture sequence (the processing order of I pictures, P pictures, B pictures, and so on), and a motion vector is detected between the reference image thus read and the forward original image, the backward original image, or both.
- the arithmetic unit 153 determines the prediction mode of the reference block based on the sums of absolute values of the inter-frame differences calculated in block units by the motion vector detection circuit 150. The prediction mode determined by the arithmetic unit 153 is supplied to the variable length coding circuit (VLC) 158.
- the arithmetic unit 153 switches between the intra coding mode, the forward prediction mode, the backward prediction mode, and the bidirectional prediction mode for each block. In the intra coding mode, the input image data is passed on as it is; in the other modes, inter-frame coded data (difference data) with respect to the corresponding predicted image is generated by the arithmetic unit 153.
- the DCT circuit 156 uses the two-dimensional correlation of the image signal to perform a discrete cosine transform of the input image data or difference data in block units, and outputs the resulting transform data to the quantization circuit (Q) 157.
- the quantization circuit 157 quantizes the DCT transform data with a quantization step size determined for each macroblock and slice, and outputs the resulting quantized data to the variable length coding (VLC) circuit 158 and the inverse quantization circuit (IQ) 160. The quantization scale used for quantization is determined by feeding back the amount of data remaining in the transmission buffer memory 159 so that the transmission buffer memory 159 does not overflow. This quantization scale is also supplied from the quantization circuit 157 to the variable-length coding circuit 158 and the inverse quantization circuit 160 together with the quantized data.
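The buffer feedback can be illustrated with a minimal sketch (the linear mapping from buffer fullness to quantization scale is an assumption for illustration only; the actual control law is not specified in this description):

```python
def quantization_scale(buffer_occupancy, buffer_size, q_min=1, q_max=31):
    """Map transmission-buffer fullness to a quantization scale:
    the fuller the buffer, the coarser the quantization."""
    fullness = buffer_occupancy / buffer_size
    q = round(q_min + fullness * (q_max - q_min))
    return max(q_min, min(q_max, q))

print(quantization_scale(900, 1000))  # near-full buffer -> coarse scale
```

A coarser scale reduces the bit rate of the quantized data, allowing the buffer to drain.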
- the VLC circuit 158 performs variable-length coding on the quantized data together with the quantization scale, macroblock type (prediction mode), and motion vector, and supplies the result to the transmission buffer memory 159 as transmission data.
- the bit stream 109 of the 1/4 resolution image is transmitted in the order <prediction mode>, <motion vector>, <DCT coefficients>.
- the inverse quantization circuit 160 inversely quantizes the quantized data transmitted from the quantization circuit 157 back to representative values, and supplies the result to the inverse discrete cosine transform (IDCT) circuit 161.
- the IDCT circuit 161 converts the inversely quantized data from the inverse quantization circuit 160 into decoded image data by a transform process inverse to that of the DCT circuit 156, and this data is output to the frame memory 163 via the arithmetic unit 162.
- the motion compensation circuit (MC) 164 moves the data stored in the frame memory 163 based on the macroblock type, motion vector, Frame / Field prediction flag, and Frame / Field DCT Flag. Compensation is performed to generate a predicted image signal.
- the arithmetic unit 162 adds the predicted image signal and the output data (difference data) of the IDCT circuit 161, and performs local decoding.
- the decoded image is written into the frame memory 163 as a forward predicted image or a backward predicted image. In the frame memory 163, bank switching is performed as necessary, so that a single bank can be output as a forward predicted image or as a backward predicted image depending on the image to be encoded.
- for inter-coded images, the difference from the predicted image is sent as the output of the IDCT circuit 161, and local decoding is performed by the arithmetic unit 162 adding this difference to the predicted image output from the motion compensation circuit 164. This predicted image is exactly the same image as the image decoded by the decoder, and forward, backward, or bidirectional prediction of the next processed image is performed based on it.
- the description so far concerns the encoding procedure for the 1/4 resolution image. In hierarchical coding, the output of the arithmetic unit 162 described above is also used as the spatial predicted image signal of the high-resolution image signal; it is supplied to the upsampling circuit 302 on the high-resolution side and used for prediction.
- next, the encoding of the high-resolution image will be described. This encoding procedure is exactly the same as that for the 1/4 resolution image, except for the procedure for generating the predicted image signal.
- the high-resolution image 201 is supplied to a calculation unit 253 via a motion vector detection circuit 250.
- the arithmetic unit 253 performs, in addition to intra coding, forward, backward, or bidirectional prediction by motion compensation from the frame memory 263, or prediction from the 1/4 resolution image.
- the image data output from the arithmetic unit 162 is interpolated by the upsampling circuit 302 to the same resolution as the high-resolution image. The upsampling circuit 302 is constituted by, for example, an interpolation circuit 903 as shown in FIG. 3. Details of its operation will be described later.
- the interpolated image generated by the upsampling circuit 302 is input to the weighting circuit 303.
- in the weighting circuit 303, the output of the upsampling circuit 302 is multiplied by the weight (1 − W). This is used as the first predicted image signal.
- meanwhile, a temporal predicted image is output from the motion compensation circuit 264 in accordance with forward, backward, or bidirectional motion compensation, and in the weighting circuit 306 it is multiplied by the weight W. This is used as the second predicted image signal.
- the first and second predicted image signals are added by the adder 304 or 305 to generate a third predicted image signal, and the arithmetic unit 253 performs prediction using this third predicted image signal.
- the weight W is determined by the weight determination circuit 307 so that the prediction efficiency of the third predicted image signal is maximized.
- the weight W is supplied to a variable length coding circuit (VLC) 258 and coded and transmitted.
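The weighted combination of the two predictions can be sketched as follows (the SAD criterion and the discrete candidate set for W are assumptions for illustration; the text states only that W is chosen so that the prediction efficiency of the third predicted image signal is maximized):

```python
def third_prediction(spatial, temporal, w):
    """Third predicted image: (1 - W) * spatial + W * temporal, per pixel."""
    return [(1.0 - w) * s + w * t for s, t in zip(spatial, temporal)]

def choose_weight(block, spatial, temporal,
                  candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Pick the W whose combined prediction minimizes the absolute residual
    (sum of absolute differences) against the block being encoded."""
    def sad(w):
        pred = third_prediction(spatial, temporal, w)
        return sum(abs(x - p) for x, p in zip(block, pred))
    return min(candidates, key=sad)
```

A smaller residual after subtraction of the third prediction means fewer bits after DCT and quantization, which is the sense in which W maximizes prediction efficiency.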
- in this way, the arithmetic unit 253 can obtain higher prediction efficiency by using the 1/4 resolution image as a spatial predicted image in addition to conventional motion compensation, and the prediction mode is determined on that basis.
- the determined prediction mode is supplied to the variable-length coding circuit 258 and coded and transmitted.
- the difference data with respect to the predicted image is supplied to the DCT circuit 256.
- the other processes are the same as those for encoding the 1/4 resolution image.
- the bit stream 209 of the high-resolution image is transmitted in the order <prediction mode>, <motion vector>, <weight W>, <DCT coefficients>.
- FIG. 4 shows a block diagram of the decoder for the hierarchically encoded data.
- reference numerals in the 400s represent blocks for decoding the 1/4 resolution image signal, and reference numerals in the 500s represent blocks for decoding the high-resolution image signal. The basic operation of each block is the same as that shown in FIG. 30, and the last two digits of each reference numeral match the two-digit reference numeral of the corresponding block in FIG. 30.
- the 1/4 resolution bit stream 401 (corresponding to the output 109 of the transmission buffer 159 in FIG. 1) is decoded in the same way as before. That is, the 1/4 resolution bit stream 401 is input via a transmission medium (for example, the recording medium 3 in FIG. 25).
- this bit stream is input to the variable length decoding (IVLC) circuit 482 via the reception buffer 481.
- the variable-length decoding circuit 482 decodes the quantized data, motion vector, macroblock type, quantization scale, Frame/Field prediction flag, and Frame/Field DCT flag from the bit stream.
- the quantized data and the quantization scale are input to the next inverse quantization circuit (IQ) 483, and decoding proceeds in the same manner as in FIG. 30, yielding the 1/4 resolution image 408.
- the decoded image is stored in the frame memory 486 for prediction of the next image. The image stored in the frame memory 486 is also used as the spatial predicted image signal for decoding the high-resolution image, and is input to the upsampling circuit 602 of the high-resolution decoding device.
- this upsampling circuit 602, like the upsampling circuit 302 of FIG. 1, interpolates the 1/4 resolution image to the same resolution as the high-resolution image. The data upsampled by the upsampling circuit 602 is supplied to the weighting circuit 603 and multiplied by (1 − W). This is the first predicted image signal for the high-resolution decoding device.
- in the high-resolution decoding device, decoding is performed through exactly the same processing as for the 1/4 resolution image signal. That is, the high-resolution bit stream 501 is input via the transmission medium. This bit stream is input to the variable length decoding (IVLC) circuit 582 via the reception buffer 581 and is decoded.
- the variable-length decoded data is processed by the inverse quantization circuit (IQ) 583, the IDCT circuit 584, the arithmetic unit 585, the frame memory 586, and the motion compensation circuit 587. The output from the motion compensation circuit 587 is input to the weighting circuit 604 and multiplied by the weighting coefficient W to form the second predicted image signal.
- the second predicted image signal and the first predicted image signal from the 1/4 resolution image described above are added by the adder 605 to form the third predicted image signal for the high-resolution decoding device.
- this third predicted image signal is added, in the arithmetic unit 585, to the difference data output from the IDCT circuit 584, and the original high-resolution image is decoded.
- This high-resolution image 508 is simultaneously stored in the frame memory 586 for prediction of the next image.
- the weight W used here is the same as that used in the weighting circuits in FIG. 1, and is obtained from the IVLC circuit 582 by decoding the bit stream 209 (501).
- the decoding of the high-resolution image is performed as described above.
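The decoder-side reconstruction described above can be summarized in a sketch (the function name is an assumption; per-pixel arithmetic on flat lists is used for illustration):

```python
def reconstruct_high_res(residual, spatial_pred, temporal_pred, w):
    """Sketch of the arithmetic unit 585: the third prediction
    (1 - W) * spatial + W * temporal is added to the IDCT residual."""
    return [r + (1.0 - w) * s + w * t
            for r, s, t in zip(residual, spatial_pred, temporal_pred)]

print(reconstruct_high_res([1.0, 2.0], [10.0, 10.0], [20.0, 20.0], 0.5))
```

Because the encoder formed its residual against the same third prediction with the same transmitted W, adding it back recovers the high-resolution image.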
- FIG. 5 shows an example of the relationship between the resolution of the high-resolution image and that of the low-resolution image. In this example, the high-resolution image 201 in FIG. 1 has a resolution of H pixels horizontally and V pixels vertically, and it is converted by the downsampling circuit 301 into a low-resolution image 101 having a resolution of H × 3/8 pixels horizontally and V × 1/2 pixels vertically. This low-resolution image is a so-called squeeze-type image (the image becomes vertically elongated).
- such downsampling is realized by the downsampling circuit having the configuration shown in FIG. 2. That is, the input signal is band-limited by the low-pass filter 901 and then input to the thinning circuit 902, where it is thinned to 3/8 horizontally and 1/2 vertically.
- FIG. 6 shows the principle of horizontal thinning of the downsampling circuit 301 (thinning circuit 902).
- the coefficients (1, 1/3, 2/3) shown in FIG. 6 are expressed using the reciprocals of the ratios of the distances between the output pixel position and the adjacent input pixels. The value of an output pixel is obtained by multiplying the values of the adjacent input pixels by these coefficients and summing the results. When the position of an input pixel coincides with the position of an output pixel, the value of the input pixel becomes the value of the output pixel as it is.
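This horizontal 3/8 thinning amounts to resampling each output pixel at input position j × 8/3 by distance-weighted (linear) interpolation between the two nearest input pixels; a sketch (the function name and the handling of row lengths that are not multiples of 8 are assumptions):

```python
from math import floor

def decimate_horizontal_3_8(row):
    """3/8 horizontal thinning: output pixel j sits at input position j*8/3
    and is a distance-weighted average of the two nearest input pixels."""
    n_out = len(row) * 3 // 8
    out = []
    for j in range(n_out):
        pos = j * 8.0 / 3.0
        i = floor(pos)
        frac = pos - i                       # distance to the left input pixel
        if frac == 0.0:
            out.append(row[i])               # positions coincide: copy directly
        else:
            out.append((1.0 - frac) * row[i] + frac * row[i + 1])
    return out

# On a linear ramp the resampled values stay on the ramp:
print(decimate_horizontal_3_8([0, 3, 6, 9, 12, 15, 18, 21]))
```

The weights that appear, 1/3 and 2/3 (and 1 at coinciding positions), are exactly the coefficients of FIG. 6.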
- FIG. 7 shows the principle of vertical thinning in the downsampling circuit 301 (thinning circuit 902). In consideration of the interlaced structure of the image, the lines are thinned to 1/2 in the vertical direction. The coefficients (1/2, 1/2) shown in FIG. 7 are likewise expressed using the reciprocals of the ratios of the distances between the output pixel position and the adjacent input pixels, and the value of an output pixel is obtained by multiplying the values of the adjacent input pixels by these coefficients and summing the results. Every other line is thinned out: the input pixels a and b, c and d, e and f, and so on, on vertically adjacent lines are each multiplied by the coefficient 1/2 and added to form the output pixels A, B, C, and so on.
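A sketch of this vertical 2:1 thinning, applied to one column of one field (the function name is an assumption):

```python
def decimate_vertical_1_2(column):
    """Vertical 2:1 thinning: vertically adjacent pixels a,b / c,d / ...
    are each weighted by 1/2 and summed to form A, B, ..."""
    return [(column[i] + column[i + 1]) / 2.0
            for i in range(0, len(column) - 1, 2)]

print(decimate_vertical_1_2([10.0, 20.0, 30.0, 50.0]))  # [15.0, 40.0]
```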
- the interpolation circuit 903 of FIG. 3, which constitutes the upsampling circuits 302 and 602 of FIGS. 1 and 4, interpolates the image by 8/3 horizontally and by 2 vertically.
- FIG. 8 shows the principle of the 8/3 interpolation in the horizontal direction. The coefficients (1, 7/8, 6/8, 5/8, 4/8, 3/8, 2/8, 1/8) shown in FIG. 8 are expressed using the reciprocals of the ratios of the distances between the output pixel position and the adjacent input pixels. The value of an output pixel is obtained by multiplying the values of the adjacent input pixels by these coefficients and summing the results. When the position of an input pixel coincides with the position of an output pixel, the value of the input pixel becomes the value of the output pixel as it is.
- for example, the input pixel a is used as the output pixel A as it is. The output pixel B is obtained by adding the value of the input pixel a multiplied by the coefficient 5/8 and the value of the input pixel b multiplied by 3/8. The subsequent output pixels are interpolated in the same way.
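A sketch of the 8/3 horizontal interpolation, resampling output pixel j at input position j × 3/8 (the function name and the simplified right-edge handling are assumptions):

```python
from math import floor

def interpolate_horizontal_8_3(row):
    """8/3 horizontal interpolation: output pixel j sits at input position
    j*3/8 and is a distance-weighted average of the two nearest inputs."""
    out = []
    j = 0
    while j * 3.0 / 8.0 <= len(row) - 1:     # stop at the last input pixel
        pos = j * 3.0 / 8.0
        i = floor(pos)
        frac = pos - i
        if frac == 0.0:
            out.append(row[i])               # e.g. output A = input a
        else:                                # e.g. output B = 5/8*a + 3/8*b
            out.append((1.0 - frac) * row[i] + frac * row[i + 1])
        j += 1
    return out

print(interpolate_horizontal_8_3([8.0, 16.0, 24.0]))
```

The weights generated this way run through the coefficient set (1, 7/8, 6/8, ..., 1/8) of FIG. 8.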
- FIG. 9 shows the principle of the ×2 interpolation in the vertical direction. The interpolation is doubled in the vertical direction taking the interlaced structure of the image into account. The coefficients (1, 3/4, 1/2, 1/4) shown in FIG. 9 are expressed using the reciprocals of the ratios of the distances between the output pixel position and the adjacent input pixels, and the value of an output pixel is obtained by multiplying the values of the adjacent input pixels by these coefficients and summing the results. When the position of an input pixel coincides with the position of an output pixel, the value of the input pixel becomes the value of the output pixel as it is. That is, in the first field F1, the input pixels a, b, c, and so on are used directly as the output pixels A, C, E, and so on, and an interpolated pixel such as the output pixel D is generated by adding the values of the input pixels b and c each multiplied by 1/2. In the second field F2, the output pixel D is generated, for example, by adding the value of the input pixel b multiplied by 3/4 and the value of the input pixel c multiplied by 1/4.
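The field-dependent weights can be illustrated as follows (a sketch treating each interpolated line as a weighted average of the two nearest decoded lines; the F1 weights of 1/2, 1/2 and the F2 weights of 3/4, 1/4 are taken from the description above, and the function name is an assumption):

```python
def interp_line(b, c, frac):
    """Weighted average of two decoded lines b and c;
    frac is the normalized distance of the output line from b."""
    return (1.0 - frac) * b + frac * c

# First field F1: the output pixel D lies midway between lines b and c.
d_f1 = interp_line(20.0, 40.0, 0.5)
# Second field F2: the interlace offset puts D nearer to b (weights 3/4, 1/4).
d_f2 = interp_line(20.0, 40.0, 0.25)
print(d_f1, d_f2)  # 30.0 25.0
```

The differing weights compensate for the half-line vertical offset between the two fields of an interlaced frame.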
- In this way, a high-resolution image of 1920 pixels horizontally and 960 pixels vertically is converted into a low-resolution image of 720 pixels horizontally and 480 pixels vertically, and a high-resolution image of 1920 pixels horizontally and 1152 pixels vertically is converted into a low-resolution image of 720 pixels horizontally and 576 pixels vertically.
- This allows, for example, an image of an HDTV system with a 16:9 aspect ratio to be monitored at low resolution on an NTSC receiver with an aspect ratio of 4:3.
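The pixel counts above follow from down-conversion factors of 3/8 horizontally and 1/2 vertically, the inverses of the 8/3 and 2x up-sampling factors. A quick arithmetic check (the function name is illustrative):

```python
def down_size(width, height):
    """Apply the down-conversion factors of this embodiment:
    3/8 horizontally and 1/2 vertically."""
    return (width * 3 // 8, height // 2)

# 16:9 HDTV rasters reduced to NTSC- and PAL-rate low-resolution images
print(down_size(1920, 960))    # (720, 480)
print(down_size(1920, 1152))   # (720, 576)
```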
- FIG. 10 shows another example of the relationship between the resolutions of the high-resolution image and the low-resolution image.
- the high-resolution image in this example has a resolution of H horizontal pixels and V vertical pixels
- The low-resolution image has a resolution of H x 3/8 horizontal pixels and V x 7/15 vertical pixels.
- This low-resolution image is also a squeeze-type image.
- FIG. 11 shows the principle of thinning out by 7/15 in the vertical direction.
- The output pixel z can be obtained by weighting the input pixels x and y by the inverse ratio of their distances a and b from the output pixel position, as in the following equation: z = (b·x + a·y) / (a + b).
- Conversely, in the up-sampling, 15 output pixels may be generated from 7 input pixels in the same manner.
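A minimal sketch of the 7/15 vertical thinning, assuming the inverse-distance weighting z = (b·x + a·y)/(a + b) between the two input pixels nearest each output position (the function name and list representation are illustrative):

```python
def thin_15_to_7(col):
    """Thin 15 input pixels to 7 outputs spaced 15/7 apart, weighting
    the two nearest input pixels x and y by the inverse ratio of their
    distances a and b to the output position."""
    out = []
    for k in range(7):
        pos = k * 15 / 7
        i = int(pos)
        a = pos - i                      # distance to x = col[i]
        b = 1 - a                        # distance to y = col[i + 1]
        if a == 0:
            out.append(col[i])           # exact hit: copy the input pixel
        else:
            out.append((b * col[i] + a * col[i + 1]) / (a + b))
    return out
```

On a linear ramp the interpolation is exact, so the 7 outputs simply resample the ramp at spacing 15/7.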
- The embodiment of FIG. 10 can be applied to the case where a high-resolution image of 1920 horizontal pixels and 1035 vertical pixels is converted to a low-resolution image of 720 horizontal pixels and 483 vertical pixels. If down-sampling by 7/15 is performed on the 1035 vertical pixels, the number of lines obtained may fall short of 483; in that case, processing such as adding several lines of image is performed.
- FIG. 13 shows still another embodiment.
- A high-resolution image of H horizontal pixels and V vertical pixels is converted to a low-resolution image having a resolution of H x 3/8 horizontal pixels and V x 3/8 vertical pixels.
- Since the high-resolution image is down-sampled at the same ratio in the horizontal and vertical directions, the low-resolution image has the same aspect ratio as the high-resolution image.
- For this reason, this low-resolution image is processed as a letterbox-type image; that is, the required number of lines carrying no image are added above and below the transmitted image when it is displayed.
- Down-sampling by 3/8 in the horizontal direction can be realized according to the inverse-distance weighting principle described above.
- Down-sampling by 3/8 in the vertical direction can be realized, for example, as follows. That is, in this embodiment, in the first field F1, input pixel a is output as output pixel A as it is; output pixel B is generated by adding the value of input pixel c multiplied by 1/3 to the value of input pixel d multiplied by 2/3; and output pixel C is generated by adding the value of input pixel f multiplied by 1/3 to the value of input pixel g multiplied by 2/3.
- In the second field F2, the output pixel A is generated by multiplying input pixel a by 1/6 and input pixel b by 5/6 and adding the two. The output pixel B is generated by multiplying input pixel d by 1/2 and input pixel e by 1/2 and adding the two. Further, the output pixel C is generated by adding the value of input pixel g multiplied by 5/6 to the value of input pixel h multiplied by 1/6. In this way, three lines of data are generated from eight lines of data.
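The two field rules can be sketched as follows. Note that the F2 middle weights (1/2 and 1/2) are an assumption, chosen so that the weights sum to one, and the function name is illustrative:

```python
def field_lines_8_to_3(lines, first_field=True):
    """Produce 3 output lines from 8 input lines (a..h) of one field,
    using the per-field coefficients described for the 3/8 vertical
    down-sampling; the F2 middle weights (1/2, 1/2) are assumed."""
    a, b, c, d, e, f, g, h = lines
    if first_field:
        # F1: A = a, B = c/3 + 2d/3, C = f/3 + 2g/3
        return [a, c / 3 + 2 * d / 3, f / 3 + 2 * g / 3]
    # F2: A = a/6 + 5b/6, B = (d + e)/2, C = 5g/6 + h/6
    return [a / 6 + 5 * b / 6, (d + e) / 2, 5 * g / 6 + h / 6]
```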
- Up-sampling by a factor of 8/3 in the horizontal direction can be realized according to the principle shown in FIG. 8, as described above.
- Up-sampling by 8/3 in the vertical direction can be realized, for example, as follows. That is, in the first field F1, the input pixel a is used as it is as the output pixel A; output pixel B is generated by adding the value of input pixel a multiplied by 5/8 to the value of input pixel b multiplied by 3/8; and output pixel C is generated by adding the value of input pixel a multiplied by 2/8 to the value of input pixel b multiplied by 6/8.
- Similarly, the data of each subsequent output pixel is generated by adding the values of the adjacent input pixels multiplied by the illustrated coefficients, or, where an output pixel coincides with an input pixel, by taking the input pixel value multiplied by the predetermined coefficient. In this way, eight lines of data are generated from three lines of data.
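The first three F1 output lines of the 8/3 vertical up-sampling depend only on input lines a and b; as a sketch (the function name is illustrative):

```python
def first_three_of_eight(a, b):
    """First three of the eight output lines generated from three input
    lines in field F1 of the 8/3 vertical up-sampling; the remaining
    outputs follow the same adjacent-line weighting."""
    A = a                           # positions coincide: copy the input
    B = 5 * a / 8 + 3 * b / 8
    C = 2 * a / 8 + 6 * b / 8
    return [A, B, C]
```

For a = 8 and b = 16 this gives [8, 11, 14], consistent with the horizontal coefficients of FIG. 8.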
- a high-resolution image of 1920 pixels horizontally and 960 pixels vertically is down-sampled to a low-resolution image of 720 pixels horizontally and 360 pixels vertically.
- FIG. 16 shows still another embodiment.
- A high-resolution image of H horizontal pixels and V vertical pixels is converted into a low-resolution image of H x 3/8 horizontal pixels and V x 7/20 vertical pixels.
- This low-resolution image is a letterbox format image.
- The output pixel z can be obtained from the distances a and b to the input pixels x and y by the following equation: z = (b·x + a·y) / (a + b).
- In this way, seven lines of data are generated from 20 lines in each of the first field F1 and the second field F2.
- Up-sampling by 20/7 in the vertical direction can be realized in the same manner: according to the distances a and b from the output pixel position to the input pixels x and y, the output pixel z is obtained by the equation z = (b·x + a·y) / (a + b).
- The embodiment shown in FIG. 16 can be applied to the case where a high-resolution image of 1920 pixels horizontally and 1035 pixels vertically is converted into a low-resolution image of 720 pixels horizontally and 358 pixels vertically. If down-sampling by 7/20 is performed on the 1035 vertical pixels, the number of lines obtained exceeds 358; in that case, processing such as deleting the excess lines is performed.
- FIG. 19 illustrates the principle of down-sampling by 1/3. As shown in the figure, in each of the first field F1 and the second field F2, one line is extracted out of every three lines, thereby realizing 1/3 down-sampling in the vertical direction.
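Because this variant is pure extraction, it reduces to a slice per field; a one-line sketch (the function name is illustrative):

```python
def thin_by_extraction(field_lines):
    """1/3 vertical down-sampling by extraction (FIG. 19): keep every
    third line of the field, with no weighting arithmetic at all."""
    return field_lines[::3]
```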
- In this case, the vertical up-sampling on the decoding side must be by a factor of 3. This, too, can be achieved by multiplying by predetermined coefficients and weighting, so that in each of the fields F1 and F2 three lines of data are generated from one line of data.
- Since the down-sampling requires no weighting computation, the configuration can be simplified and a low-cost device can be realized.
- This embodiment can be applied to the case where a high-resolution image of 1920 pixels wide and 1035 pixels high is converted into a low-resolution image of 720 pixels wide and 345 pixels high. If 1/3 vertical down-sampling performed on the 1035 vertical pixels yields more lines than required, processing such as deleting lines at the top or bottom of the screen is performed.
- In this way, a high-resolution image and a low-resolution image can each be encoded, transmitted, and decoded.
- the high-resolution image and the low-resolution image are recorded on the optical disk.
- FIG. 21 shows a method of manufacturing such a disk. That is, a master made of, for example, glass is prepared, and a recording material such as a photoresist is applied onto it. A master for recording is thereby produced.
- Meanwhile, the bit stream including the high-resolution image data and the low-resolution image data is recorded once on a magnetic tape or the like according to a predetermined format, thereby producing the software.
- The master is then developed, and pits appear on the master.
- The master prepared in this way is subjected to a process such as electroforming to produce a metal master onto which the pits on the glass master are transferred. From this metal master, a metal stamper is further manufactured and used as a molding die.
- A material such as PMMA (acrylic) or PC (polycarbonate) is injected into the molding die by, for example, injection molding and solidified. Alternatively, a 2P (ultraviolet-curable) resin may be used. In either case, the pits on the metal stamper are transferred onto a replica made of the resin.
- On the replica, a reflective film is then formed by vapor deposition or sputtering, or alternatively by spin coating.
- As described above, according to the image signal encoding method and the image signal encoding apparatus of the present invention, a high-resolution image signal is encoded so as to have a different aspect ratio, transmitted, and decoded. This makes it possible to monitor a high-definition, high-resolution image signal using, for example, a low-resolution NTSC receiver. Of course, with an HDTV receiver, the high-resolution image can be viewed as it is.
- Further, if an NTSC receiver whose aspect ratio is set to 16:9 is provided with a function for restoring a squeeze-format image having an aspect ratio of 4:3 to the original 16:9 image, an image of normal proportions can be observed.
- When a low-resolution image is transmitted as a letterbox image, it can be observed with the correct aspect ratio even on an NTSC-type receiver with an aspect ratio of 4:3.
- On an NTSC-type receiver having an aspect ratio of 16:9, the image can be displayed enlarged over the entire screen with substantially no blank lines inserted above and below.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Television Signal Processing For Recording (AREA)
- Television Systems (AREA)
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/341,546 US5832124A (en) | 1993-03-26 | 1994-03-28 | Picture signal coding method and picture signal coding apparatus, and picture signal decoding method and picture signal decoding apparatus |
DE1994621868 DE69421868T2 (de) | 1993-03-26 | 1994-03-28 | Verfahren und vorrichtung zur kodierung/dekodierung eines videosignals |
KR1019940704311A KR100311294B1 (ko) | 1993-03-26 | 1994-03-28 | 화상신호부호화방법및화상신호부호화장치및화상신호복호화방법및화상신호복호화장치 |
EP19940910550 EP0643535B1 (en) | 1993-03-26 | 1994-03-28 | Method and apparatus for coding/decoding a video signal |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP5/68785 | 1993-03-26 | ||
JP06878593A JP3374989B2 (ja) | 1993-03-26 | 1993-03-26 | 画像信号符号化方法および画像信号符号化装置、ならびに画像信号復号化方法および画像信号復号化装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1994023535A1 true WO1994023535A1 (en) | 1994-10-13 |
Family
ID=13383734
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1994/000499 WO1994023535A1 (en) | 1993-03-26 | 1994-03-28 | Method and apparatus for coding video signal, and method and apparatus for decoding video signal |
Country Status (7)
Country | Link |
---|---|
US (1) | US5832124A (ja) |
EP (1) | EP0643535B1 (ja) |
JP (1) | JP3374989B2 (ja) |
KR (1) | KR100311294B1 (ja) |
CN (1) | CN1076932C (ja) |
DE (1) | DE69421868T2 (ja) |
WO (1) | WO1994023535A1 (ja) |
Families Citing this family (41)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR970705240A (ko) * | 1995-07-19 | 1997-09-06 | 니시무로 타이조 | 레터박스 변환 장치(letter-box transformation device) |
KR100192270B1 (ko) * | 1996-02-03 | 1999-06-15 | 구자홍 | 에이치디티브이 비데오 디코더 회로 |
US5907372A (en) * | 1996-06-28 | 1999-05-25 | Hitachi, Ltd. | Decoding/displaying device for decoding/displaying coded picture data generated by high efficiency coding for interlace scanning picture format |
AU731425B2 (en) * | 1996-09-09 | 2001-03-29 | Sony Corporation | Picture encoding and decoding |
JP3263807B2 (ja) * | 1996-09-09 | 2002-03-11 | ソニー株式会社 | 画像符号化装置および画像符号化方法 |
AU759558B2 (en) * | 1996-09-09 | 2003-04-17 | Sony Corporation | Picture encoding and decoding |
US6353703B1 (en) | 1996-10-15 | 2002-03-05 | Matsushita Electric Industrial Co., Ltd. | Video and audio coding method, coding apparatus, and coding program recording medium |
CN1474604A (zh) * | 1996-12-06 | 2004-02-11 | 松下电器产业株式会社 | 图像信号传送、编码和解码方法和装置以及光盘记录和再现方法 |
DE69735437T2 (de) * | 1996-12-12 | 2006-08-10 | Matsushita Electric Industrial Co., Ltd., Kadoma | Bildkodierer und bilddekodierer |
JPH10248051A (ja) * | 1997-03-05 | 1998-09-14 | Matsushita Electric Ind Co Ltd | ディジタルデータ送信方法、ディジタルデータ送信装置およびディジタルデータ受信装置 |
JPH10257502A (ja) * | 1997-03-17 | 1998-09-25 | Matsushita Electric Ind Co Ltd | 階層画像符号化方法、階層画像多重化方法、階層画像復号方法及び装置 |
JPH10271504A (ja) * | 1997-03-18 | 1998-10-09 | Texas Instr Inc <Ti> | 画像信号の符号化方法 |
MY129665A (en) * | 1997-04-09 | 2007-04-30 | Matsushita Electric Ind Co Ltd | Image predictive decoding method, image predictive decoding apparatus, image predictive coding method, image predictive coding apparatus, and data storage media |
JPH118856A (ja) * | 1997-06-17 | 1999-01-12 | Mitsubishi Electric Corp | 画像符号化方法及びその装置 |
US6310918B1 (en) * | 1997-07-31 | 2001-10-30 | Lsi Logic Corporation | System and method for motion vector extraction and computation meeting 2-frame store and letterboxing requirements |
CN100459715C (zh) * | 1997-07-31 | 2009-02-04 | 日本胜利株式会社 | 数字视频信号块间预测编码/解码装置及编码/解码方法 |
JP3813320B2 (ja) * | 1997-08-27 | 2006-08-23 | 株式会社東芝 | 動きベクトル検出方法および装置 |
US6094225A (en) * | 1997-12-02 | 2000-07-25 | Daewoo Electronics, Co., Ltd. | Method and apparatus for encoding mode signals for use in a binary shape coder |
KR100252108B1 (ko) * | 1997-12-20 | 2000-04-15 | 윤종용 | Mpeg 압축부호화 및 복호화기를 채용한 디지털 기록 재생장치 및 그 방법 |
KR100281462B1 (ko) * | 1998-03-30 | 2001-02-01 | 전주범 | 격행 부호화에서 이진 형상 신호의 움직임 벡터 부호화 방법 |
JP2940545B1 (ja) * | 1998-05-28 | 1999-08-25 | 日本電気株式会社 | 画像変換方法および画像変換装置 |
JP3937028B2 (ja) * | 1998-09-18 | 2007-06-27 | 富士フイルム株式会社 | 画像変換方法、画像変換装置及び画像変換プログラムを記録した記録媒体 |
US6115072A (en) * | 1999-01-27 | 2000-09-05 | Motorola, Inc. | 16:9 aspect ratio conversion by letterbox method for an MPEG image |
US7020195B1 (en) * | 1999-12-10 | 2006-03-28 | Microsoft Corporation | Layered coding and decoding of image data |
US6445832B1 (en) * | 2000-10-10 | 2002-09-03 | Lockheed Martin Corporation | Balanced template tracker for tracking an object image sequence |
US6765964B1 (en) * | 2000-12-06 | 2004-07-20 | Realnetworks, Inc. | System and method for intracoding video data |
JP3695451B2 (ja) * | 2003-05-28 | 2005-09-14 | セイコーエプソン株式会社 | 画像サイズの変更方法及装置 |
JP2005141722A (ja) * | 2003-10-15 | 2005-06-02 | Ntt Docomo Inc | 画像信号処理方法、画像信号処理装置、及び画像信号プログラム |
US7033355B2 (en) * | 2004-01-15 | 2006-04-25 | Muzzammel Mohiuddin M | Endocervical electrode |
US8358701B2 (en) * | 2005-04-15 | 2013-01-22 | Apple Inc. | Switching decode resolution during video decoding |
US8385427B2 (en) * | 2005-04-15 | 2013-02-26 | Apple Inc. | Reduced resolution video decode |
JP2007172170A (ja) * | 2005-12-20 | 2007-07-05 | Fujitsu Ltd | 画像処理回路及び画像処理方法 |
US20070201833A1 (en) * | 2006-02-17 | 2007-08-30 | Apple Inc. | Interface for defining aperture |
JP2007271700A (ja) * | 2006-03-30 | 2007-10-18 | Fujitsu Ltd | 画像情報送信装置及び画像情報受信装置 |
KR101619972B1 (ko) * | 2008-10-02 | 2016-05-11 | 한국전자통신연구원 | 이산 여현 변환/이산 정현 변환을 선택적으로 이용하는 부호화/복호화 장치 및 방법 |
US8634466B2 (en) * | 2009-03-17 | 2014-01-21 | Freescale Semiconductor, Inc. | Video decoder plus a discrete cosine transform unit |
CN101742320B (zh) * | 2010-01-20 | 2012-09-12 | 李博航 | 图像处理方法 |
IT1403450B1 (it) * | 2011-01-19 | 2013-10-17 | Sisvel S P A | Flusso video costituito da frame video combinati, e procedimento e dispositivi per la sua generazione, trasmissione, ricezione e riproduzione |
GB2527315B (en) * | 2014-06-17 | 2017-03-15 | Imagination Tech Ltd | Error detection in motion estimation |
JP2019519148A (ja) * | 2016-05-13 | 2019-07-04 | ヴィド スケール インコーポレイテッド | ビデオ符号化のための一般化された多重仮説予測(Generalized Multi−Hypothesis Prediction)のためのシステムおよび方法 |
CN111801946A (zh) | 2018-01-24 | 2020-10-20 | Vid拓展公司 | 用于具有降低的译码复杂性的视频译码的广义双预测 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02214389A (ja) * | 1989-02-15 | 1990-08-27 | Kokusai Denshin Denwa Co Ltd <Kdd> | 動画像の縦続的符号化方式 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3779345D1 (de) * | 1986-03-20 | 1992-07-02 | American Telephone & Telegraph | Datenkompression mit listensformation. |
US5068813A (en) * | 1989-11-07 | 1991-11-26 | Mts Systems Corporation | Phased digital filtering in multichannel environment |
KR920704513A (ko) * | 1990-10-09 | 1992-12-19 | 프레데릭 얀 스미트 | 텔레비젼 영상에 상응하는 디지탈 신호 코딩 장치 및 디코딩 장치 |
US5367334A (en) * | 1991-05-20 | 1994-11-22 | Matsushita Electric Industrial Co., Ltd. | Video signal encoding and decoding apparatus |
US5184218A (en) * | 1991-07-03 | 1993-02-02 | Wavephore, Inc. | Bandwidth compression and expansion system |
DE69322769T2 (de) * | 1992-03-03 | 1999-07-22 | Toshiba Kawasaki Kk | Koder für zeitveränderliche bilder |
NL9201594A (nl) * | 1992-09-14 | 1994-04-05 | Nederland Ptt | Systeem omvattende ten minste één encoder voor het coderen van een digitaal signaal en ten minste één decoder voor het decoderen van een gecodeerd digitaal signaal, en encoder en decoder voor toepassing in het systeem. |
US5420891A (en) * | 1993-03-18 | 1995-05-30 | New Jersey Institute Of Technology | Multiplierless 2-band perfect reconstruction quadrature mirror filter (PR-QMF) banks |
-
1993
- 1993-03-26 JP JP06878593A patent/JP3374989B2/ja not_active Expired - Lifetime
-
1994
- 1994-03-28 WO PCT/JP1994/000499 patent/WO1994023535A1/ja active IP Right Grant
- 1994-03-28 KR KR1019940704311A patent/KR100311294B1/ko not_active IP Right Cessation
- 1994-03-28 EP EP19940910550 patent/EP0643535B1/en not_active Expired - Lifetime
- 1994-03-28 DE DE1994621868 patent/DE69421868T2/de not_active Expired - Fee Related
- 1994-03-28 CN CN94190238A patent/CN1076932C/zh not_active Expired - Fee Related
- 1994-03-28 US US08/341,546 patent/US5832124A/en not_active Expired - Lifetime
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02214389A (ja) * | 1989-02-15 | 1990-08-27 | Kokusai Denshin Denwa Co Ltd <Kdd> | 動画像の縦続的符号化方式 |
Non-Patent Citations (1)
Title |
---|
See also references of EP0643535A4 * |
Also Published As
Publication number | Publication date |
---|---|
KR100311294B1 (ko) | 2001-12-15 |
JP3374989B2 (ja) | 2003-02-10 |
DE69421868T2 (de) | 2000-06-15 |
JPH06284413A (ja) | 1994-10-07 |
EP0643535A4 (en) | 1995-12-06 |
CN1076932C (zh) | 2001-12-26 |
US5832124A (en) | 1998-11-03 |
EP0643535B1 (en) | 1999-12-01 |
CN1108462A (zh) | 1995-09-13 |
EP0643535A1 (en) | 1995-03-15 |
DE69421868D1 (de) | 2000-01-05 |
KR950702082A (ko) | 1995-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO1994023535A1 (en) | Method and apparatus for coding video signal, and method and apparatus for decoding video signal | |
JP3381855B2 (ja) | 画像信号符号化方法および画像信号符号化装置、並びに画像信号復号化方法および画像信号復号化装置 | |
JP3092610B2 (ja) | 動画像の復号化方法、該方法が記録されたコンピュータ読みとり可能な記録媒体、及び、動画像の復号化装置 | |
EP0881835B1 (en) | Interlaced video signal encoding and decoding method, using conversion of periodically selected fields to progressive scan frames | |
US5485279A (en) | Methods and systems for encoding and decoding picture signals and related picture-signal records | |
US5504530A (en) | Apparatus and method for coding and decoding image signals | |
JP4863333B2 (ja) | 高分解能静止画像を創出するための方法及び装置 | |
JP3189258B2 (ja) | 画像信号符号化方法および画像信号符号化装置、並びに画像信号復号化方法および画像信号復号化装置 | |
JPH07123447A (ja) | 画像信号記録方法および画像信号記録装置、画像信号再生方法および画像信号再生装置、画像信号符号化方法および画像信号符号化装置、画像信号復号化方法および画像信号復号化装置、ならびに画像信号記録媒体 | |
JPH0937243A (ja) | 動画像符号化装置及び復号装置 | |
JP3532709B2 (ja) | 動画像符号化方法および装置 | |
JP2998741B2 (ja) | 動画像の符号化方法、該方法が記録されたコンピュータ読みとり可能な記録媒体、及び動画像の符号化装置 | |
JP4193252B2 (ja) | 信号処理装置及び方法、信号復号装置、並びに信号符号化装置 | |
JP4526529B2 (ja) | 階層画像を用いる映像信号変換装置 | |
JP3407727B2 (ja) | 記録媒体 | |
JP3552045B2 (ja) | 画像信号記録媒体の記録方法、画像信号記録装置、および、画像信号再生装置 | |
JP3092614B2 (ja) | ディジタル携帯端末 | |
JP3410037B2 (ja) | 復号化方法、復号化装置、および、コンピュータ読み取り可能な記録媒体 | |
JP3407726B2 (ja) | 符号化方法、符号化装置、および、コンピュータ読み取り可能な記録媒体 | |
JP3092613B2 (ja) | 記録媒体 | |
JP3092612B2 (ja) | 動画像の復号化方法、及び、動画像の復号化装置 | |
JP3092611B2 (ja) | 動画像の符号化方法、及び、動画像の符号化装置 | |
JP2004088795A (ja) | 画像信号生成装置および方法、並びに、画像信号再生装置および方法 | |
JP2004147352A (ja) | 画像信号生成装置および方法、並びに、画像信号再生装置および方法 | |
JPH06189293A (ja) | 画像符号化方法、画像復号化方法、画像符号化装置、画像復号化装置及び記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CN KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): DE FR GB |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1994910550 Country of ref document: EP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 08341546 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1994910550 Country of ref document: EP |
|
WWG | Wipo information: grant in national office |
Ref document number: 1994910550 Country of ref document: EP |