WO2016002140A1 - Image encoding method, image decoding method, image encoding device, and image decoding device - Google Patents

Image encoding method, image decoding method, image encoding device, and image decoding device

Info

Publication number
WO2016002140A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
key frame
decoding
reference picture
encoding
Prior art date
Application number
PCT/JP2015/002969
Other languages
French (fr)
Japanese (ja)
Inventor
寿郎 笹井
哲史 吉川
健吾 寺田
Original Assignee
パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ
Priority date
Filing date
Publication date
Priority claimed from JP2015077493A
Application filed by パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ
Publication of WO2016002140A1 publication Critical patent/WO2016002140A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46Embedding additional information in the video signal during the compression process
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • H04N19/58Motion compensation with long-term prediction, i.e. the reference frame for a current frame not being the temporally closest one

Definitions

  • the present invention relates to an image encoding method or an image decoding method.
  • Non-Patent Document 1 High Efficiency Video Coding
  • the predetermined frame is an I frame that is intra (in-screen) encoded in MPEG-4 AVC, HEVC, and the like.
  • JCT-VC: Joint Collaborative Team on Video Coding
  • High Efficiency Video Coding (HEVC) text specification draft 10 (for FDIS & Last Call), JCTVC-L1003-v34, http://phenix.it-sudparis.eu/jct/doc_end_user/documents/12_Geneva/wg11/JCTVC-L1003-v34.zip
  • an object of the present invention is to provide an image encoding method capable of efficiently encoding an image or an image decoding method capable of efficiently decoding an image.
  • an image encoding method according to one aspect is an image encoding method for encoding a plurality of images, including a selection step of selecting a randomly accessible key frame from the plurality of images, and an encoding step of encoding the key frame using inter prediction that refers to a reference picture different from the key frame.
  • An image decoding method according to one aspect is an image decoding method for decoding a plurality of images, including a determination step of determining a randomly accessible key frame from the plurality of images, and a decoding step of decoding the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the present invention can provide an image encoding method capable of efficiently encoding an image or an image decoding method capable of efficiently decoding an image.
  • FIG. 1A is a diagram illustrating an example of an image distribution system.
  • FIG. 1B is a diagram for explaining a reproduction operation in the image decoding apparatus.
  • FIG. 2 is a diagram illustrating an example of a conventional code string.
  • FIG. 3A is a diagram illustrating an example of the operation of the image distribution system.
  • FIG. 4 is a diagram illustrating an example of a code string according to the first embodiment.
  • FIG. 5A is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 5B is a diagram illustrating an example of a conventional code string.
  • FIG. 6A is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 6B is a diagram illustrating an example of a conventional code string.
  • FIG. 7A is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 7B is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 7C is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 8 is a block diagram of the image coding apparatus according to Embodiment 1.
  • FIG. 9 is a flowchart of the image encoding process according to the first embodiment.
  • FIG. 10 is a flowchart of the key frame encoding process according to the first embodiment.
  • FIG. 11 is a flowchart of the image encoding process according to the first embodiment.
  • FIG. 12 is a flowchart of the image encoding process according to the first embodiment.
  • FIG. 13 is a block diagram of an image decoding apparatus according to the second embodiment.
  • FIG. 14 is a flowchart of image decoding processing according to the second embodiment.
  • FIG. 15 is a flowchart of key frame decoding processing according to the second embodiment.
  • FIG. 16 is a diagram illustrating a configuration of a system according to the third embodiment.
  • FIG. 17 is a diagram illustrating the operation of the system according to the third embodiment.
  • FIG. 18 is an overall configuration diagram of a content supply system that implements a content distribution service.
  • FIG. 19 is an overall configuration diagram of a digital broadcasting system.
  • FIG. 20 is a block diagram illustrating a configuration example of a television.
  • FIG. 21 is a block diagram illustrating a configuration example of an information reproducing / recording unit that reads and writes information from and on a recording medium that is an optical disk.
  • FIG. 22 is a diagram illustrating a structure example of a recording medium that is an optical disk.
  • FIG. 23A is a diagram illustrating an example of a mobile phone.
  • FIG. 23B is a block diagram illustrating a configuration example of a mobile phone.
  • FIG. 24 is a diagram showing a structure of multiplexed data.
  • FIG. 25 is a diagram schematically showing how each stream is multiplexed in the multiplexed data.
  • FIG. 26 is a diagram showing in more detail how the video stream is stored in the PES packet sequence.
  • FIG. 27 is a diagram illustrating the structure of TS packets and source packets in multiplexed data.
  • FIG. 28 shows the data structure of the PMT.
  • FIG. 29 is a diagram showing an internal configuration of multiplexed data information.
  • FIG. 30 shows the internal structure of stream attribute information.
  • FIG. 31 is a diagram illustrating steps for identifying video data.
  • FIG. 32 is a block diagram illustrating a configuration example of an integrated circuit that implements the moving picture coding method and the moving picture decoding method according to each embodiment.
  • FIG. 33 is a diagram illustrating a configuration for switching the driving frequency.
  • FIG. 34 is a diagram illustrating steps for identifying video data and switching between driving frequencies.
  • FIG. 35 is a diagram illustrating an example of a look-up table in which video data standards are associated with drive frequencies.
  • FIG. 36A is a diagram illustrating an example of a configuration for sharing a module of a signal processing unit.
  • FIG. 36B is a diagram illustrating another example of a configuration for sharing a module of a signal processing unit.
  • Patent Document 1 discloses a technique for improving encoding efficiency by storing an image for a long period of time and using it as a reference picture.
  • Non-Patent Document 1 provides the long-term reference picture so that the technique described in Patent Document 1 can be used.
  • the decoded image is stored in the frame memory for a long time, which makes it possible for subsequent decoded images to refer to the long-term reference picture.
  • even when the technique of Patent Document 1 is used, when a video is recorded for a long time or a plurality of videos are recorded, there is a problem that, when playback is started from an arbitrary time or when playback is switched among the plurality of videos, either a long wait is needed before playback can start, or the encoding efficiency is not improved.
  • FIG. 1A and 1B are schematic diagrams for explaining a problem to be solved in the present embodiment.
  • FIG. 1A shows a case where videos from a plurality of cameras are appropriately switched and played back by a playback terminal (decoding device).
  • FIG. 1A shows an example in which code strings (BitStream) 103A to 103C are output from the image encoding devices 102A to 102C, which encode, for example, images captured by the cameras 101A to 101C.
  • the code strings 103A to 103C are composed of a frame serving as a decoding start point called a key frame 104 (KeyFrame) and a frame other than the key frame called a normal frame 105.
  • the key frame 104 is a frame (intra prediction frame) that has been subjected to intra-frame prediction encoding.
  • in the image decoding apparatus 106, which is a portable device such as a tablet terminal or a smartphone, there is a restriction on the transmission band.
  • the image decoding apparatus 106 does not receive all of the code strings 103A to 103C transmitted from the plurality of image encoding devices 102A to 102C at the same time; instead, it selects and plays only the stream of the video that the user wants to view (display or play).
  • when the image decoding apparatus 106 performs reproduction in the order of the code string 103A, the code string 103B, and the code string 103C, the image decoding apparatus 106 can switch streams only in units of key frames 104, as shown in FIG. 1B.
  • the key frame 104 is an intra prediction frame (I).
  • I in a picture indicates an I frame (intra prediction frame) in which intra prediction is used
  • P indicates a P frame in which unidirectional prediction in which only one frame is referenced is used.
  • Numerical values in parentheses in the picture indicate processing order (display order).
  • An arrow indicates a reference relationship: the picture at the origin of the arrow is used (referenced) for the prediction process of the picture at the head of the arrow.
  • frame P (3) cannot be decoded without frame P (2), frame P (2) cannot be decoded without frame P (1), and frame P (1) cannot be decoded without frame I (0). Therefore, the video cannot be reproduced starting from frame P (3).
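The dependency chain above can be illustrated with a small Python sketch (frame labels and the dictionary representation are shorthand for the frames and reference arrows in the figure, not part of the patent):

```python
def frames_needed(target, refs):
    """Return the frames that must be decoded before `target`,
    following the reference relation `refs` (frame -> referenced frame)."""
    chain = []
    frame = target
    while frame in refs:          # I frames reference nothing, so the walk stops there
        frame = refs[frame]
        chain.append(frame)
    return list(reversed(chain))  # decoding order: oldest first

# Chain-referenced stream (FIG. 2 style): P(3) -> P(2) -> P(1) -> I(0)
chain_refs = {"P1": "I0", "P2": "P1", "P3": "P2"}
print(frames_needed("P3", chain_refs))   # ['I0', 'P1', 'P2']

# Long-term-reference stream (FIG. 5A style): every P frame refers only to I(0)
lt_refs = {"P1": "I0", "P2": "I0", "P3": "I0"}
print(frames_needed("P3", lt_refs))      # ['I0']
```

The second case shows why the long-term reference structure helps random access: starting playback at P(3) requires decoding only one other frame instead of the whole chain.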
  • an intra prediction frame used as the key frame 104 in the prior art generally has lower encoding efficiency than an inter (inter-screen) prediction frame. This is because an intra-prediction frame cannot compress a moving image using temporal continuity characteristics.
  • the inter prediction frame is a frame encoded using inter prediction with reference to another frame.
  • the ratio of intra prediction frames to the total data amount of the code string 103 is very high. Therefore, it is important to increase the number of playback start points (key frames) while reducing the number of intra prediction frames.
  • the image encoding device 102 generates a code string 103 by encoding a long-time video such as a video taken by a 24-hour monitoring camera, for example.
  • the image decoding apparatus 106 is required to jump to a decoding start point of the code string 103 when reproducing the video. For example, if there is only one key frame 104 per hour, the image decoding device 106 cannot reproduce the target scene without decoding, in the worst case, one hour of data. To prevent this, it is known to appropriately insert key frames 104 as arbitrary playback points.
  • since the key frame 104 using the conventional intra prediction frame has low encoding efficiency as described above, it is known that the ratio of intra prediction frame data to the total data amount becomes large, especially for video recorded over a long time.
  • the above problem is solved by inter-predicting key frames required in both cases to improve the encoding efficiency.
  • an image encoding method or image decoding method will be described that can efficiently generate or decode a code string that shortens the time required for switching between multiple videos, or the time required to start playback of a long video at an arbitrary timing.
  • An image encoding method according to one aspect is an image encoding method for encoding a plurality of images, including a selection step of selecting a randomly accessible key frame from the plurality of images, and an encoding step of encoding the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the encoding efficiency can be improved as compared with the case where the key frame is encoded by the intra prediction.
  • the image encoding method may further include an information encoding step of encoding information for specifying a key frame encoded using the inter prediction from the plurality of images.
  • the reference picture may be a picture that is not adjacent to the key frame in decoding order and display order.
  • the reference picture may be a long term reference picture.
  • in the selection step, a plurality of key frames including the key frame may be selected from the plurality of images, and in the encoding step, a target key frame included in the plurality of key frames may be encoded with reference to another key frame among the plurality of key frames.
  • the reference picture may be an image acquired via a network.
  • the reference picture may be an image encoded by an encoding method different from the key frame.
  • in the encoding step, a background area within the key frame may be determined; for the background area, information for specifying the reference picture may be encoded, while the image information of the background area is not encoded.
  • the data amount of the encoded data in the background area can be reduced.
  • in the encoding step, a similarity between the key frame and the reference picture may be determined, and when the similarity is equal to or greater than a predetermined value, information for specifying the reference picture may be encoded, while the image information is not encoded.
  • the amount of encoded data can be reduced.
  • An image decoding method according to one aspect is an image decoding method for decoding a plurality of images, including a determination step of determining a randomly accessible key frame from the plurality of images, and a decoding step of decoding the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the encoding efficiency can be improved as compared with the case where the key frame is encoded by the intra prediction.
  • the image decoding method may further include an information decoding step of decoding information for specifying, from the plurality of images, a key frame encoded using inter prediction. In the determination step, it may be determined, based on the information, whether the key frame is a key frame encoded using inter prediction, and in the decoding step, the key frame encoded using inter prediction may be decoded using inter prediction.
  • the reference picture may be a picture that is not adjacent to the key frame in decoding order and display order.
  • the reference picture may be a long term reference picture.
  • in the determination step, a plurality of key frames including the key frame may be determined from the plurality of images, and in the decoding step, a target key frame included in the plurality of key frames may be decoded with reference to another key frame among the plurality of key frames.
  • the reference picture may be an image acquired via a network.
  • the reference picture may be an image encoded by an encoding method different from the key frame.
  • in the decoding step, information for specifying the reference picture may be decoded for a background area within the key frame, and the reference picture specified by the information may be output as the image of the background area.
  • the data amount of the encoded data in the background area can be reduced.
  • information for specifying the reference picture may be decoded, and the reference picture specified by the information may be output as the key frame.
  • the amount of encoded data can be reduced.
  • the image decoding method may further include an acquisition step of acquiring, from a storage device storing a plurality of images, an image that matches a shooting situation of a specified image, and a storage step of storing the acquired image as the reference picture.
  • the shooting situation may be the time when the image was shot, the place where the image was shot, or the weather when the image was shot.
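The acquisition step above might be sketched as follows; the class, field names, and scoring weights are hypothetical illustrations of matching a stored image to a shooting situation (time, place, weather), not anything specified by the patent:

```python
from dataclasses import dataclass

@dataclass
class StoredImage:
    name: str
    hour: int        # time of day the image was shot
    place: str       # shooting location
    weather: str     # weather at shooting time

def match_score(img, hour, place, weather):
    # Simple additive score: place matters most, then weather and time of day.
    score = 0
    score += 2 if img.place == place else 0
    score += 1 if img.weather == weather else 0
    score += 1 if abs(img.hour - hour) <= 1 else 0
    return score

def acquire_reference(library, hour, place, weather):
    """Return the stored image best matching the specified shooting situation."""
    return max(library, key=lambda img: match_score(img, hour, place, weather))

library = [
    StoredImage("lobby_day_sunny", 10, "lobby", "sunny"),
    StoredImage("lobby_night", 22, "lobby", "cloudy"),
    StoredImage("gate_day", 10, "gate", "sunny"),
]
ref = acquire_reference(library, hour=21, place="lobby", weather="cloudy")
print(ref.name)   # lobby_night
```

The chosen image would then be stored as the reference picture for decoding, per the storage step above.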
  • An image encoding apparatus according to one aspect is an image encoding apparatus that encodes a plurality of images, including a selection unit that selects a randomly accessible key frame from the plurality of images, and an encoding unit that encodes the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the encoding efficiency can be improved as compared with the case where the key frame is encoded by the intra prediction.
  • An image decoding device according to one aspect is an image decoding device that decodes a plurality of images, including a determination unit that determines a randomly accessible key frame from the plurality of images, and a decoding unit that decodes the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the encoding efficiency can be improved as compared with the case where the key frame is encoded by the intra prediction.
  • a frame may be described in other words as a picture or an image.
  • a frame (picture or image) to be encoded or decoded may be referred to as a current picture or an access frame.
  • various terms commonly used in the field of codec technology can also be used.
  • an image encoding device and an image encoding method will be described that can appropriately switch among videos from a plurality of cameras, or reproduce a long video from an arbitrary point, while realizing highly efficient encoding.
  • FIG. 4 is a diagram illustrating an example of a code string 250 output from the image encoding device 200.
  • FIG. 4 shows an example of a code string 250 in the present embodiment corresponding to FIG. 2 shown in the above-described conventional configuration.
  • the arrows indicate reference relationships.
  • the meanings of characters and numbers in each picture are the same as those in FIG.
  • conventionally, the key frame 104 is an I frame, but in the present embodiment, a frame other than an I frame may be set as the key frame 104.
  • the frame P (t) that is the key frame 104 is a P frame.
  • this frame P (t) makes a long term reference to the frame I (0).
  • a frame that can be used for long-term reference like the frame I (0) is called a long-term reference picture.
  • this long-term reference picture is stored in the memory of the image encoding device and the image decoding device, and can be referred to at any time. Therefore, the image decoding apparatus can start playback from the frame P (t).
  • a P frame using a long term reference is used as the key frame 104.
  • encoding efficiency can be improved.
  • the key frame 104 can be decoded by decoding only the long-term reference picture.
  • the number of images that need to be decoded when the P frame is set as the key frame 104 can be reduced.
  • FIG. 5A is a diagram showing an example of a code string 250 according to the present embodiment.
  • FIG. 5B is a diagram for comparison and shows an example of a conventional code string 103.
  • a long term reference picture is used when the background is the same video for a long time in a surveillance camera video or the like.
  • the normal frame 105 refers to the long term reference picture, not the immediately preceding frame.
  • in the configuration shown in FIGS. 2 and 4, if the frame P (2) cannot be decoded, the frames from frame P (3) to frame P (t − 1) cannot be decoded either.
  • in the configuration shown in FIGS. 5A and 5B, even if the frame P (2) cannot be decoded, the frame P (2) is not used for prediction of other frames. Therefore, if the frame I (0) has been correctly decoded, the image decoding apparatus can correctly decode the frames from frame P (3) to frame P (t − 1).
  • an intra prediction frame (key frame 104), which is a long-term reference picture, is used periodically.
  • the long term reference picture can be updated as appropriate, so that it is possible to cope with a change in a still area such as a background.
  • in the present embodiment, an inter prediction frame P (t) (key frame 104) that refers only to the long-term reference picture I (0) is used instead of an intra prediction frame. Thereby, encoding efficiency can be improved in the same manner as the structure described above.
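To make the structure concrete, here is an illustrative Python sketch (the periods and labels are invented parameters, not values from the patent) of a stream layout in which an intra long-term reference picture appears periodically and the key frames between them are inter-predicted P frames referring only to the latest long-term picture:

```python
def build_structure(num_frames, lt_period, key_period):
    """Return (index, frame_type, reference) for each frame in display order."""
    structure = []
    last_lt = None
    for i in range(num_frames):
        if i % lt_period == 0:
            structure.append((i, "I-keyframe", None))     # long-term reference picture
            last_lt = i
        elif i % key_period == 0:
            structure.append((i, "P-keyframe", last_lt))  # key frame: refers only to the long-term picture
        else:
            structure.append((i, "P", last_lt))           # normal frame, also using the long-term reference
    return structure

for entry in build_structure(12, lt_period=8, key_period=4):
    print(entry)
```

With these assumed periods, frames 0 and 8 are intra long-term pictures, frame 4 is an inter-predicted key frame, and every key frame is decodable from at most one other picture.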
  • the inter prediction frame generally has better encoding efficiency when referring to a temporally close frame. Therefore, by using only the P frame, it is possible to reduce the processing amount, reduce the delay, and suppress the deterioration of the encoding efficiency.
  • B frames (bi-directional prediction frames) may also be used.
  • FIG. 6A is a diagram illustrating an example of a code string 250 according to the present embodiment.
  • FIG. 6B is a diagram for comparison, and shows an example of a conventional code string 103.
  • the normal frame 105 refers to the long-term reference picture and the immediately preceding frame.
  • in the conventional configuration, the frame B (t), which is a long-term reference picture, cannot be set as the key frame 104 and is instead set as a normal frame 105 in order to keep the B frames consecutive. Even in this case, in order to increase the number of key frames 104, it is necessary to use the intra prediction frame I as the key frame 104, as in FIG. 2 or FIG. 5B.
  • the frame P (t) that is an inter prediction frame is used as the key frame 104.
  • accessibility can be improved while maintaining encoding efficiency.
  • FIG. 7A is a schematic diagram for explaining processing in the image encoding device and the image decoding device according to the present embodiment.
  • the code string shown in FIG. 7A has the same configuration as the code string 250 described above. Therefore, the frames I (0), P (8), and I (16) are treated as key frames 104.
  • the image decoding apparatus can start decoding after a transmission delay. Further, the image decoding apparatus can decode the frame P (8) after decoding the frame I (0), or can start decoding from the frame I (16). When it is desired to create playback points at high frequency, encoding efficiency can be increased by inserting a P-frame key frame 104 such as the frame P (8).
  • the key frame 104 may be stored in the storage 400.
  • the configuration of the code string is the same as that in FIG. 7A.
  • the frame I (16) is the same image as the frame I (0).
  • the code string information regarding the frame I (16) may be parameter information indicating the same as the frame I (0). Thereby, encoding efficiency can be further improved.
  • a frame I (0) indicating background information is stored in the storage 400 provided in the image decoding apparatus.
  • the image decoding apparatus acquires (for example, downloads via the network) the frame I (0) and stores it in the storage.
  • the image decoding apparatus can decode the frame P (8) by referring to the image of the frame I (0) stored in the storage. Thereby, it is possible to add a reproduction point (add a key frame) while improving encoding efficiency.
  • by sharing a reference image (for example, a background image) between the image encoding device and the image decoding device, the videos can be switched, and high-quality video switching can be realized even when a narrow-band network is used.
  • FIG. 7C illustrates a case where images of a plurality of key frames 104, or key frame reference pictures that are reference images of the key frames 104, are stored in the storage 400, or can be accessed via a cloud or a network.
  • a B frame is set as the key frame 104.
  • the image decoding apparatus decodes the frame B (8), which is the key frame 104, using a plurality of key frame reference pictures stored in the storage 400 or acquired via a network. Thereby, since the image decoding apparatus can start decoding from the frame B (8), switching among a plurality of camera videos or jump reproduction can be realized.
  • FIGS. 7A and 7B show the transmission delay when data is transmitted from the image encoding device to the image decoding device. When the data is already in storage accessible from the image decoding device in advance, as in FIG. 7C, no transmission delay occurs.
  • a frame described as a B frame may be encoded as a P frame, or a frame described as a P frame may be encoded as a B frame, depending on its relation to the surrounding frames.
  • the storage 400 may not be used.
  • the B frame which is the key frame 104 may refer to only a plurality of past key frames 104, for example.
  • the normal frame 105 refers only to the immediately preceding frame, but the previous two frames may be referred to, or bi-directional prediction or multiple prediction may be used.
  • P frames are used for at least a part of key frames in which I frames are used in the prior art.
  • the key frame is a frame that can be randomly accessed, in other words, a frame that can be reproduced (decoded or displayed) by the image decoding apparatus.
  • information indicating which image is a key frame among a plurality of images included in a video is included in the code string.
  • key frames are inter-predicted by referring only to key frame reference pictures.
  • a key frame reference picture is a picture that is not adjacent to a processing target key frame in decoding order (encoding order) and display order.
  • the key frame reference picture is a long term reference picture.
  • the key frame reference picture is a decoded (encoded) key frame or an image obtained via a network.
  • FIG. 8 is a block diagram showing the configuration of the image coding apparatus 200 according to the present embodiment.
  • the image encoding apparatus 200 generates a code string 250 by encoding the moving image data 251.
  • the image coding apparatus 200 includes a prediction unit 201, a subtraction unit 202, a transform quantization unit 203, a variable length coding unit 204, an inverse quantization inverse transform unit 205, an addition unit 206, and a prediction control unit 207.
  • the prediction unit 201 generates a prediction image 257 based on the target image included in the moving image data 251 and the reference image 256 selected by the selection unit 208, and outputs the generated prediction image 257 to the subtraction unit 202 and the addition unit 206.
  • the prediction unit 201 outputs a prediction parameter 258 that is a parameter used to generate the predicted image 257 to the variable length coding unit 204.
  • the subtraction unit 202 calculates a difference signal 252 that is a difference between the target image included in the moving image data 251 and the predicted image 257, and outputs the calculated difference signal 252 to the transform quantization unit 203.
  • the transform quantization unit 203 transforms and quantizes the difference signal 252 to generate a quantized signal 253, and outputs the generated quantized signal 253 to the variable length coding unit 204 and the inverse quantization inverse transform unit 205.
  • the variable length coding unit 204 generates a code string 250 by performing variable length coding on the quantized signal 253 and the prediction parameter 258 output from the prediction unit 201 and the prediction control unit 207.
  • the prediction parameter 258 includes information indicating a used prediction method, a prediction mode, and a reference picture.
  • the inverse quantization inverse transform unit 205 generates a decoded differential signal 254 by performing inverse quantization and inverse transform on the quantized signal 253, and outputs the generated decoded differential signal 254 to the adder 206.
  • the adding unit 206 generates a decoded image 255 by adding the predicted image 257 and the decoded difference signal 254, and outputs the generated decoded image 255 to the frame memory 212.
  • the frame memory 212 includes a key frame memory 209 in which key frame reference pictures (long-term reference pictures for key frames) are stored, a neighboring frame memory 210 in which other already-decoded, referenceable images are stored, and an in-plane frame memory 211 in which partially decoded portions of the image being encoded are stored. These memories are controlled by the prediction control unit 207, and the reference image necessary for creating the prediction image 257 is output to the selection unit 208.
  • the prediction control unit 207 determines in which memory the reference image stored is to be used based on the moving image data 251.
  • the key frame memory 209 may store not only the decoded image 255 but also image data 259 separately obtained from the outside as a reference image.
  • the frame memory 212 includes three types of memories; however, in an actual implementation, there is no need to provide separate memory spaces, and, for example, all reference images may be stored in the same memory. That is, the frame memory 212 may be configured to output any reference image based on an instruction from the prediction control unit 207.
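A minimal sketch of this shared-memory arrangement (class and method names are hypothetical) shows the three logical stores backed by one memory space, with a selection rule standing in for the prediction control unit 207:

```python
class FrameMemory:
    """One physical store serving as key frame memory, neighboring frame
    memory, and in-plane memory, distinguished only by a logical tag."""
    def __init__(self):
        self.images = {}          # single shared memory space

    def store(self, kind, key, image):
        self.images[(kind, key)] = image

    def fetch(self, kind, key):
        return self.images[(kind, key)]

mem = FrameMemory()
mem.store("keyframe", 0, "decoded I(0)")           # long-term key frame reference
mem.store("neighbor", 7, "decoded P(7)")           # recently decoded frame
mem.store("intra", "partial", "partly decoded current frame")

def select_reference(mem, is_keyframe, current_index):
    # Stand-in for the prediction control: a key frame is predicted only
    # from the key frame memory; a normal frame uses the preceding frame.
    if is_keyframe:
        return mem.fetch("keyframe", 0)
    return mem.fetch("neighbor", current_index - 1)

print(select_reference(mem, is_keyframe=True, current_index=8))   # decoded I(0)
```

The point of the single dict is exactly the note above: the three "memories" need not be physically separate as long as the control unit can request any reference image by tag.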
  • for these processes, the methods described in Non-Patent Document 1 can be used. Other moving image encoding methods may also be used.
  • FIG. 9 is a flowchart of operations related to the prediction control unit 207, the selection unit 208, and the frame memory 212.
  • the prediction control unit 207 selects an image to be set as a key frame from among the plurality of images included in the moving image data 251 (S101). Specifically, the prediction control unit 207 sets a frame located at an access point, which is a randomly accessible point, as a key frame. For example, the frequency of access (switching or jumping) is set in advance, and the prediction control unit 207 sets key frames at intervals according to this frequency. Alternatively, the prediction control unit 207 may set, as a key frame, an image in which an object has moved greatly (the motion in the image is large) or in which an object designated in advance appears. The prediction control unit 207 may also combine both methods.
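The selection rules of step S101 can be sketched as follows; the access period and motion threshold are illustrative assumptions, and the motion values stand in for whatever motion measure an implementation uses:

```python
def is_key_frame(index, motion, access_period=8, motion_threshold=0.5):
    """Select a frame as a key frame if it falls on the preset access
    frequency, or if its motion exceeds a threshold (both rules combined)."""
    periodic = (index % access_period == 0)
    scene_change = (motion > motion_threshold)
    return periodic or scene_change

motions = [0.0, 0.1, 0.7, 0.1, 0.1, 0.1, 0.1, 0.1, 0.2]
keys = [i for i, m in enumerate(motions) if is_key_frame(i, m)]
print(keys)   # [0, 2, 8]
```

Frame 2 is selected because of large motion, frames 0 and 8 because of the periodic access rule, matching the combined criteria described above.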
  • the image encoding device 200 may acquire a key frame reference picture from another device or the like and store it in the key frame memory 209. In this case, the image encoding device 200 compares the moving image data 251 with a plurality of key frame reference pictures stored in the other device, acquires a key frame reference picture having a high degree of similarity, and stores the acquired key frame reference picture in the key frame memory 209.
  • the image encoding device 200 encodes information (information for specifying a key frame) indicating which image is a key frame, thereby generating a code string 250 including this information.
  • information for specifying a key frame specifies a conventional intra-predicted key frame and an inter-predicted key frame according to the present embodiment. That is, this information indicates whether each key frame is a key frame encoded using intra prediction or a key frame encoded using inter prediction.
  • the image encoding device 200 performs an encoding process for each image.
  • the image encoding device 200 determines whether the target image that is the image to be encoded included in the moving image data 251 is a key frame (S102). When the target image is a key frame (YES in S102), the image encoding device 200 encodes the target image by the key frame encoding process (S103).
  • FIG. 10 is a flowchart showing the operation of step S103.
  • the image encoding apparatus 200 determines whether there is a key frame reference picture similar to the target image by comparing the target image with the key frame reference pictures stored in the key frame memory 209 (S201).
  • here, being similar means that the difference between the images is less than a predetermined value.
  • when there is a key frame reference picture similar to the target image (YES in S202), the prediction unit 201 inter-codes the target image with reference to the similar key frame reference picture stored in the key frame memory 209 (S203).
  • otherwise (NO in S202), the prediction unit 201 performs intra prediction encoding on the target image (S204).
  • the image encoding device 200 decodes the key frame after encoding, and stores the obtained decoded image in the key frame memory 209 as a key frame reference picture (S205).
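The key-frame encoding flow of FIG. 10 (S201 to S205) can be condensed into the following sketch. It is an assumption-laden illustration: the similarity metric (mean absolute difference), the threshold, and the use of the original samples in place of the decoded image in S205 are all simplifications introduced here.

```python
# Minimal sketch of FIG. 10: look for a similar key frame reference picture
# (S201/S202), inter-code if found (S203) else intra-code (S204), then store
# the key frame as a reference picture for later key frames (S205).

def mean_abs_diff(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def encode_key_frame(target, key_frame_memory, threshold=8.0):
    """Return the chosen prediction mode and update the key frame memory."""
    similar = None
    for ref in key_frame_memory:
        # "similar" = difference below a predetermined value (S201/S202)
        if mean_abs_diff(target, ref) < threshold:
            similar = ref
            break
    mode = "inter" if similar is not None else "intra"  # S203 / S204
    # S205: the (decoded) key frame becomes a reference picture; the original
    # samples stand in for the decoded image in this sketch.
    key_frame_memory.append(list(target))
    return mode
```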
  • in the above description, the key frame reference pictures stored in the key frame memory 209 are compared with the target image, but the comparison may instead be made with an image that can be acquired and then stored in the key frame memory 209.
  • the image coding apparatus 200 may determine whether there is an image similar to the target image from reference images that can be acquired via a network.
  • the prediction unit 201 performs inter prediction encoding using the reference image.
  • in this case, the code string 250 includes, and transmits, information indicating that an image obtainable via a network or the like is used as the reference image, and information specifying the reference image to be used. In this way, the image decoding device can be notified of the information specifying the similar reference image while further improving the encoding efficiency.
  • the image decoding apparatus can acquire the reference image via the network using this information at the time of decoding, and can decode the image using the reference image.
  • in the above description, intra prediction encoding is performed when there is no similar key frame reference picture, but intra prediction encoding may also be performed at a predetermined frequency.
  • although all key frames are used as key frame reference pictures here, only some key frames may be used as key frame reference pictures.
  • when the target image is not a key frame but a normal frame (NO in S102), the image encoding device 200 performs a normal encoding process that encodes the target image with reference to neighboring pictures (S104). That is, the image coding apparatus 200 searches the frame memory 212 for a reference image with high compression efficiency without considering the decoding order, and performs inter prediction coding that refers to the obtained reference image. In other words, unlike the key frame encoding process described above, this encoding process does not take accessibility into account. In an environment where low delay is required, the method with the best coding efficiency need not always be selected; for example, the immediately preceding frame may simply be referred to.
  • when the input of the moving image data 251 has ended (YES in S105), the image encoding device 200 ends the processing. On the other hand, when the input of the moving image data 251 continues (NO in S105), the image encoding device 200 performs the processing from step S102 onward on the next image.
  • the code string 250 generated by the image encoding device 200 includes information indicating which image is a key frame or an access point (information for specifying a key frame). Further, the code string 250 includes information indicating whether each key frame uses intra prediction or inter prediction using a key frame reference picture, and information indicating the key frame reference picture used. The code string 250 may include information indicating whether the key frame reference picture is an image included in the code string 250 or an image acquired via a network. Further, when a key frame reference picture is acquired via a network, identification information for identifying the key frame reference picture, or information indicating its storage location, is included so that the image decoding apparatus can acquire the key frame reference picture. Note that at least part of this information may be notified to the image decoding apparatus by a signal different from the code string 250.
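The per-key-frame side information just listed can be pictured as the following record. This is purely illustrative: the embodiment specifies only what must be signalled, not a concrete syntax, and every field name here is hypothetical.

```python
# Hypothetical sketch of the signalling carried in code string 250 for each
# key frame; none of these field names come from the embodiment.
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyFrameInfo:
    is_key_frame: bool                     # identifies key frames / access points
    uses_inter: bool                       # intra-coded or inter-coded key frame
    ref_picture_id: Optional[str] = None   # which key frame reference picture was used
    ref_from_network: bool = False         # reference in the stream, or fetched via network
    ref_location: Optional[str] = None     # storage location when fetched via network
```

A decoder-side parser would read such a record from the header information (or from a separate signal) before deciding how to decode the frame.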
  • the image encoding apparatus 200 may perform the following processing on video in which a region that can be regarded as a background and a foreground region containing a moving object can be distinguished, such as video from a fixed camera.
  • FIG. 11 is a flowchart showing the flow of processing in this case.
  • the image coding apparatus 200 compares each area included in the target image with the background image, and determines whether each area is a background area or a foreground area (S301). Specifically, the image coding apparatus 200 compares a frame determined in advance to be the background image with the target image, determines an area whose similarity to the background image is at or above a predetermined value to be a background area, and determines an area whose similarity is below the predetermined value to be a foreground area. Alternatively, the image coding apparatus 200 calculates an average change amount between the target image and an image from a certain time earlier, determines an area with a small average change amount to be the background, and determines an area with a large change amount to be the foreground.
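The first region decision in S301 can be sketched as below. It is a hedged illustration: regions are flat sample lists, the similarity function (derived from mean absolute difference) and the threshold are assumptions, and the alternative average-change-amount method is not shown.

```python
# Sketch of S301/S302: an area whose similarity to the background image is at
# or above a threshold is classified as background, otherwise foreground.

def region_similarity(region, background_region):
    """Similarity as 1 / (1 + mean absolute difference); 1.0 means identical."""
    mad = sum(abs(a - b) for a, b in zip(region, background_region)) / len(region)
    return 1.0 / (1.0 + mad)

def classify_regions(regions, background_regions, threshold=0.5):
    """Label each co-located region pair 'background' or 'foreground'."""
    labels = []
    for region, bg in zip(regions, background_regions):
        sim = region_similarity(region, bg)
        labels.append("background" if sim >= threshold else "foreground")
    return labels
```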
  • when the target region, which is the processing target region included in the target image, is a background region (YES in S302), the image encoding device 200 encodes only information specifying the key frame reference picture that serves as the background image, for example the ID of the key frame reference picture, and does not encode the image information (such as a difference signal) of the background area (S303). For example, the image encoding device 200 uses a prediction mode that indicates only the background image. Thereby, the amount of information can be greatly reduced. Specifically, the image coding apparatus 200 designates the background image (key frame reference picture) as the reference image in skip mode.
  • that is, the image encoding device 200 encodes only the information specifying the background image, without encoding a difference signal between the target image and the background image. The image decoding apparatus therefore outputs (displays) the background image designated by this information as-is as the image of the background area.
  • the normal encoding process is predictive encoding as generally used in moving image coding, such as performing prediction using a nearby reference image and encoding the difference signal, or encoding the difference signal with respect to a background image; it is similar to the processing in step S104 shown in FIG. 9, for example.
  • the foreground area does not have to be in front of the background area.
  • here, the background refers to a region of a certain extent that does not change by more than a certain amount, and the foreground refers to a region that varies by more than a certain amount.
  • since the background region has a small amount of change, it indicates, for example, a region that has little influence on subjective image quality even if the difference signal used in moving image coding is not sent.
  • the foreground area is an area that differs from the background image used for comparison or from the image of a certain time earlier, and is an area whose image would differ greatly from the original image if the difference image used in moving image coding were not sent.
  • by not encoding the differences caused by subtle movements due to wind or the like, the coding efficiency can be improved with little reduction in subjective image quality.
  • by preparing a plurality of images that can be referred to via a network as key frame reference pictures (background images), the area that can be determined to be a background area increases. Thereby, video can be recorded for a long time with a smaller code amount. Video can also be transmitted even over a very narrow band network, such as in a wireless environment.
  • the background and foreground are determined for each area in one frame and the processing is switched.
  • alternatively, the background and foreground may be determined for each frame and the processing switched accordingly. In this case, if the entire frame can be determined to be background, the amount of code can be greatly reduced. If it is determined not to be background, a normal encoding method may be used. In this way, the processing of the image encoding device 200 can be simplified.
  • FIG. 12 is a flowchart showing an operation flow in this case.
  • first, the image encoding device 200 compares the target frame with the key frame reference pictures stored in the key frame memory 209 or obtainable via a network, and determines whether a background image similar to the target frame exists (S401). Specifically, the image coding apparatus 200 determines the similarity between the target image and each key frame reference picture, and determines that a key frame reference picture whose similarity is at or above a predetermined value is similar to the target image.
  • when there is a key frame reference picture similar to the target image (YES in S402), the image encoding device 200 does not encode the image information (difference signal or the like) of the target image, and encodes only information specifying the key frame reference picture similar to the target image, for example the ID of the key frame reference picture (S403).
  • otherwise (NO in S402), the image encoding device 200 encodes the target image using intra prediction (S404). The image coding apparatus 200 also sets the target image as a key frame reference picture (S405), so that this image can be used when encoding subsequent images. Thereby, the information amount of the code string 250 can be further reduced, improving the encoding efficiency.
  • when a new key frame reference picture is added, the image coding apparatus 200 may delete key frame reference pictures referred to so far from the key frame memory 209. Thereby, an increase in the capacity of the key frame memory 209 can be suppressed.
  • note that the image encoding device 200 may perform the normal encoding process instead of steps S404 and S405. This process is, for example, the same as step S104 shown in FIG. 9. Even in this case, when a key frame reference picture similar to the target image exists, encoding efficiency can be improved by encoding only the ID indicating that key frame reference picture.
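The frame-level flow of FIG. 12 (S401 to S405) can be sketched as follows. The ID scheme, the similarity metric, and the threshold are illustrative assumptions; the key frame memory is modelled here as a dictionary from picture IDs to images.

```python
# Sketch of FIG. 12: if a similar key frame reference picture (background image)
# exists, emit only its ID (S403); otherwise intra-code the frame and register
# it as a new key frame reference picture (S404/S405).

def mad(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def encode_frame_as_background(target, key_frame_memory, threshold=8.0):
    """Return what is written to the code string for this frame."""
    for ref_id, ref in key_frame_memory.items():
        if mad(target, ref) < threshold:              # S401/S402: similar background found
            return {"mode": "skip", "ref_id": ref_id}  # S403: ID only, no image data
    # S404/S405: intra-code and keep the frame as a future background reference.
    new_id = f"kf{len(key_frame_memory)}"
    key_frame_memory[new_id] = list(target)
    return {"mode": "intra", "ref_id": new_id}
```

Under this sketch, a static scene collapses to one intra-coded frame followed by ID-only frames, which is the code-amount reduction the text describes.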
  • the processing of FIG. 11 or FIG. 12 may be performed on all the images included in the moving image data 251, or on only some of the images. For example, these processes may be performed only on key frames.
  • variable length coding unit 204 may use arithmetic coding, or may use a coding table designed according to entropy.
  • the image encoding device 200 encodes a plurality of images.
  • the image encoding device 200 selects a randomly accessible key frame 104 from a plurality of images.
  • the image encoding device 200 encodes the key frame 104 using inter prediction that refers to a key frame reference picture different from the key frame 104.
  • in this way, the image encoding device 200 selects a plurality of key frames from the plurality of images, and encodes a target key frame included in the plurality of key frames with reference to another of the plurality of key frames. Thereby, since the number of images required to decode a key frame can be reduced, the time until video is displayed during random-access reproduction can be reduced.
  • the present embodiment relates to an image decoding method that realizes the situation illustrated in FIG. 1A and the like described in the first embodiment, and that correctly decodes the code string 250 illustrated in FIGS. 4, 5A, and 6A.
  • FIG. 13 is a block diagram of image decoding apparatus 300 according to the present embodiment.
  • the image decoding apparatus 300 shown in FIG. 13 generates a decoded image 350 by decoding the code string 250.
  • the code string 250 is, for example, the code string 250 generated by the image encoding device 200 described above.
  • the image decoding apparatus 300 includes a variable length decoding unit 301, an inverse quantization inverse transform unit 302, an addition unit 303, a prediction control unit 304, a selection unit 305, a prediction unit 306, and a frame memory 310.
  • the variable length decoding unit 301 acquires the quantized signal 351 and the prediction parameter 355 by variable-length decoding the code string 250, outputs the quantized signal 351 to the inverse quantization inverse transform unit 302, and outputs the prediction parameter 355 to the prediction control unit 304 and the prediction unit 306.
  • the inverse quantization inverse transform unit 302 generates a decoded differential signal 352 by performing inverse quantization and inverse transform on the quantized signal 351, and outputs the generated decoded differential signal 352 to the adder 303.
  • the prediction control unit 304 determines a reference image used for prediction processing based on the prediction parameter 355. This process will be described later.
  • the prediction unit 306 generates a prediction image 354 using the information necessary for generating the prediction image 354, such as the prediction mode included in the prediction parameter 355, together with the reference image 353 output from the selection unit 305, and outputs the generated prediction image 354 to the adding unit 303.
  • the adding unit 303 generates a decoded image 350 by adding the predicted image 354 and the decoded difference signal 352.
  • the decoded image 350 is displayed on, for example, a display unit.
  • the decoded image 350 is stored in the frame memory 310.
  • the frame memory 310 includes a key frame memory 307 that stores key frame reference pictures, a neighboring frame memory 308 that stores reference images temporally close to the decoding target image used for normal prediction, and an in-plane frame memory 309 that stores image signals already decoded within the decoding target image. As with the frame memory 212 of the first embodiment, there is no need to provide separate memory spaces for the three types of frame memories; for example, all reference images may be stored in the same memory. That is, the frame memory 310 may have any configuration that can output any reference image based on an instruction from the prediction control unit 304.
  • the key frame memory 307 may store not only the decoded image 350 but also image data 356 separately acquired from the outside as a reference image.
  • FIG. 14 is a flowchart showing a processing flow of the image decoding apparatus 300.
  • the image decoding apparatus 300 acquires encoded data of a target image, which is a decoding target frame, from the code string 250 (S501). Next, the image decoding apparatus 300 determines whether the target image that is a frame to be decoded is a key frame (S502). When the target image is a key frame (YES in S502), the image decoding device 300 performs a key frame decoding process (S503). The key frame decryption process will be described in detail later.
  • the image decoding apparatus 300 may determine whether the target image is a key frame (whether it is an access point) using the information for specifying a key frame included in the code string 250. Specifically, this information indicates whether or not the target image is a key frame, or indicates which of the plurality of images are key frames. Further, this information indicates whether each key frame is a key frame encoded using intra prediction or a key frame encoded using inter prediction. For example, this information is parameter information included in the header information of the code string 250. Alternatively, this information may be recorded in a field, in a system other than the code string 250, in which information shared with the image encoding apparatus is recorded.
  • in the former case, the image decoding apparatus 300 can make the determination at high speed without accessing other information.
  • when the image decoding apparatus 300 refers to predetermined field information as in the latter case, this information is not needed in the code string 250, so the encoding efficiency can be further improved.
  • when the target image is not a key frame (NO in S502), the image decoding apparatus 300 performs a normal decoding process (S504).
  • the normal decoding process will not be described in detail here, but the normal decoding process is a process similar to the decoding method of the inter prediction frame described in Non-Patent Document 1.
  • that is, the image decoding apparatus 300 acquires the reference image indicated by information included in the code string 250 from the frame memory 310, generates a prediction image 354 using the reference image based on the prediction parameter 355 included in the code string 250, and performs the decoding process using the prediction image 354.
  • when the data of the target image is not the end of the code string 250 (NO in S505), the image decoding apparatus 300 acquires the encoded data of the next image (S501). On the other hand, when the data of the target image is the end of the code string 250 (YES in S505), the image decoding device 300 ends the decoding process.
  • FIG. 15 is a flowchart of the key frame decoding process.
  • the key frame is an intra prediction frame or an inter prediction frame that refers to a key frame reference picture. Therefore, the image decoding apparatus 300 first determines whether the key frame that is the target image is an inter prediction frame that refers to the key frame reference picture (S511). Specifically, the header information of the target image included in the code string 250 includes information indicating a key frame reference picture or information indicating whether inter prediction is used. The image decoding apparatus 300 decodes the information and performs the above determination based on the obtained information.
  • when the key frame is an inter prediction frame that refers to a key frame reference picture (YES in S511), the image decoding apparatus 300 acquires the key frame reference picture (S512). Specifically, the image decoding apparatus 300 acquires the key frame reference picture via a network, for example. Alternatively, the image decoding device 300 acquires, as the key frame reference picture, a key frame that has already been decoded.
  • for example, the code string 250 includes information indicating whether the key frame reference picture is a picture acquired via a network or an already decoded picture, and information for specifying the key frame reference picture.
  • the image decoding apparatus 300 acquires a key frame reference picture using this information. Further, the image decoding device 300 may acquire a key frame reference picture in advance via a network.
  • the image decoding apparatus 300 decodes the target image by the inter prediction decoding process using the key frame reference picture (S513).
  • for example, when the code string 250 includes, for the background area, only information (for example, an ID) specifying a key frame reference picture (when no difference signal is included), the image decoding apparatus 300 decodes the information specifying the key frame reference picture and outputs the key frame reference picture specified by the information as-is as the decoded image of the background area.
  • for example, the code string 250 includes information indicating that the reference image is to be output as-is, or that no difference signal is included.
  • the image decoding apparatus 300 refers to the information and determines whether or not to output the key frame reference picture as it is.
  • similarly, the image decoding apparatus 300 decodes the information specifying the key frame reference picture, and outputs the key frame reference picture specified by the information as-is as the decoded image of the target image.
  • when the key frame is not an inter prediction frame that refers to a key frame reference picture (NO in S511), the image decoding apparatus 300 decodes the target image by intra prediction decoding (S514).
  • the image decoding apparatus 300 stores the decoded image of the key frame in the key frame memory 307 as a key frame reference picture (S515).
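The dispatch of FIG. 15 (S511 to S515) can be condensed into the following sketch. The header layout and the picture stores are hypothetical, and actual pixel decoding is replaced by placeholder strings; the point is the control flow: an inter-coded key frame first fetches its reference picture (locally or via a network), an ID-only key frame outputs the reference as-is, and every decoded key frame is stored for later use.

```python
# Illustrative sketch of the key-frame decoding flow; "inter-decoded" /
# "intra-decoded" stand in for actual reconstruction. Both ref stores map
# hypothetical picture IDs to images.

def decode_key_frame(header, local_refs, network_refs):
    if header.get("uses_inter"):                                   # S511
        ref_id = header["ref_picture_id"]
        store = network_refs if header.get("ref_from_network") else local_refs  # S512
        ref = store[ref_id]
        # S513; in the ID-only (no difference signal) case, output the
        # reference picture as-is.
        decoded = ref if header.get("id_only") else "inter-decoded"
    else:
        decoded = "intra-decoded"                                  # S514
    local_refs[f"decoded-{header['frame_id']}"] = decoded          # S515
    return decoded
```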
  • the image decoding device 300 may, for example, select a representative image, or images at a predetermined cycle, from the obtained decoded images as key frame reference pictures and accumulate these images in a storage on the network, so that the image encoding device can access them.
  • variable length decoding unit 301 may use arithmetic decoding or may use a decoding table designed according to entropy. That is, a method associated with the image encoding device paired with the image decoding device 300 may be used.
  • the key frame reference picture shared by the image encoding device 200 and the image decoding device 300 can be modified as follows.
  • the key frame reference picture may be an image encoded by an image encoding method different from the image included in the code string 250 (the image to be encoded or decoded). That is, the key frame reference picture may be an image encoded by an encoding method different from that of the processing target key frame.
  • for example, the key frame reference picture may be an image encoded by another encoding method such as MPEG-2, MPEG-4 AVC (H.264), JPEG, or JPEG 2000.
  • the image encoding device 200 or the image decoding device 300 acquires this key frame reference picture via a network. Thereby, the communication load between the image coding apparatus 200 or the image decoding apparatus 300 and the network can be reduced. Further, the capacity of the storage (key frame memory 209 or 307) included in the image encoding device 200 or the image decoding device 300 can be reduced. Furthermore, since image data distributed in the world including the Internet can be used as a key frame reference picture, the encoding efficiency can be further improved.
  • an image encoded by a different image encoding method is not limited to an image acquired via a network, and may be an image to be encoded (decoded).
  • that is, the image encoding device 200 or the image decoding device 300 may have a function of decoding images encoded by an encoding method different from that of the images included in the code string 250, acquire an image encoded by that different encoding method, decode it, and store the obtained decoded image as a key frame reference picture.
  • the key frame reference picture may be an image with a resolution different from that of the image included in the code string 250 (the image to be encoded or decoded). That is, the key frame reference picture may be an image having a resolution different from that of the key frame to be processed.
  • the resolution of the key frame reference picture may be 3840×2160. Since the image coding apparatus 200 and the image decoding apparatus 300 according to the present embodiment perform inter prediction on key frames using key frame reference pictures, a larger key frame reference picture resolution can further improve the prediction efficiency of the inter prediction. Thereby, since the data amount of the difference signal to be transmitted is reduced, the coding efficiency can be improved.
  • the key frame reference picture may be a still image.
  • thereby, for example, a large number of photographic images on a network such as the Internet can be used as key frame reference pictures.
  • since the resolutions of photographic images vary, supporting different resolutions increases the number of images that can be used as key frame reference pictures. Thereby, encoding efficiency can be improved.
  • note that the resolution of the key frame reference picture may be smaller than the resolution of the image to be encoded or decoded. Because there are many images on the network as described above, preparing images that can be used for prediction even when the resolution is not the same allows the difference image to be reduced and the encoding efficiency to be increased. Further, as illustrated in FIG. 7C, when referring to a plurality of images, the image encoding device and the image decoding device can generate prediction images more similar to the key frame by referring to images with different resolutions. Thereby, encoding efficiency can be further improved.
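One way to use a key frame reference picture whose resolution differs from the key frame, as discussed above, is to resample the reference to the target resolution before inter prediction. The sketch below uses nearest-neighbour resampling purely for illustration; the embodiment does not fix a resampling method, and the choice here is an assumption.

```python
# Illustrative sketch: scale a 2-D reference picture (list of sample rows)
# to the key frame's resolution using nearest-neighbour resampling.

def resample_row(row, target_width):
    """Nearest-neighbour resampling of one row of samples."""
    src_width = len(row)
    return [row[(x * src_width) // target_width] for x in range(target_width)]

def adapt_reference(ref_rows, target_width, target_height):
    """Scale a reference picture to (target_width x target_height)."""
    src_height = len(ref_rows)
    return [
        resample_row(ref_rows[(y * src_height) // target_height], target_width)
        for y in range(target_height)
    ]
```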
  • the image decoding apparatus 300 decodes a plurality of images.
  • the image decoding apparatus 300 determines a randomly accessible key frame 104 from a plurality of images.
  • the image decoding apparatus 300 decodes the key frame 104 using inter prediction that refers to a key frame reference picture different from the key frame 104.
  • in this way, the image decoding apparatus 300 identifies a plurality of key frames among the plurality of images, and decodes a target key frame included in the plurality of key frames with reference to another of the plurality of key frames. Thereby, since the number of images required to decode a key frame can be reduced, the time until video is displayed during random-access reproduction can be reduced.
  • FIG. 16 is a schematic diagram showing a system 500 according to the present embodiment.
  • the system 500 has a database 501 that can be accessed from the image decoding apparatus 300.
  • the database 501 stores a plurality of images g1t1 to gNtM that are key frame reference pictures including the background image described above. Further, as shown in FIG. 16, the database 501 may store a plurality of sets d0 to dL each including a plurality of images.
  • the image decoding device 300 is, for example, the image decoding device 300 described in the second embodiment.
  • the system 500 includes a control unit 502. Based on a trigger (signal) transmitted from the image decoding apparatus 300, or on a trigger signal obtained from the time (a time signal included in the system 500), the control unit 502 selects a specific image or image group from the plurality of key frame reference pictures held in the database 501 and transmits it to the image decoding device 300.
  • the transmitted image or image group is stored in a data buffer (key frame memory 307) that is a local storage of the image decoding apparatus 300.
  • the image stored in the data buffer is used for key frame inter prediction as a key frame reference picture.
  • FIG. 17 is a diagram illustrating an operation flow of the system 500 and the image decoding apparatus 300 according to the present embodiment.
  • the image decoding apparatus 300 transmits a trigger signal (control signal) for acquiring an image necessary or likely to be necessary for decoding to the system 500 (S601).
  • the system 500 receives the trigger signal transmitted from the image decoding device 300, selects an image that is necessary or likely to be necessary from the plurality of stored images (S602), and transmits the selected image to the image decoding device 300 (S603).
  • the image decoding device 300 receives the image transmitted from the system 500 and stores it in the local storage (S604). Accordingly, the image decoding apparatus 300 can acquire a necessary key frame reference picture, and thus can appropriately decode a code string.
  • the trigger signal includes, for example, position information indicating the current position of the image decoding device 300.
  • the system 500 holds, for each stored image or image group, position information indicating the location where the image included in the image or image group is taken.
  • the system 500 selects an image or a group of images captured at a position close to the current position indicated by the trigger signal, and transmits the selected image or group of images to the image decoding device 300.
  • thereby, for example, a mobile terminal such as a tablet equipped with the image decoding device 300 can display video from a surveillance camera installed near the current position, out of a large number of surveillance cameras installed nationwide.
  • the image decoding apparatus 300 acquires an image related to the position from the system 500, so that the memory size of the terminal can be reduced as compared to acquiring all the images.
  • the amount of transmission data can be reduced.
  • the trigger information may include position information indicating a position specified by the user.
  • for example, the user designates, from a remote location, an area where a suspicious person or the like has appeared.
  • the image decoding apparatus 300 can acquire an image related to the area in advance, so that video switching can be performed smoothly.
  • the trigger signal includes time information.
  • the time information indicates the current time.
  • the system 500 holds time information indicating the time or time zone when the image included in the image or image group was captured.
  • the system 500 selects an image or a group of images taken at a time or time zone close to the current time, and transmits the selected image or group of images to the image decoding apparatus 300.
  • the memory size of the terminal can be reduced and the amount of transmission data can be reduced as compared to acquiring all images.
  • the time information is not limited to the current time, and may indicate a time designated by the user.
  • the time information indicates the time in units of seconds, minutes, hours, days, months, or seasons (groups of months), or a combination thereof. When a large unit is used for the time information, the system 500 can store images in units of, for example, seasons, so the memory size of the system 500 can be reduced. Further, since the number of images transmitted to the terminal is reduced, the memory size of the terminal can be reduced and the amount of transmission data can be reduced. In addition, the data amount of the trigger signal can be reduced.
  • the trigger signal may include weather information indicating the weather.
  • the system 500 holds weather information indicating the weather when the image is captured for each stored image or image group. Thereby, the system 500 selects an image or a group of images captured in the same or similar weather as the weather indicated by the trigger information, and transmits the selected image or group of images to the image decoding device 300.
  • the memory size of the terminal can be reduced and the amount of transmission data can be reduced as compared to acquiring all images.
  • this technique may be applied only to cameras installed outdoors, such as surveillance cameras, and to cameras photographing outdoor scenes. As a result, the amount of data stored in the system 500 and the data amount of the trigger signal can be reduced.
  • the trigger signal may include two or more of the position information, time information, and weather information described above.
  • the system 500 selects an image that satisfies all of the plurality of conditions indicated by the trigger signal. Since the candidate images can thereby be narrowed down further, the amount of data transmitted to the image decoding apparatus 300 can be further reduced.
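A minimal sketch of this all-conditions filtering, with hypothetical field names and an exact-match rule standing in for whatever matching the system actually uses:

```python
# Illustrative sketch: pick stored images that satisfy EVERY condition
# present in the trigger signal (position, time, weather). Field names
# and the matching rule are assumptions; the specification only requires
# that all conditions indicated by the trigger are met.

def matches(image_meta, trigger):
    """True if the metadata satisfies every condition present in the
    trigger signal (conditions absent from the trigger are ignored)."""
    for key in ("position", "time_of_day", "weather"):
        if key in trigger and image_meta.get(key) != trigger[key]:
            return False
    return True

def select_images(stored, trigger):
    return [m["name"] for m in stored if matches(m, trigger)]

stored = [
    {"name": "a", "position": "gate", "time_of_day": "day",   "weather": "sunny"},
    {"name": "b", "position": "gate", "time_of_day": "night", "weather": "sunny"},
    {"name": "c", "position": "lot",  "time_of_day": "day",   "weather": "rain"},
]

# Two conditions narrow the candidates further than one, so less data
# is sent to the decoder.
print(select_images(stored, {"position": "gate", "weather": "sunny"}))    # ['a', 'b']
print(select_images(stored, {"position": "gate", "time_of_day": "day"}))  # ['a']
```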
  • the same image as the image transmitted to the image decoding device 300 is transmitted to the image encoding device 200.
  • the image encoding device 200 generates a code string using the same image as the image transmitted to the image decoding device 300, and transmits the generated code string to the image decoding device 300.
  • when the system 500 does not store an image matching the specification in the trigger signal, it may, for example, newly acquire such an image from the image encoding device 200 via a network or the like and store it. As a result, the system can adapt to changes in the environment and continuously improve the encoding efficiency.
  • time information generated in the system 500 may be used as the time information. This reduces the amount of data transmitted between the image decoding apparatus 300 and the system 500. In this case, the image encoding device 200 and the image decoding device 300 need to share which time information is used; for example, information indicating which time information is used is notified to the image encoding device 200 and the image decoding device 300.
  • the system 500 may include the image decoding device 300. In this case, transmission between the image decoding apparatus 300 and the system 500 does not go through the network.
  • the system 500 holds individual images in association with each time, each place, or each weather condition, but the present invention is not limited thereto.
  • even if the times are different, if the image contents are the same or similar, only one image may be stored in association with a plurality of times. Since this reduces the number of images stored in the system 500, the data amount of the images stored in the system 500 can be reduced.
  • when the image decoding device 300 cannot acquire the reference image (key frame reference picture) that the image encoding device 200 used for generating the code string, it notifies the system 500 or the user that the reference image cannot be acquired. In that case, the image decoding device 300 may generate and display a decoded image using, for prediction, another image (an image not shared with the image encoding device 200) acquired by a predetermined method. Although the result is not the decoded image expected by the image encoding device 200, using a similar image as the predicted image on the image decoding device 300 side yields a similar decoded image, so a situation in which no video is displayed at all can be prevented.
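The fallback just described can be sketched as follows. All function names here are hypothetical stand-ins for the decoder's actual acquisition, notification, and decoding steps:

```python
# Minimal sketch (all names hypothetical) of the fallback behavior:
# if the key frame reference picture cannot be acquired, notify the
# system/user and predict from a similar, locally acquired image
# instead, so that some video is still shown.

def decode_with_fallback(fetch_reference, fetch_similar, decode, notify):
    ref = fetch_reference()                    # key frame reference picture
    if ref is None:
        notify("reference image unavailable")  # notify the system 500 / user
        ref = fetch_similar()                  # image NOT shared with encoder
    # The result may differ from the encoder's expectation when the
    # fallback path is taken, but the screen is not left blank.
    return decode(ref)

messages = []
shown = decode_with_fallback(
    fetch_reference=lambda: None,              # acquisition fails
    fetch_similar=lambda: "similar-image",
    decode=lambda ref: f"decoded-with-{ref}",
    notify=messages.append,
)
print(shown)     # decoded-with-similar-image
print(messages)  # ['reference image unavailable']
```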
  • each processing unit included in the image encoding device and the image decoding device according to the above embodiment is typically realized as an LSI, which is an integrated circuit. These units may each be formed as an individual chip, or a single chip may include some or all of them.
  • circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the image encoding device and the image decoding device include a processing circuit and a storage device (storage) electrically connected to the processing circuit (accessible from the processing circuit).
  • the processing circuit includes at least one of dedicated hardware and a program execution unit. Further, when the processing circuit includes a program execution unit, the storage device stores a software program executed by the program execution unit. The processing circuit executes the image encoding method or the image decoding method according to the above embodiment using the storage device.
  • the present invention may be the software program or a non-transitory computer-readable recording medium on which the program is recorded.
  • the program can be distributed via a transmission medium such as the Internet.
  • the division of functional blocks in the block diagram is an example; a plurality of functional blocks may be realized as one functional block, one functional block may be divided into a plurality of blocks, or some functions may be transferred to another functional block.
  • the functions of a plurality of functional blocks having similar functions may be processed by single hardware or software in parallel or in a time-division manner.
  • the order in which the steps included in the prediction image generation method, the encoding method, or the decoding method are executed is an example given to specifically describe the present invention; an order other than the above may be used. Also, some of the above steps may be executed simultaneously (in parallel) with other steps.
  • each embodiment may be realized by centralized processing using a single device (system), or may be realized by distributed processing using a plurality of devices.
  • the computer that executes the program may be singular or plural. That is, centralized processing may be performed, or distributed processing may be performed.
  • as described above, the prediction image generation device, the encoding device, and the decoding device according to one or more aspects of the present invention have been described based on the embodiment; however, the present invention is not limited to this embodiment. Forms obtained by applying various modifications conceived by those skilled in the art to the present embodiment, and forms constructed by combining components in different embodiments, may also be included within the scope of one or more aspects of the present invention, as long as they do not deviate from the gist of the present invention.
  • the storage medium may be any medium that can record a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory.
  • the system has an image encoding / decoding device including an image encoding device using an image encoding method and an image decoding device using an image decoding method.
  • Other configurations in the system can be appropriately changed according to circumstances.
  • FIG. 18 is a diagram showing an overall configuration of a content supply system ex100 that realizes a content distribution service.
  • a communication service providing area is divided into cells of a desired size, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in the respective cells.
  • in the content supply system ex100, devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104, and the base stations ex106 to ex110.
  • each device may be directly connected to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations.
  • the devices may be directly connected to each other via short-range wireless or the like.
  • the camera ex113 is a device capable of shooting moving images, such as a digital video camera, and the camera ex116 is a device capable of shooting still images and moving images, such as a digital still camera.
  • the mobile phone ex114 may be any of a GSM (registered trademark) (Global System for Mobile Communications) phone, a CDMA (Code Division Multiple Access) phone, a W-CDMA (Wideband-Code Division Multiple Access) phone, an LTE (Long Term Evolution) phone, an HSPA (High Speed Packet Access) mobile phone, a PHS (Personal Handyphone System), or the like.
  • the camera ex113 and the like are connected to the streaming server ex103 through the base station ex109 and the telephone network ex104, thereby enabling live distribution and the like.
  • in live distribution, content shot by a user using the camera ex113 (for example, live music video) is encoded as described in each of the above embodiments (that is, the camera functions as an image encoding device according to one aspect of the present invention) and transmitted to the streaming server ex103.
  • the streaming server ex103 distributes the transmitted content data as a stream to clients that have made requests. Examples of the client include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the game machine ex115, each of which can decode the encoded data.
  • Each device that receives the distributed data decodes the received data and reproduces it (that is, functions as an image decoding device according to one embodiment of the present invention).
  • the captured data may be encoded by the camera ex113 or by the streaming server ex103 that performs the data transmission processing, or the encoding may be shared between them.
  • similarly, the decoding of the distributed data may be performed by the client or by the streaming server ex103, or may be shared between them.
  • still images and / or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
  • the encoding process in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or may be performed in a shared manner.
  • these encoding/decoding processes are generally performed in the LSI ex500 included in the computer ex111 or in each device.
  • the LSI ex500 may be configured as a single chip or a plurality of chips.
  • moving image encoding/decoding software may be incorporated into some recording medium (a CD-ROM, flexible disk, hard disk, etc.) readable by the computer ex111 or the like, and the encoding/decoding processing may be performed using that software.
  • moving image data captured by a camera of the mobile phone ex114 may also be transmitted; this moving image data is encoded by the LSI ex500 included in the mobile phone ex114.
  • the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, and distribute data in a distributed manner.
  • the encoded data can be received and reproduced by the client.
  • the information transmitted by the user can be received, decoded, and reproduced by the client in real time, so that even a user who has no special rights or facilities can realize personal broadcasting.
  • at least one of the moving image encoding device (image encoding device) and the moving image decoding device (image decoding device) according to each of the above embodiments can also be incorporated into the digital broadcasting system ex200.
  • at the broadcast station ex201, multiplexed data obtained by multiplexing music data and the like with video data is transmitted via radio waves to a communication or broadcasting satellite ex202.
  • This video data is data encoded by the moving image encoding method described in each of the above embodiments (that is, data encoded by the image encoding apparatus according to one aspect of the present invention).
  • the broadcasting satellite ex202 transmits a radio wave for broadcasting, and this radio wave is received by a home antenna ex204 capable of receiving satellite broadcasting.
  • the received multiplexed data is decoded and reproduced by an apparatus such as the television (receiver) ex300 or the set top box (STB) ex217 (that is, functions as an image decoding apparatus according to one embodiment of the present invention).
  • the moving picture decoding apparatus or moving picture encoding apparatus described in each of the above embodiments can also be mounted in a reader/recorder ex218 that reads and decodes multiplexed data recorded on a recording medium ex215 such as a DVD or a BD, or that encodes a video signal onto the recording medium ex215 and, in some cases, multiplexes it with a music signal and writes it. In this case, the reproduced video signal is displayed on the monitor ex219, and the video signal can be reproduced in another device or system using the recording medium ex215 on which the multiplexed data is recorded.
  • a moving picture decoding apparatus may also be mounted in a set-top box ex217 connected to a cable ex203 for cable television or to the antenna ex204 for satellite/terrestrial broadcasting, and the decoded video may be displayed on the monitor ex219 of the television.
  • the moving picture decoding apparatus may be incorporated in the television instead of the set top box.
  • FIG. 20 is a diagram illustrating a television (receiver) ex300 that uses the video decoding method and the video encoding method described in each of the above embodiments.
  • the television ex300 obtains or outputs, via the antenna ex204 or the cable ex203 that receives broadcasts, multiplexed data in which audio data is multiplexed with video data; it includes a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates multiplexed data to be transmitted to the outside, and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes video data and audio data encoded by the signal processing unit ex306.
  • the television ex300 also includes a signal processing unit ex306 having an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data, respectively, or encode the respective pieces of information (the signal processing unit ex306 functions as the image encoding device or the image decoding device according to one aspect of the present invention); and an output unit ex309 having a speaker ex307 that outputs the decoded audio signal and a display unit ex308, such as a display, that displays the decoded video signal. Furthermore, the television ex300 includes an interface unit ex317 having an operation input unit ex312 that receives user operations, a control unit ex310 that performs overall control of each unit, and a power supply circuit unit ex311 that supplies power to each unit.
  • in addition to the operation input unit ex312, the interface unit ex317 may include a bridge ex313 connected to an external device such as the reader/recorder ex218, a slot for attaching a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like.
  • the recording medium ex216 can electrically record information by means of a nonvolatile/volatile semiconductor memory element it contains.
  • the parts of the television ex300 are connected to one another via a synchronous bus.
  • the television ex300 receives a user operation from the remote controller ex220 or the like, and demultiplexes the multiplexed data demodulated by the modulation / demodulation unit ex302 by the multiplexing / demultiplexing unit ex303 based on the control of the control unit ex310 having a CPU or the like. Furthermore, in the television ex300, the separated audio data is decoded by the audio signal processing unit ex304, and the separated video data is decoded by the video signal processing unit ex305 using the decoding method described in each of the above embodiments.
  • the decoded audio signal and video signal are output from the output unit ex309 to the outside. At the time of output, these signals may be temporarily stored in the buffers ex318, ex319, etc. so that the audio signal and the video signal are reproduced in synchronization. Also, the television ex300 may read multiplexed data from recording media ex215 and ex216 such as a magnetic / optical disk and an SD card, not from broadcasting. Next, a configuration in which the television ex300 encodes an audio signal or a video signal and transmits the signal to the outside or to a recording medium will be described.
  • the television ex300 receives a user operation from the remote controller ex220 or the like and, based on the control of the control unit ex310, encodes an audio signal with the audio signal processing unit ex304 and encodes a video signal with the video signal processing unit ex305 using the encoding method described in each of the above embodiments.
  • the encoded audio signal and video signal are multiplexed by the multiplexing / demultiplexing unit ex303 and output to the outside. When multiplexing, these signals may be temporarily stored in the buffers ex320, ex321, etc. so that the audio signal and the video signal are synchronized.
  • a plurality of buffers ex318, ex319, ex320, and ex321 may be provided as illustrated, or one or more buffers may be shared. Further, in addition to the illustrated example, data may be stored in a buffer as a cushion that prevents system overflow and underflow, for example between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303.
  • the television ex300 may also have a configuration that receives AV input from a microphone or a camera and performs encoding processing on the data acquired from them.
  • although the television ex300 has been described here as a configuration capable of the above-described encoding processing, multiplexing, and external output, it may instead be a configuration that cannot perform these processes and is capable only of the above-described reception, decoding processing, and external output.
  • the decoding process or the encoding process may be performed by either the television ex300 or the reader/recorder ex218, or the television ex300 and the reader/recorder ex218 may share the processing with each other.
  • FIG. 21 shows a configuration of the information reproducing / recording unit ex400 when data is read from or written to an optical disk.
  • the information reproducing / recording unit ex400 includes elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 described below.
  • the optical head ex401 writes information by irradiating a laser spot onto the recording surface of the recording medium ex215, which is an optical disk, and reads information by detecting light reflected from the recording surface of the recording medium ex215.
  • the modulation recording unit ex402 electrically drives a semiconductor laser built in the optical head ex401 and modulates the laser beam according to the recording data.
  • the reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, separates and demodulates the signal component recorded on the recording medium ex215, and reproduces the necessary information.
  • the buffer ex404 temporarily holds information to be recorded on the recording medium ex215 and information reproduced from the recording medium ex215.
  • the disk motor ex405 rotates the recording medium ex215.
  • the servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs a laser spot tracking process.
  • the system control unit ex407 controls the entire information reproduction / recording unit ex400.
  • the system control unit ex407 uses various types of information held in the buffer ex404 and generates and adds new information as necessary; reading and writing are realized by recording and reproducing information through the optical head ex401 while operating the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 in a coordinated manner.
  • the system control unit ex407 includes, for example, a microprocessor, and executes these processes by executing a read / write program.
  • although the optical head ex401 has been described here as irradiating a laser spot, it may be configured to perform higher-density recording using near-field light.
  • FIG. 22 shows a schematic diagram of a recording medium ex215 that is an optical disk.
  • guide grooves (grooves) are formed in a spiral on the recording surface of the recording medium ex215, and address information indicating the absolute position on the disc is recorded in advance on an information track ex230 through changes in the shape of the grooves.
  • This address information includes information for specifying the position of the recording block ex231 that is a unit for recording data, and the recording block is specified by reproducing the information track ex230 and reading the address information in a recording or reproducing apparatus.
  • the recording medium ex215 includes a data recording area ex233, an inner peripheral area ex232, and an outer peripheral area ex234.
  • the area used for recording user data is the data recording area ex233; the inner circumference area ex232 and the outer circumference area ex234, arranged on the inner and outer circumference of the data recording area ex233, are used for specific purposes other than user data recording.
  • the information reproducing / recording unit ex400 reads / writes encoded audio data, video data, or multiplexed data obtained by multiplexing these data with respect to the data recording area ex233 of the recording medium ex215.
  • in the above description, an optical disk such as a single-layer DVD or BD has been taken as an example, but the present invention is not limited to these; an optical disk having a multilayer structure and capable of recording on layers other than the surface may be used. An optical disk having a multi-dimensional recording/reproducing structure may also be used, such as one that records information at the same location on the disc using light of different wavelengths, or that records different layers of information from various angles.
  • the car ex210 having the antenna ex205 can receive data from the satellite ex202 and the like, and the moving image can be reproduced on a display device such as the car navigation ex211 that the car ex210 has.
  • the configuration of the car navigation ex211 may be, for example, the configuration shown in FIG. 20 with a GPS receiving unit added, and the same may be considered for the computer ex111, the mobile phone ex114, and the like.
  • FIG. 23A is a diagram showing the mobile phone ex114 using the video decoding method and the video encoding method described in the above embodiment.
  • the mobile phone ex114 includes an antenna ex350 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex365 capable of capturing video and still images, and a display unit ex358 such as a liquid crystal display that displays decoded data such as video captured by the camera unit ex365 and video received by the antenna ex350.
  • the mobile phone ex114 further includes a main body having an operation key unit ex366, an audio output unit ex357 such as a speaker for outputting audio, an audio input unit ex356 such as a microphone for inputting audio, a memory unit ex367 for storing encoded or decoded data such as captured video, still images, recorded audio, received video, still images, and mail, and a slot unit ex364 serving as an interface with a recording medium that similarly stores data.
  • in the mobile phone ex114, a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, an LCD (Liquid Crystal Display) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367 are connected via a bus ex370 to a main control unit ex360 that comprehensively controls each unit of the main body including the display unit ex358 and the operation key unit ex366.
  • the power supply circuit unit ex361 starts up the mobile phone ex114 in an operable state by supplying power from the battery pack to each unit.
  • the cellular phone ex114 converts the audio signal collected by the audio input unit ex356 in the voice call mode into a digital audio signal by the audio signal processing unit ex354 based on the control of the main control unit ex360 having a CPU, a ROM, a RAM, and the like. Then, this is subjected to spectrum spread processing by the modulation / demodulation unit ex352, digital-analog conversion processing and frequency conversion processing are performed by the transmission / reception unit ex351, and then transmitted via the antenna ex350.
  • in the voice call mode, the mobile phone ex114 also amplifies the data received via the antenna ex350, performs frequency conversion processing and analog-digital conversion processing, performs spectrum despreading processing with the modulation/demodulation unit ex352, converts the result into an analog audio signal with the audio signal processing unit ex354, and then outputs it from the audio output unit ex357.
  • the text data of the e-mail input by operating the operation key unit ex366 of the main unit is sent to the main control unit ex360 via the operation input control unit ex362.
  • the main control unit ex360 performs spread spectrum processing on the text data in the modulation / demodulation unit ex352, performs digital analog conversion processing and frequency conversion processing in the transmission / reception unit ex351, and then transmits the text data to the base station ex110 via the antenna ex350.
  • when an e-mail is received, substantially the reverse processing is performed on the received data, and the result is output to the display unit ex358.
  • the video signal processing unit ex355 compresses and encodes the video signal supplied from the camera unit ex365 using the moving image encoding method described in each of the above embodiments (that is, it functions as an image encoding device according to one aspect of the present invention), and sends the encoded video data to the multiplexing/demultiplexing unit ex353.
  • at the same time, the audio signal processing unit ex354 encodes the audio signal picked up by the audio input unit ex356 while the camera unit ex365 is capturing video, still images, or the like, and sends the encoded audio data to the multiplexing/demultiplexing unit ex353.
  • the multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354 by a predetermined method; the resulting multiplexed data is subjected to spread spectrum processing by the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 and to digital-analog conversion and frequency conversion processing by the transmission/reception unit ex351, and is then transmitted via the antenna ex350.
  • to decode multiplexed data received via the antenna ex350, the multiplexing/demultiplexing unit ex353 separates the multiplexed data into a video data bit stream and an audio data bit stream, supplies the encoded video data to the video signal processing unit ex355 via the synchronization bus ex370, and supplies the encoded audio data to the audio signal processing unit ex354.
  • the video signal processing unit ex355 decodes the video signal using a moving image decoding method corresponding to the moving image encoding method described in each of the above embodiments (that is, it functions as an image decoding device according to one aspect of the present invention), and video and still images included in, for example, a moving image file linked to a home page are displayed on the display unit ex358 via the LCD control unit ex359.
  • the audio signal processing unit ex354 decodes the audio signal, and the audio is output from the audio output unit ex357.
  • a terminal such as the mobile phone ex114 may be implemented as a transmission/reception terminal having both an encoder and a decoder, as a transmission terminal having only an encoder, or as a receiving terminal having only a decoder.
  • it was described above that multiplexed data in which music data or the like is multiplexed with video data is received and transmitted; however, the data may also have character data or the like related to the video multiplexed in it in addition to the audio data, or it may be the video data itself rather than multiplexed data.
  • as described above, the moving picture encoding method or the moving picture decoding method shown in each of the above embodiments can be used in any of the devices and systems described above, and by doing so, the effects described in each of the above embodiments can be obtained.
  • multiplexed data obtained by multiplexing audio data or the like with video data is configured to include identification information indicating which standard the video data conforms to.
  • FIG. 24 is a diagram showing a structure of multiplexed data.
  • multiplexed data is obtained by multiplexing one or more of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream.
  • the video stream indicates the main video and sub-video of the movie, the audio stream indicates the main audio part of the movie and the sub-audio to be mixed with the main audio, and the presentation graphics stream indicates the subtitles of the movie.
  • the main video indicates a normal video displayed on the screen
  • the sub-video is a video displayed on a small screen in the main video.
  • the interactive graphics stream indicates an interactive screen created by arranging GUI components on the screen.
  • the video stream is encoded by the moving image encoding method or apparatus shown in each of the above embodiments, or by a moving image encoding method or apparatus conforming to a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1.
  • the audio stream is encoded by a method such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.
  • each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the movie images, 0x1100 to 0x111F to the audio streams, 0x1200 to 0x121F to the presentation graphics streams, 0x1400 to 0x141F to the interactive graphics streams, 0x1B00 to 0x1B1F to video streams used for sub-pictures, and 0x1A00 to 0x1A1F to audio streams used for sub-audio to be mixed with the main audio.
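The PID ranges quoted above can be summarized as a small lookup, which also shows how a demultiplexer could classify an incoming packet by its PID (the helper itself is only illustrative):

```python
# PID ranges as listed in the text; the lookup helper is an
# illustrative sketch, not part of the specification.
PID_RANGES = [
    ((0x1011, 0x1011), "video (movie)"),
    ((0x1100, 0x111F), "audio"),
    ((0x1200, 0x121F), "presentation graphics"),
    ((0x1400, 0x141F), "interactive graphics"),
    ((0x1B00, 0x1B1F), "video (sub-picture)"),
    ((0x1A00, 0x1A1F), "audio (sub-audio)"),
]

def stream_type(pid):
    """Classify a PID into one of the stream types named above."""
    for (lo, hi), name in PID_RANGES:
        if lo <= pid <= hi:
            return name
    return "other (e.g. PAT/PMT/PCR)"

print(stream_type(0x1011))  # video (movie)
print(stream_type(0x1105))  # audio
```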
  • FIG. 25 is a diagram schematically showing how multiplexed data is multiplexed.
  • a video stream ex235 composed of a plurality of video frames and an audio stream ex238 composed of a plurality of audio frames are converted into PES packet sequences ex236 and ex239, respectively, and converted into TS packets ex237 and ex240.
  • the data of the presentation graphics stream ex241 and interactive graphics ex244 are converted into PES packet sequences ex242 and ex245, respectively, and further converted into TS packets ex243 and ex246.
  • the multiplexed data ex247 is configured by multiplexing these TS packets into one stream.
  • FIG. 26 shows in more detail how the video stream is stored in the PES packet sequence.
  • the first row in FIG. 26 shows the video frame sequence of the video stream, and the second row shows the PES packet sequence.
  • the video stream is divided into I pictures, B pictures, and P pictures, which are Video Presentation Units, and these are stored in the payloads of PES packets.
  • Each PES packet has a PES header, and a PTS (Presentation Time-Stamp) that is a display time of a picture and a DTS (Decoding Time-Stamp) that is a decoding time of a picture are stored in the PES header.
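For concreteness, the 33-bit PTS and DTS values carried in the PES header can be decoded as sketched below. The 3/15/15-bit split with interleaved marker bits follows the MPEG-2 Systems PES syntax; this is an illustrative sketch, not text from the patent.

```python
def parse_pts(b: bytes) -> int:
    """Decode a 33-bit PTS or DTS from its 5-byte PES-header field.

    The 33 bits are split 3/15/15 across the five bytes, with marker
    bits interleaved (MPEG-2 Systems PES syntax).
    """
    return (((b[0] >> 1) & 0x07) << 30
            | b[1] << 22
            | ((b[2] >> 1) & 0x7F) << 15
            | b[3] << 7
            | ((b[4] >> 1) & 0x7F))

def pts_to_seconds(pts: int) -> float:
    # PTS/DTS values tick at 90 kHz.
    return pts / 90000.0
```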
  • FIG. 27 shows the format of TS packets that are finally written in the multiplexed data.
  • the TS packet is a 188-byte fixed-length packet composed of a 4-byte TS header having information such as a PID for identifying a stream and a 184-byte TS payload for storing data.
  • the PES packet is divided and stored in the TS payload.
  • a 4-byte TP_Extra_Header is added to each TS packet to form a 192-byte source packet, which is written into the multiplexed data.
  • in the TP_Extra_Header, information such as the ATS (Arrival_Time_Stamp) is described.
  • ATS indicates the transfer start time of the TS packet to the PID filter of the decoder.
  • source packets are arranged as shown in the lower part of FIG. 27, and the number incremented from the head of the multiplexed data is called SPN (source packet number).
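The 192-byte source packet layout described above (a 4-byte TP_Extra_Header carrying the ATS, followed by the 188-byte TS packet whose header carries the PID) can be parsed as in this sketch. The 30-bit width assumed for the ATS is common BD-format practice but is not stated in the text, so treat it as an assumption.

```python
TS_PACKET_SIZE = 188
SOURCE_PACKET_SIZE = 192  # 4-byte TP_Extra_Header + 188-byte TS packet

def parse_source_packet(pkt: bytes):
    """Return (ATS, PID) from a 192-byte source packet."""
    assert len(pkt) == SOURCE_PACKET_SIZE
    # TP_Extra_Header: ATS assumed to occupy the low 30 bits of the 4 bytes.
    ats = int.from_bytes(pkt[0:4], "big") & 0x3FFFFFFF
    ts = pkt[4:]
    assert ts[0] == 0x47, "TS sync byte missing"
    pid = ((ts[1] & 0x1F) << 8) | ts[2]  # PID: 13 bits of the TS header
    return ats, pid
```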
  • in addition to the streams such as video, audio, and subtitles, the TS packets included in the multiplexed data also include a PAT (Program Association Table), a PMT (Program Map Table), a PCR (Program Clock Reference), and the like.
  • PAT indicates what the PID of the PMT used in the multiplexed data is, and the PID of the PAT itself is registered as 0.
  • the PMT has the PID of each stream such as video / audio / subtitles included in the multiplexed data and the attribute information of the stream corresponding to each PID, and has various descriptors related to the multiplexed data.
  • the descriptor includes copy control information for instructing permission / non-permission of copying of multiplexed data.
  • the PCR carries information on the STC (System Time Clock) time corresponding to the ATS at which the PCR packet is transferred to the decoder, allowing the decoder to synchronize its clock.
  • FIG. 28 is a diagram for explaining the data structure of the PMT in detail.
  • a PMT header describing the length of the data included in the PMT is arranged at the head of the PMT.
  • after the header, a plurality of descriptors related to the multiplexed data are arranged; the copy control information and the like are described as such descriptors.
  • after the descriptors, a plurality of pieces of stream information regarding the streams included in the multiplexed data are arranged.
  • each piece of stream information consists of a stream descriptor describing the stream type (used to identify the compression codec of the stream), the stream PID, and stream attribute information (frame rate, aspect ratio, etc.).
  • when the multiplexed data is recorded on a recording medium, it is recorded together with a multiplexed data information file.
  • the multiplexed data information file is management information of multiplexed data, has a one-to-one correspondence with the multiplexed data, and includes multiplexed data information, stream attribute information, and an entry map.
  • the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time as shown in FIG.
  • the system rate indicates a maximum transfer rate of multiplexed data to a PID filter of a system target decoder described later.
  • the ATS interval included in the multiplexed data is set to be equal to or less than the system rate.
  • the playback start time is the PTS of the first video frame of the multiplexed data
  • the playback end time is set by adding the playback interval for one frame to the PTS of the video frame at the end of the multiplexed data.
  • the attribute information for each stream included in the multiplexed data is registered for each PID.
  • the attribute information has different information for each video stream, audio stream, presentation graphics stream, and interactive graphics stream.
  • the video stream attribute information includes information such as the compression codec used to compress the video stream, the resolution of the individual picture data constituting the video stream, the aspect ratio, and the frame rate.
  • the audio stream attribute information includes information such as the compression codec used to compress the audio stream, the number of channels included in the audio stream, the supported language, and the sampling frequency. These pieces of information are used to initialize the decoder before playback by the player.
  • in the present embodiment, the stream type included in the PMT is used among the multiplexed data.
  • when the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used.
  • specifically, unique information indicating that the video data is generated by the moving picture encoding method or apparatus shown in each of the above embodiments is set in the stream type or the video stream attribute information included in the PMT, which makes it possible to distinguish such video data from video data conforming to other standards.
  • FIG. 31 shows the steps of the moving picture decoding method according to the present embodiment.
  • in step exS100, the stream type included in the PMT, or the video stream attribute information included in the multiplexed data information, is acquired from the multiplexed data.
  • in step exS101, it is determined whether or not the stream type or the video stream attribute information indicates multiplexed data generated by the moving picture encoding method or apparatus described in the above embodiments.
  • when it does, in step exS102, decoding is performed by the moving picture decoding method shown in the above embodiments.
  • when the stream type or the video stream attribute information indicates conformance to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, in step exS103, decoding is performed by a moving image decoding method compliant with that conventional standard.
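The selection in steps exS100 to exS103 can be sketched as a dispatch on the PMT stream type. The conventional stream_type values below are the well-known MPEG-2 TS registrations; NEW_METHOD_STREAM_TYPE is a purely hypothetical placeholder for the unique information the embodiments would set.

```python
# Conventional stream_type values from the MPEG-2 TS registrations.
CONVENTIONAL_VIDEO_TYPES = {0x02: "MPEG-2", 0x1B: "MPEG4-AVC", 0xEA: "VC-1"}
NEW_METHOD_STREAM_TYPE = 0xA0  # hypothetical identifier for the embodiments

def select_decoder(stream_type: int) -> str:
    """Steps exS100-exS103: choose a decoding method from the stream type."""
    if stream_type == NEW_METHOD_STREAM_TYPE:      # exS101: yes -> exS102
        return "embodiment decoder"
    if stream_type in CONVENTIONAL_VIDEO_TYPES:    # exS101: no -> exS103
        return f"conventional {CONVENTIONAL_VIDEO_TYPES[stream_type]} decoder"
    raise ValueError(f"unknown stream_type 0x{stream_type:02X}")
```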
  • FIG. 32 shows a configuration of an LSI ex500 that is made into one chip.
  • the LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and each element is connected via a bus ex510.
  • when the power supply is on, the power supply circuit unit ex505 supplies power to each unit, putting the LSI into an operable state.
  • under the control of the control unit ex501, which includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, the LSI ex500 receives an AV signal from the microphone ex117, the camera ex113, and the like via the AV I/O ex509.
  • the input AV signal is temporarily stored in an external memory ex511 such as SDRAM.
  • the accumulated data is divided into portions as appropriate according to the processing amount and processing speed and sent to the signal processing unit ex507, where the audio signal and/or the video signal is encoded.
  • the encoding process of the video signal is the encoding process described in the above embodiments.
  • the signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data as the case requires, and outputs the result from the stream I/O ex506 to the outside.
  • the output multiplexed data is transmitted to the base station ex107 or written to the recording medium ex215; when multiplexing, the data should be temporarily stored in the buffer ex508 so that they are synchronized.
  • although the memory ex511 has been described as a configuration external to the LSI ex500, it may instead be included in the LSI ex500.
  • the number of buffers ex508 is not limited to one, and a plurality of buffers may be provided.
  • the LSI ex500 may be made into one chip or a plurality of chips.
  • control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, but the configuration of the control unit ex501 is not limited to this configuration.
  • the signal processing unit ex507 may further include a CPU.
  • for example, the CPU ex502 may be configured to include the signal processing unit ex507, or a part of it such as an audio signal processing unit.
  • in that case, the control unit ex501 includes the CPU ex502 having the signal processing unit ex507 or a part thereof.
  • although referred to here as LSI, it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
  • the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
  • An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
  • Such a programmable logic device can typically execute the moving image encoding method or the moving image decoding method described in each of the above embodiments by loading or reading, from a memory or the like, a program constituting software or firmware.
  • FIG. 33 shows a configuration ex800 in the present embodiment.
  • when the video data is generated by the moving image encoding method or apparatus described in the above embodiments, the drive frequency switching unit ex803 sets the drive frequency high.
  • it then instructs the decoding processing unit ex801, which executes the moving picture decoding method described in each of the above embodiments, to decode the video data.
  • on the other hand, when the video data conforms to a conventional standard, the drive frequency is set lower than when the video data is generated by the moving picture encoding method or apparatus shown in the above embodiments, and the decoding processing unit ex802 compliant with the conventional standard is instructed to decode the video data.
  • the drive frequency switching unit ex803 includes the CPU ex502 and the drive frequency control unit ex512 in FIG.
  • the decoding processing unit ex801 that executes the moving picture decoding method shown in each of the above embodiments and the decoding processing unit ex802 that complies with the conventional standard correspond to the signal processing unit ex507 in FIG.
  • the CPU ex502 identifies which standard the video data conforms to. Then, based on the signal from the CPU ex502, the drive frequency control unit ex512 sets the drive frequency. Further, based on the signal from the CPU ex502, the signal processing unit ex507 decodes the video data.
  • for identifying the video data, for example, the identification information described in the fifth embodiment may be used.
  • the identification information is not limited to that described in the fifth embodiment; any information that can identify which standard the video data conforms to may be used. For example, when it is possible to identify which standard the video data conforms to based on an external signal indicating whether the video data is used for a television or for a disk, the identification may be performed based on such an external signal. The selection of the drive frequency in the CPU ex502 may also be performed based on, for example, a look-up table in which video data standards and drive frequencies are associated with each other, as shown in FIG. 35. By storing the look-up table in the buffer ex508 or in the internal memory of the LSI, the CPU ex502 can select the drive frequency by referring to it.
  • FIG. 34 shows steps for executing the method of the present embodiment.
  • the signal processing unit ex507 acquires identification information from the multiplexed data.
  • the CPU ex502 identifies whether the video data is generated by the encoding method or apparatus described in each of the above embodiments based on the identification information.
  • when the identification information indicates such data, in step exS202, the CPU ex502 sends a signal for setting the drive frequency high to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a high drive frequency.
  • on the other hand, when the identification information indicates video data conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, in step exS203, the CPU ex502 sends a signal for setting the drive frequency low to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a drive frequency lower than when the video data is generated by the encoding method or apparatus described in the above embodiments.
  • the power saving effect can be further enhanced by changing the voltage applied to the LSI ex500 or the device including the LSI ex500 in conjunction with the switching of the driving frequency. For example, when the drive frequency is set low, it is conceivable that the voltage applied to the LSI ex500 or the device including the LSI ex500 is set low as compared with the case where the drive frequency is set high.
  • the method of setting the drive frequency is not limited to the above; for example, the drive frequency may be set high when the processing amount for decoding is large and low when the processing amount for decoding is small.
  • for example, when the amount of processing for decoding video data conforming to the MPEG4-AVC standard is larger than the amount of processing for decoding video data generated by the moving picture encoding method or apparatus described in the above embodiments, the setting of the drive frequency may be reversed from the case described above.
  • furthermore, the method for setting the drive frequency is not limited to lowering the drive frequency.
  • for example, when the identification information indicates video data generated by the moving image encoding method or apparatus described in the above embodiments, the voltage applied to the LSI ex500 or to the apparatus including the LSI ex500 may be set high, and when it indicates video data conforming to a conventional standard, the voltage may be set low.
  • as another example, when the identification information indicates video data conforming to a conventional standard, the driving of the CPU ex502 may be temporarily stopped, since there is a margin in processing.
  • even when the identification information indicates video data generated by the moving image encoding method or apparatus described in each of the above embodiments, the CPU ex502 may be temporarily stopped if there is a margin in processing; in this case, the stop time may be set shorter than when the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
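The look-up-table approach of FIG. 35, combined with the steps of FIG. 34, might be sketched as below. The frequency values are illustrative only, and "embodiment" stands in for identification information indicating the new method.

```python
# Illustrative look-up table in the spirit of FIG. 35:
# identification information -> drive frequency (MHz).
DRIVE_FREQ_TABLE = {
    "embodiment": 500,   # set high for data from the new method (exS202)
    "MPEG-2": 350,       # set low for conventional standards (exS203)
    "MPEG4-AVC": 350,
    "VC-1": 350,
}

def set_drive_frequency(identification: str) -> int:
    """Choose the drive frequency the CPU ex502 would signal to ex512."""
    return DRIVE_FREQ_TABLE[identification]
```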
  • a plurality of video data that conforms to different standards may be input to the above-described devices and systems such as a television and a mobile phone.
  • the signal processing unit ex507 of the LSI ex500 needs to support a plurality of standards in order to be able to decode even when a plurality of video data complying with different standards is input.
  • however, if signal processing units ex507 corresponding to the individual standards are provided separately, there is a problem that the circuit scale of the LSI ex500 increases and the cost rises.
  • to address this, a configuration is conceivable in which the decoding processing unit for executing the moving picture decoding method shown in each of the above embodiments and a decoding processing unit compliant with a standard such as MPEG-2, MPEG4-AVC, or VC-1 are partly shared.
  • An example of this configuration is shown as ex900 in FIG. 36A.
  • the moving picture decoding method shown in each of the above embodiments and a moving picture decoding method compliant with the MPEG4-AVC standard share some processing contents, such as entropy decoding, inverse quantization, deblocking filtering, and motion compensation.
  • a configuration is conceivable in which the decoding processing unit ex902 corresponding to the MPEG4-AVC standard is shared for the common processing contents, and a dedicated decoding processing unit ex901 is used for the other processing contents specific to one aspect of the present invention that do not correspond to the MPEG4-AVC standard.
  • in particular, since one aspect of the present invention is characterized by its key frame processing, it is conceivable, for example, to use the dedicated decoding processing unit ex901 for the key frame processing and to share a decoding processing unit for any or all of the other processes, namely entropy decoding, inverse quantization, deblocking filtering, and motion compensation.
  • conversely, for the shared processing contents, the decoding processing unit for executing the moving picture decoding method described in each of the above embodiments may be shared, and a dedicated decoding processing unit may be used for the processing contents specific to the MPEG4-AVC standard.
  • ex1000 in FIG. 36B shows another example in which processing is partially shared.
  • in ex1000, the following are used: a dedicated decoding processing unit ex1001 corresponding to the processing contents specific to one aspect of the present invention, a dedicated decoding processing unit ex1002 corresponding to the processing contents specific to another conventional standard, and a common decoding processing unit ex1003 corresponding to the processing contents common to the moving image decoding method according to one aspect of the present invention and the conventional moving image decoding method.
  • the dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for one aspect of the present invention or for another conventional standard, respectively, and may be capable of executing other general-purpose processing.
  • the configuration of the present embodiment can be implemented by LSI ex500.
  • in this way, by sharing a decoding processing unit for the processing contents common to the moving picture decoding method according to one aspect of the present invention and a conventional moving picture decoding method, the circuit scale of the LSI can be reduced and the cost can be lowered.
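The partial sharing of FIG. 36A (shared unit ex902 for the common steps, dedicated unit ex901 for the key-frame handling specific to this invention) might be organized as in this sketch; the step names follow the ones listed above, and the class structure is an assumption for illustration.

```python
class PartiallySharedDecoder:
    """Sketch of ex900 in FIG. 36A: dedicated key-frame handling (ex901)
    plus a decoding unit shared with MPEG4-AVC (ex902)."""

    COMMON_STEPS = ("entropy_decoding", "inverse_quantization",
                    "deblocking_filter", "motion_compensation")

    def decode_frame(self, is_embodiment_keyframe: bool) -> list:
        steps = []
        if is_embodiment_keyframe:
            steps.append("ex901:keyframe_processing")  # dedicated unit
        steps += [f"ex902:{s}" for s in self.COMMON_STEPS]  # shared unit
        return steps
```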
  • the present invention can be applied to an image encoding method, an image decoding method, an image encoding device, and an image decoding device.
  • the present invention can also be used for high-resolution information display devices or imaging devices such as televisions, digital video recorders, car navigation systems, mobile phones, digital cameras, and digital video cameras that include an image encoding device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Provided is an image encoding method that can efficiently encode images, or an image decoding method that can efficiently decode images. The image encoding method is a method for encoding multiple images, and includes: a selection step (S101) for selecting a randomly accessible keyframe (104) from among multiple images; and encoding steps (S103, S204) that perform encoding on the keyframe (104) using inter-prediction that references a keyframe reference picture which is different from the keyframe (104).

Description

Image encoding method, image decoding method, image encoding device, and image decoding device
 The present invention relates to an image encoding method and an image decoding method.
 The widespread use of high-quality video transmission systems employing image coding schemes typified by the ITU-T standards called H.26x and the ISO/IEC standards called MPEG-x is expected. The current latest international standard is a scheme called HEVC (High Efficiency Video Coding) (Non-Patent Document 1).
 As a technique related to encoding using a long-term reference picture, there is the technique described in Patent Document 1.
 When switching among and decoding a plurality of camera videos, or when correctly reproducing camera video from an arbitrary time within a long recorded video, switching or the start of playback is possible only at predetermined frames. Here, a predetermined frame is, for example, an intra-coded (intra-picture coded) I frame in MPEG-4 AVC, HEVC, and the like.
Japanese Patent Laid-Open No. 10-23423
 However, in image encoding or decoding methods according to the related art, frequent video switching requires intra-coded I frames and the like to be encoded at high frequency, which degrades coding efficiency.
 Therefore, an object of the present invention is to provide an image encoding method capable of efficiently encoding images, or an image decoding method capable of efficiently decoding images.
 To achieve the above object, an image encoding method according to one aspect of the present invention is an image encoding method for encoding a plurality of images, and includes: a selection step of selecting a randomly accessible key frame from the plurality of images; and an encoding step of encoding the key frame using inter prediction that references a reference picture different from the key frame.
 An image decoding method according to one aspect of the present invention is an image decoding method for decoding a plurality of images, and includes: a determination step of determining a randomly accessible key frame from among the plurality of images; and a decoding step of decoding the key frame using inter prediction that references a reference picture different from the key frame.
 These general or specific aspects may be implemented as a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
 The present invention can provide an image encoding method capable of efficiently encoding images, or an image decoding method capable of efficiently decoding images.
FIG. 1A is a diagram illustrating an example of an image distribution system.
FIG. 1B is a diagram for explaining a reproduction operation in the image decoding apparatus.
FIG. 2 is a diagram illustrating an example of a conventional code string.
FIG. 3A is a diagram illustrating an example of the operation of the image distribution system.
FIG. 4 is a diagram illustrating an example of a code string according to Embodiment 1.
FIG. 5A is a diagram illustrating an example of a code string according to Embodiment 1.
FIG. 5B is a diagram illustrating an example of a conventional code string.
FIG. 6A is a diagram illustrating an example of a code string according to Embodiment 1.
FIG. 6B is a diagram illustrating an example of a conventional code string.
FIG. 7A is a diagram illustrating an example of a code string according to Embodiment 1.
FIG. 7B is a diagram illustrating an example of a code string according to Embodiment 1.
FIG. 7C is a diagram illustrating an example of a code string according to Embodiment 1.
FIG. 8 is a block diagram of the image coding apparatus according to Embodiment 1.
FIG. 9 is a flowchart of the image encoding process according to Embodiment 1.
FIG. 10 is a flowchart of the key frame encoding process according to Embodiment 1.
FIG. 11 is a flowchart of the image encoding process according to Embodiment 1.
FIG. 12 is a flowchart of the image encoding process according to Embodiment 1.
FIG. 13 is a block diagram of the image decoding apparatus according to Embodiment 2.
FIG. 14 is a flowchart of the image decoding process according to Embodiment 2.
FIG. 15 is a flowchart of the key frame decoding process according to Embodiment 2.
FIG. 16 is a diagram illustrating the configuration of a system according to Embodiment 3.
FIG. 17 is a diagram illustrating the operation of the system according to Embodiment 3.
FIG. 18 is an overall configuration diagram of a content supply system that implements a content distribution service.
FIG. 19 is an overall configuration diagram of a digital broadcasting system.
FIG. 20 is a block diagram illustrating a configuration example of a television.
FIG. 21 is a block diagram illustrating a configuration example of an information reproducing/recording unit that reads and writes information from and to a recording medium that is an optical disk.
FIG. 22 is a diagram illustrating a structure example of a recording medium that is an optical disk.
FIG. 23A is a diagram illustrating an example of a mobile phone.
FIG. 23B is a block diagram illustrating a configuration example of a mobile phone.
FIG. 24 is a diagram showing the structure of multiplexed data.
FIG. 25 is a diagram schematically showing how each stream is multiplexed in the multiplexed data.
FIG. 26 is a diagram showing in more detail how the video stream is stored in the PES packet sequence.
FIG. 27 is a diagram illustrating the structure of TS packets and source packets in multiplexed data.
FIG. 28 is a diagram showing the data structure of the PMT.
FIG. 29 is a diagram showing the internal configuration of the multiplexed data information.
FIG. 30 is a diagram showing the internal structure of stream attribute information.
FIG. 31 is a diagram illustrating steps for identifying video data.
FIG. 32 is a block diagram illustrating a configuration example of an integrated circuit that implements the moving picture coding method and the moving picture decoding method of each embodiment.
FIG. 33 is a diagram illustrating a configuration for switching the drive frequency.
FIG. 34 is a diagram illustrating steps for identifying video data and switching the drive frequency.
FIG. 35 is a diagram illustrating an example of a look-up table in which video data standards are associated with drive frequencies.
FIG. 36A is a diagram illustrating an example of a configuration for sharing a module of a signal processing unit.
FIG. 36B is a diagram illustrating another example of a configuration for sharing a module of a signal processing unit.
 (Underlying Knowledge Forming the Basis of the Present Invention)
 The inventor found that the following problems arise with the image encoding device that encodes images, or the image decoding device that decodes images, described in the "Background Art" section.
 In recent years, technological progress in digital video equipment has been remarkable, and opportunities to compression-encode video signals (a plurality of pictures arranged in chronological order) input from a video camera or television tuner and record the resulting signals on a recording medium such as a DVD or hard disk are increasing. Image coding standards include H.264/AVC (MPEG-4 AVC), and the next-generation standard is the HEVC (High Efficiency Video Coding) standard (Non-Patent Document 1). In addition, Patent Document 1 discloses a technique for improving coding efficiency by storing an image over the long term and using it as a reference picture.
 The HEVC standard (Non-Patent Document 1) provides a mechanism called long-term reference pictures so that the technique described in Patent Document 1 can be used. By designating a decoded image as a long-term reference picture, the decoded image is kept in the frame memory over the long term, which allows subsequent images to reference that long-term reference picture during decoding.
 However, when video is recorded for a long time, or when a plurality of videos are recorded, the technique of Patent Document 1 has the problem that starting playback from an arbitrary time, or switching playback among the plurality of videos, either requires a long wait before playback starts or does not improve coding efficiency.
 More specifically, it is necessary to decode from an intra-coded I frame up to the playback start point. Therefore, to achieve playback start or video switching in a shorter time, I frames, which generally have poor coding efficiency, must be encoded at high frequency, so coding efficiency does not improve.
 FIG. 1A and FIG. 1B are schematic diagrams for explaining the problem to be solved by the present embodiment. FIG. 1A shows a case where videos from a plurality of cameras are switched as appropriate and played back by a playback terminal (decoding device).
 FIG. 1A shows an example of code strings (bitstreams) 103A to 103C output by image encoding devices 102A to 102C for videos captured by cameras 101A to 101C, respectively. The code strings 103A to 103C are composed of key frames 104 (KeyFrame), which serve as decoding start points, and normal frames 105, which are frames other than key frames.
 In the prior art, the key frame 104 is an intra-prediction-coded frame (intra prediction frame). In an image decoding apparatus 106 that is a portable device such as a tablet terminal or smartphone, the transmission band is constrained. Therefore, the image decoding apparatus 106 does not receive all of the code strings 103A to 103C transmitted from the image encoding devices 102A to 102C simultaneously; instead, the user selects the video to view (display or play), and only that video's stream is played. For example, when the image decoding apparatus 106 plays the code string 103A, the code string 103B, and the code string 103C in that order, as shown in FIG. 1B, it can switch only at the granularity of key frames 104.
 この動作を、従来の符号列103の構成を示す図2を用いて説明する。ここでキーフレーム104はイントラ予測フレーム(I)である。また、ピクチャ内の「I」は、イントラ予測が用いられるIフレーム(イントラ予測フレーム)を示し、「P」は、1枚のフレームのみが参照される単方向予測が用いられるPフレームを示す。ピクチャ内の()内の数値は、処理順(表示順)を示す。また、矢印は参照関係を示し、矢印の元のピクチャが矢印の先のピクチャの予測処理に用いられる(参照される)ことを示す。 This operation will be described with reference to FIG. 2 showing the configuration of the conventional code string 103. Here, the key frame 104 is an intra prediction frame (I). In addition, “I” in a picture indicates an I frame (intra prediction frame) in which intra prediction is used, and “P” indicates a P frame in which unidirectional prediction in which only one frame is referenced is used. Numerical values in parentheses in the picture indicate processing order (display order). An arrow indicates a reference relationship, and indicates that the original picture of the arrow is used (referenced) for the prediction process of the picture after the arrow.
 図2に示すように、キーフレーム104以外の通常フレーム105、例えば3番目のフレームP(3)から映像を再生する場合を想定する。この場合、フレームP(3)はフレームP(2)がなければ復号できず、フレームP(2)はフレームP(1)がなければ復号できず、フレームP(1)はフレームI(0)がなければ復号できないという関係にある。よって、フレームP(3)から映像を再生することはできない。 As shown in FIG. 2, it is assumed that a video is reproduced from a normal frame 105 other than the key frame 104, for example, the third frame P (3). In this case, frame P (3) cannot be decoded without frame P (2), frame P (2) cannot be decoded without frame P (1), and frame P (1) is frame I (0). It is in a relationship that cannot be decrypted without it. Therefore, the video cannot be reproduced from the frame P (3).
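The dependency chain just described can be sketched as follows. This is an illustrative model only, not part of the claimed invention; the frame names and the single-reference mapping are hypothetical simplifications of the structure in FIG. 2.

```python
# Illustrative sketch: each frame maps to the single frame it references;
# None means the frame is intra-coded (no reference), as with I(0) in FIG. 2.
references = {"I(0)": None, "P(1)": "I(0)", "P(2)": "P(1)", "P(3)": "P(2)"}

def frames_needed_to_decode(frame):
    """Return every frame that must be decoded, in decoding order."""
    chain = []
    while frame is not None:
        chain.append(frame)
        frame = references[frame]
    return list(reversed(chain))

# Starting playback at P(3) requires decoding the whole chain back to I(0).
print(frames_needed_to_decode("P(3)"))  # ['I(0)', 'P(1)', 'P(2)', 'P(3)']
```

The chain grows with the distance from the last key frame, which is why playback cannot begin at an arbitrary normal frame.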
For this reason, enabling frequent switching among multiple camera videos requires key frames 104 to be inserted frequently into the bitstream 103. However, the intra prediction frames used as key frames 104 in the prior art are generally known to have lower coding efficiency than inter (inter-picture) prediction frames, because an intra prediction frame cannot exploit the temporal continuity of a moving image for compression. Here, an inter prediction frame is a frame encoded using inter prediction, which references other frames. Furthermore, for cameras in fixed positions, such as surveillance cameras, inter prediction is highly effective, so intra prediction frames are known to account for a very large share of the data volume of the bitstream 103. It is therefore important to increase the number of playback start points (key frames) while reducing the number of intra prediction frames.
A similar problem exists for long videos, as explained using FIG. 3. The image encoding device 102 generates a bitstream 103 by encoding a long video, for example footage captured by a 24-hour surveillance camera. In this case, the image decoding device 106 must jump to a decoding start point in the bitstream 103 when playing back the video. For example, if there is only one key frame 104 per hour, the image decoding device 106 may, in the worst case, have to decode an hour of data before it can play back the target scene. To prevent this, it is known to insert key frames 104 as arbitrary playback points. However, as noted above, a key frame 104 implemented as a conventional intra prediction frame has poor coding efficiency, so the ratio of intra prediction frame data to the total data volume is known to be large, especially for long recordings.
The present embodiment solves the above problems by inter-predicting the key frames required in both cases, thereby improving coding efficiency. This embodiment describes an image encoding method and an image decoding method that can efficiently generate or decode a bitstream that shortens the time needed to switch among multiple videos, or the time needed to start playback of a long video at an arbitrary point.
An image encoding method according to one aspect of the present invention is an image encoding method for encoding a plurality of images, including: a selection step of selecting, from the plurality of images, a randomly accessible key frame; and an encoding step of encoding the key frame using inter prediction that references a reference picture different from the key frame.
Accordingly, since the key frame is encoded with inter prediction, coding efficiency can be improved compared with encoding the key frame with intra prediction.
For example, the image encoding method may further include an information encoding step of encoding information for identifying, among the plurality of images, the key frame encoded using inter prediction.
For example, the reference picture may be a picture that is not adjacent to the key frame in decoding order or display order.
For example, the reference picture may be a long-term reference picture.
For example, in the selection step, a plurality of key frames including the key frame may be selected from the plurality of images, and in the encoding step, a target key frame included in the plurality of key frames may be encoded with reference to another of the plurality of key frames.
Accordingly, since the number of images needed to decode a key frame can be reduced, the time until video is displayed during random-access playback can be reduced.
For example, the reference picture may be an image acquired via a network.
Accordingly, since the number of images usable as reference pictures can be increased, coding efficiency can be improved.
For example, the reference picture may be an image encoded by an encoding method different from that of the key frame.
Accordingly, since the number of images usable as reference pictures can be increased, coding efficiency can be improved.
For example, in the encoding step, a background region of the key frame may be determined, and for the background region, information for identifying the reference picture may be encoded while image information of the background region is not encoded.
Accordingly, the amount of encoded data for the background region can be reduced.
For example, in the encoding step, a degree of similarity between the key frame and the reference picture may be determined, and when the similarity is greater than or equal to a predetermined value, information for identifying the reference picture may be encoded while image information of the key frame is not encoded.
Accordingly, the amount of encoded data can be reduced.
An image decoding method according to one aspect of the present invention is an image decoding method for decoding a plurality of images, including: a determination step of determining, from the plurality of images, a randomly accessible key frame; and a decoding step of decoding the key frame using inter prediction that references a reference picture different from the key frame.
Accordingly, since the key frame is encoded with inter prediction, coding efficiency can be improved compared with encoding the key frame with intra prediction.
For example, the image decoding method may further include an information decoding step of decoding information for identifying, among the plurality of images, a key frame encoded using inter prediction; the determination step may further determine, based on the information, whether the key frame is a key frame encoded using inter prediction; and the decoding step may decode the key frame encoded using inter prediction by using inter prediction.
For example, the reference picture may be a picture that is not adjacent to the key frame in decoding order or display order.
For example, the reference picture may be a long-term reference picture.
For example, in the determination step, a plurality of key frames including the key frame may be determined from the plurality of images, and in the decoding step, a target key frame included in the plurality of key frames may be decoded with reference to another of the plurality of key frames.
Accordingly, since the number of images needed to decode a key frame can be reduced, the time until video is displayed during random-access playback can be reduced.
For example, the reference picture may be an image acquired via a network.
Accordingly, since the number of images usable as reference pictures can be increased, coding efficiency can be improved.
For example, the reference picture may be an image encoded by an encoding method different from that of the key frame.
Accordingly, since the number of images usable as reference pictures can be increased, coding efficiency can be improved.
For example, in the decoding step, for a background region of the key frame, information for identifying the reference picture may be decoded, and the reference picture identified by the information may be output as the image of the background region.
Accordingly, the amount of encoded data for the background region can be reduced.
For example, in the decoding step, information for identifying the reference picture may be decoded, and the reference picture identified by the information may be output as the key frame.
Accordingly, the amount of encoded data can be reduced.
For example, the image decoding method may further include an acquisition step of acquiring, from a storage device storing a plurality of images, an image that matches the shooting conditions of a designated image, and a storage step of storing the acquired image as the reference picture.
Accordingly, reference pictures that are likely to be needed can be selectively acquired.
For example, the shooting conditions may be the time at which an image was captured, the place where it was captured, or the weather at the time of capture.
An image encoding device according to one aspect of the present invention is an image encoding device that encodes a plurality of images, including: a selection unit that selects, from the plurality of images, a randomly accessible key frame; and an encoding unit that encodes the key frame using inter prediction that references a reference picture different from the key frame.
Accordingly, since the key frame is encoded with inter prediction, coding efficiency can be improved compared with encoding the key frame with intra prediction.
An image decoding device according to one aspect of the present invention is an image decoding device that decodes a plurality of images, including: a determination unit that determines, from the plurality of images, a randomly accessible key frame; and a decoding unit that decodes the key frame using inter prediction that references a reference picture different from the key frame.
Accordingly, since the key frame is encoded with inter prediction, coding efficiency can be improved compared with encoding the key frame with intra prediction.
Note that these general or specific aspects may be realized as a system, a method, an integrated circuit, a computer program, or a recording medium such as a computer-readable CD-ROM, or as any combination of a system, a method, an integrated circuit, a computer program, and a recording medium.
Embodiments are described in detail below with reference to the drawings. Each of the embodiments described below is a general or specific example. The numerical values, shapes, materials, components, component arrangements and connections, steps, order of steps, and the like shown in the following embodiments are examples and are not intended to limit the present invention.
In this embodiment, "frame" may be used interchangeably with "picture" or "image". Further, a frame (picture or image) to be encoded or decoded may be called a current picture or an access frame. Various other terms commonly used in the codec field may also be substituted for these.
Embodiments of the present invention are described below with reference to the drawings.
(Embodiment 1)
This embodiment describes an image encoding device and an image encoding method that enable switching among videos from a plurality of cameras as appropriate, or playback of a long video from an arbitrary point, while achieving highly efficient encoding.
A bitstream 250 output by the image encoding device 200 according to this embodiment is described using FIG. 4. FIG. 4 shows an example of the bitstream 250 output by the image encoding device 200, corresponding to FIG. 2 of the conventional configuration described above. As in FIG. 2, arrows indicate reference relationships, and the letters and numbers in each picture have the same meanings as in FIG. 2.
As in FIG. 2, playback cannot start from a normal frame 105 other than a key frame 104. In FIG. 2 the key frame 104 was an I frame, but in this embodiment a frame other than an I frame may be set as a key frame 104. Specifically, in the example shown in FIG. 4, the frame P(t), which is a key frame 104, is a P frame.
This frame P(t) makes a long-term reference to frame I(0). A frame that can be used for long-term reference, like frame I(0), is called a long-term reference picture. A long-term reference picture is held in the memory of the image encoding device and the image decoding device and can be referenced at any time.
The only image needed to start playback from frame P(t) is frame I(0); as long as frame I(0) is stored in the frame memory, the image decoding device can start playback from frame P(t).
Thus, in this embodiment, a P frame that uses long-term reference is used as the key frame 104. This improves coding efficiency compared with using an I frame as the key frame 104. Moreover, even when the images near the key frame 104 have not been decoded, the key frame 104 can be decoded by decoding only the long-term reference picture. In this way, the number of images that must be decoded when a P frame is set as the key frame 104 can be kept small.
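The contrast with the conventional dependency chain can be sketched as follows; this is an illustrative model only (the frame names are hypothetical), not part of the claimed invention.

```python
# Illustrative sketch: each frame maps to the list of frames it references.
# The key frame P(t) references only the long-term reference picture I(0),
# not the normal frames P(1), P(2), ... between them.
references = {
    "I(0)": [],          # long-term reference picture
    "P(1)": ["I(0)"],
    "P(2)": ["P(1)"],
    "P(t)": ["I(0)"],    # key frame: long-term reference only
}

def decode_set(frame):
    """Return the set of frames that must be decoded to decode `frame`."""
    needed, stack = set(), [frame]
    while stack:
        f = stack.pop()
        if f not in needed:
            needed.add(f)
            stack.extend(references[f])
    return needed

# Starting at the key frame P(t) requires only I(0) and P(t) itself.
print(sorted(decode_set("P(t)")))  # ['I(0)', 'P(t)']
```

The decoder thus never needs the intervening normal frames to begin playback at the inter-predicted key frame.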
Next, a case where the key frame 104, which is a P frame, is used as a long-term reference picture is described. FIG. 5A shows an example of the bitstream 250 according to this embodiment. FIG. 5B is provided for comparison and shows an example of a conventional bitstream 103.
For example, a long-term reference picture is used when the background remains the same for a long time, as in surveillance camera footage. In FIGS. 5A and 5B, each normal frame 105 references the long-term reference picture rather than the immediately preceding frame. Compared with the configurations shown in FIGS. 2 and 4, this lowers coding efficiency but improves error resilience.
For example, if frame P(2) cannot be decoded because of a communication error or data corruption, the configurations in FIGS. 2 and 4 cannot decode frames P(3) through P(t-1). In the configurations in FIGS. 5A and 5B, on the other hand, frame P(2) is not used for predicting other frames even when it cannot be decoded. Thus, as long as frame I(0) has been decoded correctly, the image decoding device can correctly decode frames P(3) through P(t-1).
Even with this configuration, as shown in FIG. 5B, the conventional bitstream 103 periodically uses an intra prediction frame (key frame 104) as the long-term reference picture. This allows the long-term reference picture to be updated as appropriate, accommodating changes in static regions such as the background.
In contrast, the bitstream 250 according to this embodiment, shown in FIG. 5A, uses in place of the intra prediction frame an inter prediction frame P(t) (key frame 104) that references only the long-term reference picture I(0). As with the configuration in FIG. 4, this improves coding efficiency.
FIGS. 4 and 5A illustrate the case where only P frames are used as normal frames 105. An inter prediction frame generally achieves better coding efficiency when it references temporally nearby frames. Using only P frames therefore reduces the processing load and the delay while limiting the loss in coding efficiency.
The following describes the use of multi-prediction frames (B frames) that reference both a long-term reference picture and an adjacent frame. Using B frames can further increase coding efficiency.
FIG. 6A shows an example of the bitstream 250 according to this embodiment. FIG. 6B is provided for comparison and shows an example of a conventional bitstream 103.
In FIGS. 6A and 6B, each normal frame 105 references the long-term reference picture and the immediately preceding frame.
In the conventional bitstream 103 shown in FIG. 6B, in order to keep the B frames consecutive, the frame B(t), which is a long-term reference picture, cannot be set as a key frame 104 and is instead set as a normal frame 105. In this case as well, increasing the number of key frames 104 requires using intra prediction frames I as key frames 104, as in FIG. 2 or FIG. 5B.
In the bitstream 250 according to this embodiment, shown in FIG. 6A, an inter prediction frame P(t) is instead used as the key frame 104. This improves accessibility while maintaining coding efficiency.
The difference in role between the P frames used as normal frames 105, shown in FIG. 5B and elsewhere, and the P frames used as key frames, shown in FIG. 4 and elsewhere, is now explained in more detail using FIG. 7A. FIG. 7A is a schematic diagram for explaining processing in the image encoding device and the image decoding device according to this embodiment. The bitstream shown in FIG. 7A has the same structure as the bitstream 250 shown in FIG. 4; frames I(0), P(8), and I(16) are therefore treated as key frames 104.
In this case, the image decoding device can start decoding after the transmission delay. The image decoding device can decode frame P(8) after decoding frame I(0), or can start decoding from frame I(16). When playback points are wanted at high frequency, coding efficiency can be increased by inserting P-frame key frames 104 such as frame P(8).
As shown in FIG. 7B, key frames 104 may also be stored in a storage 400. The bitstream structure is the same as in FIG. 7A. In the example shown here, frame I(16) is the same image as frame I(0).
Long videos captured by a surveillance camera or the like contain little motion, which makes this approach feasible. In this case, the bitstream information for frame I(16) may be parameter information indicating that it is identical to frame I(0), further increasing coding efficiency.
In FIG. 7B, a frame I(0) representing, for example, background information is additionally stored in the storage 400 of the image decoding device. For example, the image decoding device acquires frame I(0) (for example, by downloading it via a network) and stores it. The image decoding device can then decode frame P(8) by referencing the stored image of frame I(0). This makes it possible to add playback points (add key frames) while increasing coding efficiency.
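The role of the storage 400 in this decoding path can be sketched as follows. This is an illustrative model only; the class, identifiers, and the scalar "images" are hypothetical stand-ins, and the simple addition stands in for motion-compensated prediction.

```python
# Illustrative sketch: the decoder resolves a key frame's reference either
# from frames it has already decoded or from local storage 400 (e.g. a
# background image downloaded in advance).
class KeyFrameDecoder:
    def __init__(self, storage):
        self.storage = storage   # e.g. {"I(0)": background_image}
        self.decoded = {}

    def resolve_reference(self, ref_id):
        if ref_id in self.decoded:
            return self.decoded[ref_id]
        if ref_id in self.storage:   # available without decoding the stream
            return self.storage[ref_id]
        raise LookupError(f"reference {ref_id} unavailable; cannot start here")

    def decode_key_frame(self, frame_id, ref_id, residual):
        ref = self.resolve_reference(ref_id)
        image = ref + residual   # stand-in for motion-compensated prediction
        self.decoded[frame_id] = image
        return image

dec = KeyFrameDecoder(storage={"I(0)": 100})    # background stored in advance
print(dec.decode_key_frame("P(8)", "I(0)", 5))  # 105: no earlier frames needed
```

Because the reference is resolved from storage, frame P(8) is decodable without receiving or decoding any of the frames that precede it in the stream.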
When switching among multiple input videos, storing a reference image (for example, a background image) for each input video in the storage 400 makes it possible to switch the referenced image at the granularity of the P-frame key frames 104, enabling high-quality video switching even over a narrow-band network.
Further, FIG. 7C shows the case where images of multiple key frames 104, or key frame reference pictures serving as reference images for key frames 104, are stored in the storage 400, or where multiple key frames 104 or key frame reference pictures are accessible via a cloud or network.
In FIG. 7C, a B frame is set as the key frame 104. Specifically, the image decoding device decodes frame B(8), which is a key frame 104, using multiple key frame reference pictures stored in the storage 400 or acquired via a network. Because the image decoding device can thus start decoding from frame B(8), switching among multiple camera videos or jump-in playback can be realized.
Although FIGS. 7A and 7B show the transmission delay incurred when data is transmitted from the image encoding device to the image decoding device, no transmission delay occurs when, for example, the data is already in storage accessible to the image decoding device.
The names and reference relationships of the I, P, and B frames described here are not limiting. For example, a frame described as a B frame may be encoded as a P frame depending on its relationship to surrounding frames, and a frame described as a P frame may be encoded as a B frame. For example, when the key frame 104 is encoded as a B frame as shown in FIG. 7C, the storage 400 need not be used; in that case, the B frame serving as the key frame 104 may, for example, reference only multiple past key frames 104.
Further, although in the description above each normal frame 105 references only the immediately preceding frame, it may reference the frame two positions earlier, and bidirectional prediction or multi-hypothesis prediction may be used.
As described above, this embodiment uses P frames for at least some of the key frames for which I frames were used in the prior art. Because this improves the coding efficiency of key frames, the loss of coding efficiency incurred by increasing the number of key frames (random access points) can be suppressed.
Here, a key frame is a randomly accessible frame, in other words, a frame from which the image decoding device can start playback (decoding or display). Information indicating which of the images included in the video are key frames is included in the bitstream, for example.
A key frame is inter-predicted with reference only to key frame reference pictures. A key frame reference picture differs from an ordinary reference picture in that, for example, it is not adjacent to the target key frame in decoding (encoding) order or display order. Specifically, a key frame reference picture is a long-term reference picture.
A key frame reference picture is a decoded (encoded) key frame, or an image acquired via a network.
 Next, the image encoding device 200 that generates the code string 250 described above will be described with reference to FIG. 8. FIG. 8 is a block diagram showing the configuration of the image encoding device 200 according to this embodiment.
 The image encoding device 200 generates a code string 250 by encoding moving image data 251. The image encoding device 200 includes a prediction unit 201, a subtraction unit 202, a transform/quantization unit 203, a variable-length coding unit 204, an inverse-quantization/inverse-transform unit 205, an addition unit 206, a prediction control unit 207, a selection unit 208, and a frame memory 212.
 The prediction unit 201 generates a predicted image 257 from the target image included in the moving image data 251 and the reference image 256 selected by the selection unit 208, and outputs the generated predicted image 257 to the subtraction unit 202 and the addition unit 206. The prediction unit 201 also outputs a prediction parameter 258, the parameter used to generate the predicted image 257, to the variable-length coding unit 204.
 The subtraction unit 202 calculates a difference signal 252, the difference between the target image included in the moving image data 251 and the predicted image 257, and outputs the calculated difference signal 252 to the transform/quantization unit 203. The transform/quantization unit 203 transforms and quantizes the difference signal 252 to generate a quantized signal 253, and outputs the generated quantized signal 253 to the variable-length coding unit 204 and the inverse-quantization/inverse-transform unit 205.
 The variable-length coding unit 204 generates the code string 250 by variable-length coding the quantized signal 253 and the prediction parameters 258 output from the prediction unit 201 and the prediction control unit 207. The prediction parameters 258 include, for example, information indicating the prediction method used, the prediction mode, and the reference picture.
 Meanwhile, the inverse-quantization/inverse-transform unit 205 inverse-quantizes and inverse-transforms the quantized signal 253 to generate a decoded difference signal 254, and outputs the generated decoded difference signal 254 to the addition unit 206. The addition unit 206 generates a decoded image 255 by adding the predicted image 257 and the decoded difference signal 254, and outputs the generated decoded image 255 to the frame memory 212.
 The frame memory 212 includes a key frame memory 209 that stores key frame reference pictures, which are long-term reference pictures for key frames; a neighboring frame memory 210 that stores other already-decoded, referenceable images; and an intra-frame memory 211 that stores partially decoded images of the image currently being encoded. These memories are controlled by the prediction control unit 207, and the reference image needed to create the predicted image 257 is output to the selection unit 208.
 The prediction control unit 207 determines, based on the moving image data 251, which memory's stored reference image to use. The key frame memory 209 may store not only decoded images 255 but also image data 259 separately acquired from outside as reference images.
 Although an example in which the frame memory 212 includes three types of memory has been described here, an actual implementation need not provide separate memory spaces; for example, the reference images may all be stored in the same memory. That is, the frame memory 212 need only be configured to output any of the reference images based on an instruction from the prediction control unit 207.
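 The arrangement above, in which three logical reference pools share one physical store, can be sketched as follows. This is a minimal illustration under assumed names (`FrameMemory`, `store`, `fetch`), not the patent's implementation.

```python
class FrameMemory:
    """One physical store partitioned into three logical reference pools,
    standing in for the key frame memory 209, neighboring frame memory 210,
    and intra-frame memory 211 (hypothetical sketch)."""

    POOLS = ("keyframe", "neighbor", "intra")

    def __init__(self):
        # All reference images live in one dict; each entry records
        # which logical pool it belongs to.
        self._store = {}

    def store(self, pool, picture_id, image):
        if pool not in self.POOLS:
            raise ValueError(f"unknown pool: {pool}")
        self._store[picture_id] = (pool, image)

    def fetch(self, picture_id):
        # The prediction control side asks for a reference by id;
        # the physical location is irrelevant to the caller.
        _pool, image = self._store[picture_id]
        return image

    def pool_ids(self, pool):
        return [pid for pid, (p, _) in self._store.items() if p == pool]
```

 Because selection goes through `fetch`, a caller cannot tell whether the pools are separate memories or one shared buffer, which is the point made in the paragraph above.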
 For the generation of the predicted image 257 in the prediction unit 201, and for the processing in the transform/quantization unit 203, the inverse-quantization/inverse-transform unit 205, and the variable-length coding unit 204, the scheme described in Non-Patent Document 1, for example, can be used. Other video coding schemes may also be used for these processes.
 Next, the flow of the image encoding process performed by the image encoding device 200 will be described. FIG. 9 is a flowchart of the operations involving the prediction control unit 207, the selection unit 208, and the frame memory 212.
 First, the prediction control unit 207 selects, from among the images included in the moving image data 251, the images to be set as key frames (S101). Specifically, the prediction control unit 207 sets a frame located at an access point, that is, a randomly accessible point, as a key frame. For example, the desired access frequency (how often one wants to switch or jump into the stream) is set in advance, and the prediction control unit 207 sets a key frame every several frames according to this frequency. Alternatively, the prediction control unit 207 may set as a key frame an image in which an object has moved greatly (the image contains large motion), or an image containing a previously specified object. The prediction control unit 207 may also combine both methods.
 By automatically inserting key frames as access points in this way, multiple videos can be switched smoothly. Also, inserting a key frame as an access point when an object appears makes it possible to quickly display the image showing the object of interest, which is very useful.
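 The two selection criteria of step S101, a preset access frequency and large motion, can be combined as in the sketch below. The per-frame motion score, the parameter names, and the tie-breaking behavior are assumptions for illustration.

```python
def select_key_frames(motion_scores, interval, motion_threshold):
    """Return indices of frames chosen as key frames (access points).

    A frame becomes a key frame either because the preset access
    frequency is due (every `interval` non-key frames) or because its
    motion score exceeds `motion_threshold` (large movement or an
    object event). Frame 0 is always a key frame so that decoding
    has a starting point.
    """
    key_frames = []
    since_last = interval  # force frame 0 to be a key frame
    for i, motion in enumerate(motion_scores):
        if since_last >= interval or motion > motion_threshold:
            key_frames.append(i)
            since_last = 0
        else:
            since_last += 1
    return key_frames
```

 With a purely static score sequence only the periodic rule fires; a motion spike inserts an extra access point and restarts the periodic counter.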
 The image encoding device 200 may also acquire key frame reference pictures from another device or the like and store them in the key frame memory 209. In this case, the image encoding device 200 compares the moving image data 251 with the key frame reference pictures stored in the other device, acquires a key frame reference picture with high similarity, and stores the acquired key frame reference picture in the key frame memory 209.
 The image encoding device 200 also encodes information indicating which images are key frames (information for identifying key frames), thereby generating a code string 250 that includes this information. The information for identifying key frames distinguishes conventional intra-predicted key frames from the inter-predicted key frames of this embodiment. That is, this information indicates whether each key frame was encoded using intra prediction or using inter prediction.
 Next, the image encoding device 200 encodes each image. First, the image encoding device 200 determines whether the target image, the image to be encoded in the moving image data 251, is a key frame (S102). If the target image is a key frame (YES in S102), the image encoding device 200 encodes it using the key frame encoding process (S103).
 The flow of the key frame encoding process will be described with reference to FIG. 10. FIG. 10 is a flowchart showing the operation of step S103.
 The image encoding device 200 compares the target image with the key frame reference pictures stored in the key frame memory 209 to determine whether a key frame reference picture similar to the target image exists (S201). Here, "similar" means that the difference is less than a predetermined value.
 If a key frame reference picture similar to the target image exists (YES in S202), the prediction unit 201 inter-prediction-encodes the target image with reference to the similar key frame reference picture stored in the key frame memory 209 (S203).
 If no key frame reference picture similar to the target image exists (NO in S202), the prediction unit 201 intra-prediction-encodes the target image (S204).
 The image encoding device 200 also decodes the key frame after encoding it, and stores the resulting decoded image in the key frame memory 209 as a key frame reference picture (S205).
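 Steps S201 through S205 can be sketched as below, with frames modeled as flat lists of pixel values and similarity measured as mean absolute difference against the predetermined threshold. The names, the metric, and the assumption of lossless reconstruction in S205 are illustrative, not the patent's normative method.

```python
def mean_abs_diff(a, b):
    """Per-pixel mean absolute difference between two equal-size frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def encode_key_frame(target, keyframe_memory, threshold):
    """Encode one key frame (S201-S205 of FIG. 10, sketched).

    Returns the chosen mode and the reference id (None for intra).
    The reconstructed key frame is stored back as a new key frame
    reference picture (S205); reconstruction is taken as identical
    to the input here for simplicity.
    """
    # S201: look for a similar key frame reference picture.
    best_id, best_diff = None, threshold
    for ref_id, ref in keyframe_memory.items():
        diff = mean_abs_diff(target, ref)
        if diff < best_diff:
            best_id, best_diff = ref_id, diff
    # S202-S204: inter-predict from the similar reference, else intra.
    mode = "inter" if best_id is not None else "intra"
    # S205: store the reconstruction as a new reference picture.
    keyframe_memory[len(keyframe_memory)] = list(target)
    return mode, best_id
```

 A target close to a stored reference is inter-coded against it; an unrelated target falls back to intra coding, and in both cases the memory grows by one reference.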
 In the description here, the target image is compared with the key frame reference pictures stored in the key frame memory 209, but it may instead be compared with images that can be acquired and then stored in the key frame memory 209. For example, the image encoding device 200 may determine whether any reference image obtainable via a network is similar to the target image; in that case, the prediction unit 201 performs inter prediction encoding using that reference image. In this case, information indicating that an image obtainable via a network or the like is used as a reference image, together with information identifying the reference image to be used, is included in the code string 250 and sent to the image decoding apparatus. Doing so further improves coding efficiency while notifying the image decoding apparatus which similar reference image to use. At decoding time, the image decoding apparatus can then use this information to acquire the reference image via the network and decode the image using it.
 Also, intra prediction encoding is performed here when no similar key frame reference picture exists, but in addition to this, intra prediction encoding may be performed at a predetermined frequency.
 Furthermore, although all key frames are used here as key frame reference pictures, only some of the key frames may be used as key frame reference pictures.
 Returning to FIG. 9: if the target image is a normal frame rather than a key frame (NO in S102), the image encoding device 200 performs a normal encoding process that encodes the target image with reference to neighboring pictures (S104). That is, the image encoding device 200 searches the frame memory 212 for the reference image that yields the highest compression efficiency, without regard to decoding order and the like, and performs inter prediction encoding with reference to the obtained reference image. In other words, unlike the key frame encoding process described above, this encoding process does not take accessibility into account. In an environment where low delay is required, the method with the best coding efficiency need not necessarily be selected; for example, the immediately preceding frame may simply be referenced.
 When all the images included in the moving image data 251 have been processed (YES in S105), the image encoding device 200 ends the processing. If input of the moving image data 251 continues (NO in S105), the image encoding device 200 performs the processing from step S102 onward on the next image.
 The code string 250 generated by the image encoding device 200 also includes information indicating which images are key frames, or indicating access points (information for identifying key frames). The code string 250 further includes information indicating whether intra prediction or inter prediction using a key frame reference picture was used for each key frame, together with information indicating which key frame reference picture was used. The code string 250 may also include information indicating whether each key frame reference picture is an image included in the code string 250 or one acquired via a network. Furthermore, when a key frame reference picture is acquired via a network, identification information for that key frame reference picture, or information indicating its storage location, is included so that the image decoding apparatus can acquire it. At least some of this information may instead be conveyed to the image decoding apparatus by a signal separate from the code string 250.
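 The per-key-frame signaling just listed might be carried as a small header record like the following sketch. All field names are hypothetical, and a real bitstream would entropy-code these fields rather than store a Python object.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KeyFrameSignal:
    """Header-level information for one key frame (hypothetical fields).

    - is_key_frame / uses_inter: identifies an access point and whether
      the key frame was intra- or inter-predicted.
    - ref_picture_id: which key frame reference picture was used.
    - ref_in_stream: True if the reference picture is an image inside the
      code string 250, False if it must be fetched via a network.
    - ref_location: identifier or storage location of a network-fetched
      reference picture, so the decoder can retrieve it.
    """
    is_key_frame: bool
    uses_inter: bool = False
    ref_picture_id: Optional[int] = None
    ref_in_stream: bool = True
    ref_location: Optional[str] = None

    def needs_network_fetch(self):
        return self.uses_inter and not self.ref_in_stream
```

 A decoder would inspect `needs_network_fetch()` before decoding the key frame, retrieving the reference picture from `ref_location` when necessary.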
 Note that for video in which a region that can be regarded as background and a region that can be regarded as foreground containing moving objects can be distinguished, such as footage from a fixed camera, the image encoding device 200 may perform the following processing. FIG. 11 is a flowchart showing the processing flow in this case.
 First, the image encoding device 200 compares each region of the target image with the background image and determines whether each region is a background region or a foreground region (S301). Specifically, the image encoding device 200 compares the target image with a frame predetermined as the background image, determines regions whose similarity to the background image is at or above a predetermined value to be background regions, and determines regions whose similarity is below that value to be foreground regions. Alternatively, the image encoding device 200 calculates the average amount of change between the target image and an image from a certain time earlier, determines regions with a small average change to be background, and determines regions with a large change to be foreground.
 When the target region, the region of the target image being processed, is a background region (YES in S302), the image encoding device 200 encodes only information identifying the key frame reference picture serving as the background image, for example only the ID of the key frame reference picture, and does not encode the image information (difference signal and the like) of that background region (S303). For example, the image encoding device 200 uses a prediction mode that indicates only the background image, which greatly reduces the amount of information. Specifically, the image encoding device 200 designates the background image (key frame reference picture) as the reference image in skip mode. That is, the image encoding device 200 does not encode the difference signal between the target image and the background image, and encodes only the information designating the background image. The image decoding apparatus then outputs (displays) the background image designated by this information as-is as the image of the background region.
 On the other hand, when the target region is a foreground region rather than a background region (NO in S302), normal encoding processing is performed (S304). Here, normal encoding processing refers to the predictive coding generally used in video coding, for example performing prediction using a nearby reference image and encoding the difference signal, or encoding the difference signal with respect to the background image; it is similar to the processing of step S104 shown in FIG. 9.
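 Steps S301 through S304 can be sketched per region as follows, with regions as flat pixel lists and similarity again taken as mean absolute difference. The threshold, the metric, and the output format (a "skip" record carrying only the background-picture ID, versus a residual for normal coding) are illustrative assumptions.

```python
def encode_regions(regions, background, bg_id, threshold):
    """Classify each region as background or foreground (S301) and emit
    either a skip record naming only the background picture (S303) or
    a residual standing in for normal predictive coding (S304)."""
    out = []
    for region, bg_region in zip(regions, background):
        diff = sum(abs(a - b) for a, b in zip(region, bg_region)) / len(region)
        if diff < threshold:                 # background region (S302 YES)
            out.append(("skip", bg_id))      # ID only, no difference signal
        else:                                # foreground region (S302 NO)
            residual = [a - b for a, b in zip(region, bg_region)]
            out.append(("residual", residual))
    return out
```

 A region that barely differs from the background (for example, grass moving slightly in the wind) costs only a reference ID, while a region with a moving object carries its residual as usual.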
 Note that the foreground region need not lie in front of the background region. "Background" here refers to a region with no more than a certain amount of change, where such regions occupy at least a certain extent of the image; "foreground" may refer to a region with more than a certain amount of change. In other words, a background region is one where, because the amount of change is small, subjective image quality is hardly affected even if the difference signal used in video coding is not sent. A foreground region is one containing content that differs from the background image or from the image a certain time earlier used for comparison, so that if the difference image used in video coding is not sent, the decoded image deviates greatly from the original image.
 By forcibly using the background image for regions determined to be background in this way, subjective image quality can be improved and coding efficiency increased, because differences caused by subtle motion, such as wind in footage from a fixed camera, are not encoded. Moreover, in this embodiment, using multiple images referenceable via a network as key frame reference pictures (background images) increases the regions that can be determined to be background. This allows long stretches of video to be recorded with an even smaller amount of code, and allows video to be transmitted even over very narrow-band networks such as wireless environments.
 Here, background and foreground are determined for each region within a frame and the processing is switched accordingly, but the determination and switching may instead be done per frame. In that case, if an entire frame can be determined to be background, the code amount can be greatly reduced; if the frame is determined not to be background, a normal encoding method can be used. This simplifies the processing of the image encoding device 200.
 FIG. 12 is a flowchart showing the operation flow in this case. First, the image encoding device 200 compares the target image with the key frame reference pictures stored in the key frame memory 209 or obtainable via a network, and determines whether a key frame reference picture similar to the target image (for example, a background image) exists (S401). Specifically, the image encoding device 200 determines the similarity between the target image and each key frame reference picture, and determines a key frame reference picture whose similarity is at or above a predetermined value to be similar to the target image.
 If a key frame reference picture similar to the target image exists (YES in S402), the image encoding device 200 does not encode the image information (difference signal and the like) of the target image, and encodes only information identifying the similar key frame reference picture, for example only the ID of the key frame reference picture (S403).
 On the other hand, if no key frame reference picture similar to the target image exists (NO in S402), the image encoding device 200 encodes the target image using intra prediction (S404) and sets the target image as a key frame reference picture (S405). This allows the image to be used when encoding subsequent images, further reducing the amount of information in the code string 250 and thereby improving coding efficiency.
 When setting the target image as a new key frame reference picture in step S405, the image encoding device 200 may delete from the key frame memory 209 the key frame reference pictures that have been referenced least frequently so far. This suppresses growth in the capacity of the key frame memory 209.
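 The frame-level flow of FIG. 12, together with the least-referenced eviction just mentioned, might look like the sketch below. The reference counting, the similarity metric, and the memory capacity cap are all assumptions for illustration.

```python
def encode_frame_by_reference(target, kf_memory, ref_counts,
                              threshold, capacity):
    """S401-S405 sketched at frame granularity.

    If a similar key frame reference picture exists, emit only its ID
    (S403); otherwise intra-code the frame and register it as a new
    reference (S404-S405), evicting the least-referenced picture when
    the memory would exceed `capacity`.
    """
    for ref_id, ref in kf_memory.items():
        diff = sum(abs(a - b) for a, b in zip(target, ref)) / len(target)
        if diff < threshold:                  # S402 YES
            ref_counts[ref_id] += 1
            return ("ref_only", ref_id)       # S403: ID only, no image data
    # S402 NO -> S404/S405
    if len(kf_memory) >= capacity:
        victim = min(kf_memory, key=lambda rid: ref_counts[rid])
        del kf_memory[victim]
        del ref_counts[victim]
    new_id = max(kf_memory, default=-1) + 1
    kf_memory[new_id] = list(target)
    ref_counts[new_id] = 0
    return ("intra", new_id)
```

 Frequently matched references accumulate counts and survive eviction; a reference that is stored but never matched again is the first candidate for deletion when the memory is full.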
 Instead of steps S404 and S405, the image encoding device 200 may perform normal encoding processing, for example the same processing as step S104 shown in FIG. 9. Even in this case, when a key frame reference picture similar to the target image exists, coding efficiency can be improved by encoding only the ID indicating that key frame reference picture.
 The processing shown in FIG. 11 or FIG. 12 may be performed on all the images included in the moving image data 251, or on only some of them. For example, this processing may be performed only on key frames.
 For processing in the image encoding device 200 not specifically described here, the method described in Non-Patent Document 1 may be used, for example, or another encoding method may be used. For example, the variable-length coding unit 204 may use arithmetic coding, or may use a coding table designed according to entropy.
 As described above, the image encoding device 200 according to this embodiment encodes a plurality of images. The image encoding device 200 selects a randomly accessible key frame 104 from the plurality of images, and encodes the key frame 104 using inter prediction that references a key frame reference picture different from that key frame 104. Because the key frame 104 is thus encoded by inter prediction, coding efficiency can be improved compared with the case where the key frame 104 is encoded by intra prediction.
 The image encoding device 200 also selects a plurality of key frames from the plurality of images, and encodes a target key frame among them with reference to another of the key frames. This reduces the number of images needed to decode a key frame, and thus reduces the time until video is displayed during random-access playback.
 (Embodiment 2)
 This embodiment describes an image decoding apparatus and an image decoding method that can switch appropriately among videos from multiple cameras, or play back long-duration video from an arbitrary point.
 The image decoding apparatus according to this embodiment implements an image decoding method that realizes the situations described in Embodiment 1, such as that of FIG. 1A, and that correctly decodes the code string 250 shown in FIGS. 4, 5A, and 6A.
 First, the structure of the image decoding apparatus 300 that decodes the code string 250 described in Embodiment 1 will be described with reference to FIG. 13. FIG. 13 is a block diagram of the image decoding apparatus 300 according to this embodiment.
 The image decoding apparatus 300 shown in FIG. 13 generates a decoded image 350 by decoding a code string 250, for example the code string 250 generated by the image encoding device 200 described above. The image decoding apparatus 300 includes a variable-length decoding unit 301, an inverse-quantization/inverse-transform unit 302, an addition unit 303, a prediction control unit 304, a selection unit 305, a prediction unit 306, and a frame memory 310.
 The variable-length decoding unit 301 variable-length decodes the code string 250 to obtain a quantized signal 351 and prediction parameters 355, outputs the quantized signal 351 to the inverse-quantization/inverse-transform unit 302, and outputs the prediction parameters 355 to the prediction control unit 304 and the prediction unit 306.
 The inverse-quantization/inverse-transform unit 302 inverse-quantizes and inverse-transforms the quantized signal 351 to generate a decoded difference signal 352, and outputs the generated decoded difference signal 352 to the addition unit 303.
 The prediction control unit 304 determines, based on the prediction parameters 355, the reference image to use for the prediction process. This processing is described later.
 The prediction unit 306 generates a predicted image 354 using the information needed for its generation, such as the prediction mode included in the prediction parameters 355, together with the reference image 353 output from the selection unit 305, and outputs the generated predicted image 354 to the addition unit 303. The addition unit 303 generates a decoded image 350 by adding the predicted image 354 and the decoded difference signal 352. The decoded image 350 is displayed on a display unit or the like, for example.
 また、この復号画像350は、フレームメモリ310に格納される。フレームメモリ310は、キーフレーム参照ピクチャを格納するキーフレームメモリ307と、通常予測に用いられる復号対象画像と時間的に近い参照画像を格納する近傍フレームメモリ308と、復号対象画像内で既に復号した画像信号を蓄積する面内フレームメモリ309とを含む。なお、実施の形態1のフレームメモリ212と同様に、3種類のフレームメモリに対して、個別のメモリ空間を設ける必要性はなく、例えば同一のメモリ上に各参照画像が格納されてもよい。つまり、フレームメモリ310は、予測制御部304からの指示に基づき、いずれかの参照画像を出力できる構成であればよい。 The decoded image 350 is stored in the frame memory 310. The frame memory 310 includes a key frame memory 307 that stores a key frame reference picture, a neighboring frame memory 308 that stores a reference image that is temporally close to a decoding target image used for normal prediction, and has already been decoded in the decoding target image. And an in-plane frame memory 309 for storing image signals. Similar to the frame memory 212 of the first embodiment, there is no need to provide separate memory spaces for the three types of frame memories. For example, each reference image may be stored on the same memory. That is, the frame memory 310 may have any configuration that can output any reference image based on an instruction from the prediction control unit 304.
 In addition, the key frame memory 307 stores not only the decoded image 350 but also image data 356 separately acquired from the outside as reference images.
 Next, the flow of the image decoding method performed by the image decoding apparatus 300 will be described. FIG. 14 is a flowchart showing the processing flow of the image decoding apparatus 300.
 First, the image decoding apparatus 300 acquires the encoded data of the target image, which is the frame to be decoded, from the code string 250 (S501). Next, the image decoding apparatus 300 determines whether the target image is a key frame (S502). If the target image is a key frame (YES in S502), the image decoding apparatus 300 performs the key frame decoding process (S503). The key frame decoding process will be described in detail later.
 The image decoding apparatus 300 may determine whether the target image is a key frame (whether it is an access point) using information, included in the code string 250, for identifying key frames. Specifically, this information indicates whether or not the target image is a key frame, or indicates which of the plurality of images are key frames. Furthermore, this information indicates whether each key frame is a key frame encoded using intra prediction or a key frame encoded using inter prediction. For example, this information is parameter information included in the header information of the code string 250. Alternatively, this information may be recorded, in a system separate from the code string 250, in a field in which information shared with the image encoding apparatus is recorded. In the former case, since the information is recorded in the code string 250, the image decoding apparatus 300 can make the determination quickly without accessing other information. In the latter case, where the image decoding apparatus 300 refers to predetermined field information, the information need not be present in the code string 250, so the encoding efficiency can be further improved.
 If the target image is not a key frame (NO in S502), the image decoding apparatus 300 performs the normal decoding process (S504). Although the normal decoding process is not described in detail here, it is similar to the method for decoding inter prediction frames described in Non-Patent Document 1. Specifically, the image decoding apparatus 300 obtains, from the frame memory 310, the reference image indicated by information included in the code string 250, generates a predicted image 354 using that reference image based on the prediction parameter 355 included in the code string 250, and performs the decoding process using the predicted image 354. If the data of the target image is not the last data in the code string 250 (NO in S505), the image decoding apparatus 300 acquires the encoded data of the next image (S501). On the other hand, if the data of the target image is the end of the code string 250 (YES in S505), the image decoding apparatus 300 ends the decoding process.
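The loop of FIG. 14 (S501 to S505) can be sketched as follows. This is an illustration only, assuming the code string has already been parsed into per-frame units; the helpers `decode_key_frame` and `decode_normal` are hypothetical stand-ins for the key frame decoding process (S503) and the normal decoding process (S504).

```python
def decode_key_frame(unit):
    # hypothetical stand-in for the key frame decoding process (S503)
    return ("key", unit["data"])

def decode_normal(unit):
    # hypothetical stand-in for the normal inter-prediction decoding (S504)
    return ("inter", unit["data"])

def decode_stream(code_string):
    """code_string: list of dicts, one per encoded frame, in decoding order."""
    decoded = []
    for unit in code_string:                        # S501: fetch next frame
        if unit["is_key_frame"]:                    # S502: key frame?
            decoded.append(decode_key_frame(unit))  # S503
        else:
            decoded.append(decode_normal(unit))     # S504
    return decoded                                  # S505: end of code string
```

The end-of-stream test (S505) is represented here simply by the loop running off the end of the list.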
 Next, the key frame decoding process (S503) will be described in detail. FIG. 15 is a flowchart of the key frame decoding process.
 As described in the first embodiment, a key frame is either an intra prediction frame or an inter prediction frame that refers to a key frame reference picture. Therefore, the image decoding apparatus 300 first determines whether the key frame that is the target image is an inter prediction frame that refers to a key frame reference picture (S511). Specifically, the header information of the target image included in the code string 250 includes information indicating the key frame reference picture, or information indicating whether inter prediction is used. The image decoding apparatus 300 decodes this information and makes the above determination based on the obtained information.
 When a key frame reference picture is used for the key frame (YES in S511), the image decoding apparatus 300 acquires that key frame reference picture (S512). Specifically, the image decoding apparatus 300 acquires the key frame reference picture via a network, for example. Alternatively, the image decoding apparatus 300 acquires a key frame reference picture that is an already decoded key frame. Note that the code string 250 includes information indicating whether the key frame reference picture is a picture acquired via a network or an already decoded picture, as well as information for identifying that key frame reference picture. The image decoding apparatus 300 acquires the key frame reference picture using this information. The image decoding apparatus 300 may also acquire key frame reference pictures in advance via a network.
 Next, the image decoding apparatus 300 decodes the target image by inter prediction decoding using the key frame reference picture (S513).
 Note that, as in the process illustrated in FIG. 11, when the code string 250 contains only information specifying a key frame reference picture (for example, an ID) and no difference signal, the image decoding apparatus 300 decodes, for the background region of the target image, the information for identifying the key frame reference picture, and outputs the key frame reference picture identified by that information as-is as the decoded image of the background region. In this case, the code string 250 includes information indicating that the reference image is to be output as-is, or that no difference signal is included. The image decoding apparatus 300 refers to this information to determine whether to output the key frame reference picture as-is.
 Similarly, in the case of the process illustrated in FIG. 12, the image decoding apparatus 300 decodes the information for identifying the key frame reference picture, and outputs the key frame reference picture identified by that information as-is as the decoded image of the target image.
 On the other hand, when no key frame reference picture is used (NO in S511), the image decoding apparatus 300 decodes the target image by intra prediction decoding (S514).
 The image decoding apparatus 300 also stores the decoded image of the key frame in the key frame memory 307 as a key frame reference picture (S515).
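The branch structure of FIG. 15 (S511 to S515) can be sketched as follows, treating images as plain values for brevity. All names here (`uses_kf_reference`, `residual`, the `fetch_reference` callback) are assumptions for illustration and not the actual syntax of the code string.

```python
def inter_decode(header, reference):
    # hypothetical inter-predictive reconstruction: reference plus residual
    return reference + header.get("residual", 0)

def intra_decode(header):
    # hypothetical intra reconstruction from the frame's own data
    return header.get("residual", 0)

def decode_key_frame(header, key_frame_memory, fetch_reference):
    if header.get("uses_kf_reference"):           # S511: inter prediction
        ref = fetch_reference(header["ref_id"])   # S512: obtain the reference
        image = inter_decode(header, ref)         # S513: inter-predictive decode
    else:
        image = intra_decode(header)              # S514: intra decode
    key_frame_memory[header["frame_id"]] = image  # S515: store as a new
    return image                                  #       key frame reference
```

Note that S515 runs on both branches: every decoded key frame becomes a candidate reference for later key frames.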
 Note that the image decoding apparatus 300 may, as key frame reference pictures, select representative images, or images at a predetermined cycle, from among the obtained decoded images, and accumulate these images in a storage on the network so that the image encoding apparatus can access them.
 For processing in the image decoding apparatus 300 that is not specifically described, the method described in Non-Patent Document 1 may be used, for example, or another decoding method may be used. For example, the variable length decoding unit 301 may use arithmetic decoding, or may use a decoding table designed according to entropy. In short, it suffices to use a scheme associated with the image encoding apparatus paired with the image decoding apparatus 300.
 Note that the key frame reference pictures shared by the image encoding apparatus 200 and the image decoding apparatus 300 described above can be modified as follows.
 (1) A key frame reference picture may be an image encoded by an image encoding method different from that of the images included in the code string 250 (the images to be encoded or decoded). That is, a key frame reference picture may be an image encoded by an encoding method different from that of the key frame being processed.
 For example, when the image encoding apparatus 200 and the image decoding apparatus 300 generate or decode a code string 250 compliant with HEVC, a key frame reference picture may be an image encoded by another encoding scheme such as MPEG-2, MPEG-4 AVC (H.264), JPEG, or JPEG 2000. For example, the image encoding apparatus 200 or the image decoding apparatus 300 acquires this key frame reference picture via a network. This reduces the communication load between the image encoding apparatus 200 or the image decoding apparatus 300 and the network. It also allows the capacity of the storage (key frame memory 209 or 307) in the image encoding apparatus 200 or the image decoding apparatus 300 to be reduced. Furthermore, since image data circulating in the world, including on the Internet, can be used as key frame reference pictures, the encoding efficiency can be further improved.
 Note that an image encoded by a different image encoding method in this way is not limited to an image acquired via a network, and may be an image to be encoded (decoded).
 Thus, the image encoding apparatus 200 or the image decoding apparatus 300 may have a function of decoding images encoded by an encoding scheme different from that of the images included in the code string 250, acquire an image encoded by that different encoding scheme, decode it, and store the obtained decoded image as a key frame reference picture.
 (2) A key frame reference picture may be an image with a resolution different from that of the images included in the code string 250 (the images to be encoded or decoded). That is, a key frame reference picture may be an image with a resolution different from that of the key frame being processed.
 For example, when the resolution of the video signal to be encoded or decoded is 1920×1080, the resolution of the key frame reference picture may be 3840×2160. Since the image encoding apparatus 200 and the image decoding apparatus 300 according to the present embodiment perform inter prediction processing on key frames using key frame reference pictures, the efficiency of inter prediction can be improved when the resolution of the key frame reference picture is high. As a result, the amount of difference signal data to be transmitted becomes smaller, so the encoding efficiency can be improved.
 In addition, since a key frame reference picture may be a still image, the many photographic images available on networks such as the Internet can be used as key frame reference pictures. Because photographic images come in a variety of resolutions, supporting different resolutions increases the number of images that can be used as key frame reference pictures, which improves the encoding efficiency. Conversely, the resolution of a key frame reference picture may be smaller than that of the image to be encoded or decoded. Since, as described above, a large number of images exist on the network, preparing images that can be used for prediction even if they do not have the same resolution makes it possible to reduce the difference image and thus improve the encoding efficiency. Furthermore, as illustrated in FIG. 7C, when referring to a plurality of images, the image encoding apparatus and the image decoding apparatus can generate a predicted image more similar to the key frame by referring to images of different resolutions. This further improves the encoding efficiency.
 As described above, the image decoding apparatus 300 according to the present embodiment decodes a plurality of images. The image decoding apparatus 300 identifies, from among the plurality of images, a randomly accessible key frame 104. The image decoding apparatus 300 decodes the key frame 104 using inter prediction that refers to a key frame reference picture different from the key frame 104. Since the key frame 104 is thus encoded by inter prediction, the encoding efficiency can be improved compared with the case where the key frame 104 is encoded by intra prediction.
 The image decoding apparatus 300 also identifies a plurality of key frames from the plurality of images, and decodes a target key frame included in the plurality of key frames with reference to another of the plurality of key frames. This reduces the number of images required to decode a key frame, and therefore reduces the time until video is displayed during random playback.
 (Embodiment 3)
 In the present embodiment, the configuration of the network storage or cloud database described in the first and second embodiments will be described. The configuration according to the present embodiment makes it possible, when exchanging data with the image encoding apparatus or the image decoding apparatus, to transmit a smaller amount of data and to reduce the amount of data held in the storage.
 Hereinafter, a system 500 according to the present embodiment will be described with reference to FIGS. 16 and 17. FIG. 16 is a schematic diagram showing the system 500 according to the present embodiment.
 The system 500 has a database 501 that can be accessed from the image decoding apparatus 300. The database 501 stores a plurality of images g1t1 to gNtM that are key frame reference pictures, including the background images described above. As shown in FIG. 16, the database 501 may also store a plurality of groups d0 to dL, each of which includes a plurality of images.
 Here, the image decoding apparatus 300 is, for example, the image decoding apparatus 300 described in the second embodiment.
 The system 500 also includes a control unit 502. Based on a trigger (signal) transmitted from the image decoding apparatus 300, or on a trigger signal obtained from the time (a time signal held by the system 500), the control unit 502 transmits a specific image or image group, from among the plurality of key frame reference pictures held in the database 501, to the image decoding apparatus 300. The transmitted image or image group is stored in a data buffer (key frame memory 307) that serves as the local storage of the image decoding apparatus 300. The images stored in this data buffer are then used as key frame reference pictures for inter prediction of key frames.
 The flow of this process will be described below with reference to FIG. 17. FIG. 17 is a diagram illustrating the operation flow of the system 500 and the image decoding apparatus 300 according to the present embodiment.
 First, the image decoding apparatus 300 transmits, to the system 500, a trigger signal (control signal) for acquiring images that are necessary, or likely to become necessary, for decoding (S601). The system 500 receives the trigger signal transmitted from the image decoding apparatus 300, selects the necessary, or likely to be necessary, images from among the plurality of stored images (S602), and transmits the selected images to the image decoding apparatus 300 (S603). The image decoding apparatus 300 receives the images transmitted from the system 500 and stores them in its local storage (S604). In this way, the image decoding apparatus 300 can acquire the necessary key frame reference pictures and can therefore appropriately decode the code string.
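The exchange of S601 to S604 can be sketched as follows, with a location string standing in for the trigger contents. The class and field names here are assumptions for illustration, not part of the specification:

```python
class System500:
    def __init__(self, database):
        # database 501 sketched as (capture_location, image) pairs
        self.database = database

    def handle_trigger(self, trigger_location):
        # S602: select the images captured near the signalled position
        return [img for loc, img in self.database if loc == trigger_location]

class ImageDecoder:
    def __init__(self):
        self.key_frame_memory = []   # local storage (key frame memory 307)

    def fetch_references(self, system, location):
        images = system.handle_trigger(location)  # S601: trigger, S603: transfer
        self.key_frame_memory.extend(images)      # S604: store locally
        return images
```

Only the images matching the trigger cross the network, which is the basis of the memory and bandwidth savings described below.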
 The trigger signal, and the image selection process based on the trigger signal, will now be described.
 The trigger signal includes, for example, position information indicating the current position of the image decoding apparatus 300. In this case, the system 500 holds, for each stored image or image group, position information indicating the location where the image, or the images included in the image group, were captured. The system 500 selects an image or image group captured at a position close to the current position indicated by the trigger signal, and transmits the selected image or image group to the image decoding apparatus 300.
 Suppose, for example, that a user checks a neighboring group of buildings by having a mobile terminal, such as a tablet equipped with the image decoding apparatus 300, display surveillance video from those of the many surveillance cameras installed nationwide that are located near the current position. In this case, since the image decoding apparatus 300 acquires from the system 500 only the images related to that position, the memory size of the terminal can be kept small and the amount of transmitted data can be reduced compared with acquiring all images.
 Although an example in which the trigger signal includes position information indicating the current position has been shown here, the trigger information may instead include position information indicating a position specified by the user. For example, the user specifies, from a remote location, an area where a suspicious person or the like has appeared. In this way, the image decoding apparatus 300 can acquire images related to that area in advance, so that video switching can be performed smoothly.
 As another example, the trigger signal may include time information. For example, when surveillance images are checked in real time, the time information indicates the current time. In this case, the system 500 holds, for each stored image or image group, time information indicating the time or time period at which the image, or the images included in the image group, were captured. The system 500 thus selects an image or image group captured at a time, or in a time period, close to the current time, and transmits the selected image or image group to the image decoding apparatus 300. As a result, the memory size of the terminal can be kept small and the amount of transmitted data can be reduced compared with acquiring all images.
 The time information is not limited to the current time, and may indicate a time specified by the user.
 The time information indicates the time in units of seconds, minutes, hours, days, months, or seasons (multiple months), or a combination of these. When a large unit is used for the time information, the system 500 only needs to store images, for example, per season, so the memory size of the system 500 can be reduced. Furthermore, since fewer images are transmitted to the terminal, the memory size of the terminal can be reduced and the amount of transmitted data can be reduced. The amount of trigger signal data can also be reduced.
 The trigger signal may also include weather information indicating the weather. In this case, the system 500 holds, for each stored image or image group, weather information indicating the weather at the time the image was captured. The system 500 thus selects an image or image group captured under weather that is the same as, or similar to, the weather indicated by the trigger information, and transmits the selected image or image group to the image decoding apparatus 300. As a result, the memory size of the terminal can be kept small and the amount of transmitted data can be reduced compared with acquiring all images. Note that this technique may be applied only to cameras installed outdoors, such as surveillance cameras, and to cameras capturing outdoor scenes. This reduces both the amount of data stored in the system 500 and the amount of trigger signal data.
 The trigger signal may include two or more of the position information, time information, and weather information described above. In this case, the system 500 selects images that match all of the conditions indicated by the trigger signal. Since this narrows down the selected images further, the amount of data transmitted to the image decoding apparatus 300 can be further reduced.
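Matching against all of the trigger's conditions at once can be sketched as a simple metadata filter. The field names (`location`, `weather`, and so on) are assumptions for illustration:

```python
def select_images(database, trigger):
    """database: list of (metadata dict, image) pairs held by the system.
    trigger: dict of conditions; an image is selected only when its
    metadata matches every condition carried by the trigger signal."""
    return [image for metadata, image in database
            if all(metadata.get(key) == value
                   for key, value in trigger.items())]
```

Adding a condition to the trigger can only shrink (never grow) the result set, which is why combining conditions reduces the transmitted data.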
 Note that the same images transmitted to the image decoding apparatus 300 are also transmitted to the image encoding apparatus 200. The image encoding apparatus 200 generates a code string using the same images transmitted to the image decoding apparatus 300, and transmits the generated code string to the image decoding apparatus 300. When the system 500 does not hold an image specified by the trigger signal, it newly acquires and stores an image matching the specification in the trigger signal, for example via a network or from the image encoding apparatus 200. This makes it possible to cope with changes in the environment and to continuously improve the encoding efficiency.
 Although an example in which the trigger signal is transmitted from the image decoding apparatus 300 has been described here, the present invention is not limited to this. For example, time information generated within the system 500 may be used as the time information. This reduces the amount of data transmitted between the image decoding apparatus 300 and the system 500. In this case, however, the image encoding apparatus 200 and the image decoding apparatus 300 need to share which time information is used. For example, information indicating which time information is used is notified to the image encoding apparatus 200 and the image decoding apparatus 300.
 Although an example in which the image decoding apparatus 300 and the system 500 are separate apparatuses has been shown here, the present invention is not limited to this. For example, the system 500 may include the image decoding apparatus 300. In this case, transmission between the image decoding apparatus 300 and the system 500 does not go through a network.
 In the above description, an example has been described in which the system 500 holds an individual image for each time, each place, each weather condition, and so on, but the present invention is not limited to this. For example, even when the times differ, if the image contents are the same or similar, a single image may be stored in association with a plurality of times. Since this reduces the number of images stored in the system 500, the amount of image data stored in the system 500 can be reduced.
 Note that, when the image decoding apparatus 300 cannot acquire the reference image (key frame reference picture) that the image encoding apparatus 200 used to generate the code string, it notifies the image encoding apparatus 200, the system 500, or the user that the reference image could not be acquired. In that case, the image decoding apparatus 300 may generate a decoded image by using for prediction another image acquired by a predetermined method (an image not shared with the image encoding apparatus 200), and display that decoded image. Although this is not the decoded image expected by the image encoding apparatus 200, using a similar image as the predicted image on the image decoding apparatus 300 side makes it possible to display a similar decoded image, thereby preventing the situation in which no video is displayed.
 The image encoding method and the image decoding method according to the embodiments have been described above, but the present invention is not limited to these embodiments.
 Each processing unit included in the image encoding apparatus and the image decoding apparatus according to the above embodiments is typically realized as an LSI, which is an integrated circuit. These units may each be integrated into a single chip individually, or may be integrated into a single chip including some or all of them.
 Circuit integration is not limited to LSI, and may be realized with a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
 In each of the above embodiments, each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for that component. Each component may be realized by a program execution unit, such as a CPU or a processor, reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
 In other words, the image encoding device and the image decoding device each include processing circuitry and a storage device (storage) electrically connected to (accessible from) the processing circuitry. The processing circuitry includes at least one of dedicated hardware and a program execution unit. When the processing circuitry includes a program execution unit, the storage device stores the software program executed by that program execution unit. The processing circuitry uses the storage device to execute the image encoding method or the image decoding method according to the above embodiments.
 Furthermore, the present invention may be the above software program, or a non-transitory computer-readable recording medium on which the program is recorded. Needless to say, the program can also be distributed via a transmission medium such as the Internet.
 All of the numbers used above are examples given to describe the present invention specifically, and the present invention is not limited to the exemplified numbers.
 The division of functional blocks in the block diagrams is an example; a plurality of functional blocks may be realized as one functional block, one functional block may be divided into a plurality, and some functions may be transferred to other functional blocks. The functions of a plurality of functional blocks having similar functions may also be processed by a single piece of hardware or software, in parallel or by time division.
 The order in which the steps included in the above predicted image generation method, encoding method, or decoding method are executed is an example given to describe the present invention specifically, and an order other than the above may be used. Some of the steps may also be executed simultaneously (in parallel) with other steps.
 The processing described in each embodiment may be realized by centralized processing using a single device (system), or by distributed processing using a plurality of devices. The number of computers executing the above program may be one or more; that is, centralized processing or distributed processing may be performed.
 The predicted image generation device, the encoding device, and the decoding device according to one or more aspects of the present invention have been described above based on the embodiments, but the present invention is not limited to these embodiments. Forms obtained by applying various modifications conceivable to those skilled in the art to the present embodiments, and forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects of the present invention, provided they do not depart from the gist of the present invention.
 (Embodiment 4)
 By recording a program for realizing the configuration of the moving picture encoding method (image encoding method) or the moving picture decoding method (image decoding method) described in each of the above embodiments on a storage medium, the processing described in each of the above embodiments can easily be carried out on an independent computer system. The storage medium may be anything capable of recording a program, such as a magnetic disk, an optical disc, a magneto-optical disk, an IC card, or a semiconductor memory.
 Furthermore, application examples of the moving picture encoding method (image encoding method) and the moving picture decoding method (image decoding method) described in each of the above embodiments, and a system using them, will be described here. The system is characterized by having an image encoding and decoding device composed of an image encoding device using the image encoding method and an image decoding device using the image decoding method. The other configurations of the system can be changed as appropriate depending on the case.
 FIG. 18 is a diagram showing the overall configuration of a content supply system ex100 that realizes a content distribution service. The area in which communication service is provided is divided into cells of a desired size, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in the respective cells.
 In this content supply system ex100, devices such as a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, and a game machine ex115 are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104, and the base stations ex106 to ex110.
 However, the content supply system ex100 is not limited to the configuration shown in FIG. 18, and any combination of the elements may be connected. Each device may also be connected directly to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations. The devices may also be connected directly to one another via short-range wireless communication or the like.
 The camera ex113 is a device capable of shooting moving images, such as a digital video camera, and the camera ex116 is a device capable of shooting still images and moving images, such as a digital camera. The mobile phone ex114 may be any of a GSM (registered trademark) (Global System for Mobile Communications) phone, a CDMA (Code Division Multiple Access) phone, a W-CDMA (Wideband-Code Division Multiple Access) phone, an LTE (Long Term Evolution) phone, an HSPA (High Speed Packet Access) phone, a PHS (Personal Handyphone System) phone, or the like.
 In the content supply system ex100, the camera ex113 and other devices are connected to a streaming server ex103 through the base station ex109 and the telephone network ex104, which enables live distribution and the like. In live distribution, content shot by a user with the camera ex113 (for example, video of a live music performance) is encoded as described in each of the above embodiments (that is, the camera functions as an image encoding device according to one aspect of the present invention) and transmitted to the streaming server ex103. The streaming server ex103 in turn streams the transmitted content data to clients that have made requests. The clients include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the game machine ex115, each capable of decoding the encoded data. Each device that receives the distributed data decodes and reproduces it (that is, functions as an image decoding device according to one aspect of the present invention).
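The live-distribution flow above (encode at the source, relay at the streaming server ex103, decode at each requesting client) can be sketched as follows. The class and function names are hypothetical, and the simple level-shift codec merely stands in for the encoding and decoding of the embodiments.

```python
def encode(frame):
    """Stand-in for the embodiments' encoding (here: a level shift)."""
    return [p - 128 for p in frame]

def decode(data):
    """Stand-in for the embodiments' decoding (inverse of encode)."""
    return [d + 128 for d in data]

class StreamingServer:
    """Relay role of streaming server ex103: encoded content is streamed
    to every client that has made a request."""
    def __init__(self):
        self.clients = []

    def accept(self, client):
        self.clients.append(client)

    def stream(self, encoded_frames):
        for client in self.clients:
            client.receive(encoded_frames)

class Client:
    """A client (ex111 to ex115) decodes the received data for playback."""
    def __init__(self):
        self.played = []

    def receive(self, encoded_frames):
        self.played.extend(decode(f) for f in encoded_frames)
```

Where the encoding actually runs (camera, computer, or server) can be shifted without changing this relay structure, as the text goes on to note.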
 The encoding of the shot data may be performed by the camera ex113, by the streaming server ex103 that transmits the data, or shared between them. Similarly, the decoding of the distributed data may be performed by the client, by the streaming server ex103, or shared between them. Not only data from the camera ex113 but also still image and/or moving image data shot by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111. The encoding in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or shared among them.
 These encoding and decoding processes are generally performed in the computer ex111 or in an LSI ex500 included in each device. The LSI ex500 may be a single chip or composed of a plurality of chips. Software for moving picture encoding and decoding may also be incorporated into some recording medium readable by the computer ex111 or the like (a CD-ROM, a flexible disk, a hard disk, or the like), and the encoding and decoding may be performed using that software. Furthermore, when the mobile phone ex114 is equipped with a camera, moving image data acquired by that camera may be transmitted; the moving image data in that case is data encoded by the LSI ex500 included in the mobile phone ex114.
 The streaming server ex103 may also be composed of a plurality of servers or a plurality of computers that process, record, and distribute data in a distributed manner.
 As described above, in the content supply system ex100, clients can receive and reproduce the encoded data. In this way, in the content supply system ex100, information transmitted by a user can be received, decoded, and reproduced by clients in real time, so that even a user without special rights or equipment can realize personal broadcasting.
 Not only in the example of the content supply system ex100 but also in a digital broadcasting system ex200, as shown in FIG. 19, at least the moving picture encoding device (image encoding device) or the moving picture decoding device (image decoding device) of each of the above embodiments can be incorporated. Specifically, at a broadcast station ex201, multiplexed data in which music data and the like are multiplexed with video data is transmitted via radio waves to a communication or broadcast satellite ex202. This video data is data encoded by the moving picture encoding method described in each of the above embodiments (that is, data encoded by the image encoding device according to one aspect of the present invention). Upon receiving it, the broadcast satellite ex202 transmits radio waves for broadcasting, and a home antenna ex204 capable of receiving satellite broadcasts receives these radio waves. A device such as a television (receiver) ex300 or a set-top box (STB) ex217 decodes and reproduces the received multiplexed data (that is, functions as an image decoding device according to one aspect of the present invention).
 The moving picture decoding device or moving picture encoding device described in each of the above embodiments can also be implemented in a reader/recorder ex218 that reads and decodes multiplexed data recorded on a recording medium ex215 such as a DVD or a BD, or that encodes a video signal onto the recording medium ex215 and, in some cases, multiplexes it with a music signal and writes the result. In this case, the reproduced video signal is displayed on a monitor ex219, and the video signal can be reproduced on another device or system using the recording medium ex215 on which the multiplexed data is recorded. A moving picture decoding device may also be implemented in a set-top box ex217 connected to a cable ex203 for cable television or to the antenna ex204 for satellite/terrestrial broadcasting, and its output displayed on the monitor ex219 of the television. The moving picture decoding device may also be incorporated in the television rather than in the set-top box.
 FIG. 20 is a diagram showing a television (receiver) ex300 that uses the moving picture decoding method and moving picture encoding method described in each of the above embodiments. The television ex300 includes a tuner ex301 that acquires or outputs, via the antenna ex204, the cable ex203, or the like that receives the broadcasts, multiplexed data in which audio data is multiplexed with video data; a modulation/demodulation unit ex302 that demodulates the received multiplexed data, or modulates multiplexed data to be transmitted to the outside; and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes video data and audio data encoded by a signal processing unit ex306.
 The television ex300 also includes the signal processing unit ex306, which has an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data, respectively, or encode their respective information (and thereby function as the image encoding device or the image decoding device according to one aspect of the present invention); and an output unit ex309 having a speaker ex307 that outputs the decoded audio signal and a display unit ex308, such as a display, that displays the decoded video signal. The television ex300 further includes an interface unit ex317 having an operation input unit ex312 and the like that receive user operations, a control unit ex310 that performs overall control of the respective units, and a power supply circuit unit ex311 that supplies power to the respective units. In addition to the operation input unit ex312, the interface unit ex317 may include a bridge ex313 connected to external devices such as the reader/recorder ex218, a slot unit ex314 for mounting a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and so on. The recording medium ex216 can record information electrically by means of a nonvolatile/volatile semiconductor memory element that it contains. The respective units of the television ex300 are connected to one another via a synchronous bus.
 First, a configuration in which the television ex300 decodes and reproduces multiplexed data acquired from outside via the antenna ex204 or the like will be described. The television ex300 receives a user operation from a remote controller ex220 or the like and, under the control of the control unit ex310 having a CPU and so on, demultiplexes with the multiplexing/demultiplexing unit ex303 the multiplexed data demodulated by the modulation/demodulation unit ex302. The television ex300 further decodes the demultiplexed audio data with the audio signal processing unit ex304, and decodes the demultiplexed video data with the video signal processing unit ex305 using the decoding method described in each of the above embodiments. The decoded audio signal and video signal are each output to the outside from the output unit ex309. When they are output, these signals should be temporarily stored in buffers ex318, ex319, and so on, so that the audio signal and the video signal are reproduced in synchronization. The television ex300 may also read multiplexed data not from broadcasts and the like but from the recording media ex215 and ex216, such as magnetic/optical disks and SD cards. Next, a configuration in which the television ex300 encodes an audio signal and a video signal and transmits them to the outside or writes them to a recording medium or the like will be described. The television ex300 receives a user operation from the remote controller ex220 or the like and, under the control of the control unit ex310, encodes the audio signal with the audio signal processing unit ex304 and encodes the video signal with the video signal processing unit ex305 using the encoding method described in each of the above embodiments. The encoded audio signal and video signal are multiplexed by the multiplexing/demultiplexing unit ex303 and output to the outside. When they are multiplexed, these signals should be temporarily stored in buffers ex320, ex321, and so on, so that the audio signal and the video signal are synchronized. A plurality of buffers may be provided as the buffers ex318, ex319, ex320, and ex321, as illustrated, or one or more buffers may be shared. Furthermore, besides those illustrated, data may be stored in a buffer as cushioning that avoids system overflow and underflow, for example between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303.
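The receive-and-reproduce path above (demultiplex, decode each stream, buffer for synchronized output) can be sketched as follows. The `play` function, the `(kind, packet)` representation of multiplexed data, and the toy decoders are hypothetical illustrations, not ex300's actual processing.

```python
from collections import deque

def demultiplex(multiplexed):
    """Split multiplexed data into its audio and video parts, as the
    multiplexing/demultiplexing unit ex303 does."""
    audio = [pkt for kind, pkt in multiplexed if kind == "audio"]
    video = [pkt for kind, pkt in multiplexed if kind == "video"]
    return audio, video

def play(multiplexed, decode_audio, decode_video):
    """Decode both streams and emit them pairwise, so that audio and
    video come out in synchronization (the role of buffers ex318/ex319)."""
    audio, video = demultiplex(multiplexed)
    a_buf = deque(decode_audio(p) for p in audio)  # decoded by ex304
    v_buf = deque(decode_video(p) for p in video)  # decoded by ex305
    out = []
    while a_buf and v_buf:
        out.append((a_buf.popleft(), v_buf.popleft()))
    return out
```

The encoding path runs the same stages in reverse: encode each signal, buffer, then multiplex with ex303 for output.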
 In addition to acquiring audio data and video data from broadcasts, recording media, and the like, the television ex300 may have a configuration that accepts AV input from a microphone and a camera, and may encode the data acquired from them. Although the television ex300 has been described here as a configuration capable of the above encoding, multiplexing, and external output, it may instead be a configuration that cannot perform these processes and is capable only of the above reception, decoding, and external output.
 When the reader/recorder ex218 reads or writes multiplexed data from or to a recording medium, the above decoding or encoding may be performed by either the television ex300 or the reader/recorder ex218, or the television ex300 and the reader/recorder ex218 may share it between them.
 As an example, FIG. 21 shows the configuration of an information reproducing/recording unit ex400 for reading or writing data from or to an optical disc. The information reproducing/recording unit ex400 includes the elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 described below. The optical head ex401 writes information by irradiating a laser spot onto the recording surface of the recording medium ex215, which is an optical disc, and reads information by detecting light reflected from the recording surface of the recording medium ex215. The modulation recording unit ex402 electrically drives a semiconductor laser built into the optical head ex401 and modulates the laser light according to the data to be recorded. The reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, then separates and demodulates the signal components recorded on the recording medium ex215 and reproduces the necessary information. The buffer ex404 temporarily holds the information to be recorded on the recording medium ex215 and the information reproduced from the recording medium ex215. The disk motor ex405 rotates the recording medium ex215. The servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs laser spot tracking. The system control unit ex407 controls the information reproducing/recording unit ex400 as a whole. The above reading and writing are realized by the system control unit ex407 using the various information held in the buffer ex404, generating and adding new information as needed, and recording and reproducing information through the optical head ex401 while operating the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 in a coordinated manner. The system control unit ex407 is composed of, for example, a microprocessor, and executes these processes by executing a reading/writing program.
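One read operation coordinated by the system control unit ex407 can be sketched as follows. The function and its callable parameters are hypothetical stand-ins for the servo control (ex406), reproduction demodulation (ex403), and buffering (ex404) described above.

```python
def read_block(address, servo_seek, read_raw, demodulate, buffer):
    """Sketch of one coordinated read in the unit ex400."""
    servo_seek(address)      # ex406 positions optical head ex401 on the track
    raw = read_raw()         # reflected-light signal detected by ex401
    data = demodulate(raw)   # ex403 separates and demodulates the signal
    buffer.append(data)      # ex404 temporarily holds the reproduced data
    return data
```

A write runs the chain in the opposite direction, with the modulation recording unit ex402 driving the laser according to the data taken from the buffer.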
 Although the optical head ex401 has been described above as irradiating a laser spot, it may be configured to perform higher-density recording using near-field light.
 FIG. 22 shows a schematic diagram of the recording medium ex215, which is an optical disc. On the recording surface of the recording medium ex215, guide grooves are formed in a spiral, and address information indicating the absolute position on the disc is recorded in advance on an information track ex230 through changes in the shape of the groove. This address information includes information for identifying the positions of recording blocks ex231, which are the units in which data is recorded, and a recording or reproducing device can identify a recording block by reproducing the information track ex230 and reading the address information. The recording medium ex215 also includes a data recording area ex233, an inner circumference area ex232, and an outer circumference area ex234. The area used for recording user data is the data recording area ex233; the inner circumference area ex232 and the outer circumference area ex234, arranged on the inner and outer circumferences of the data recording area ex233, are used for specific purposes other than recording user data. The information reproducing/recording unit ex400 reads and writes encoded audio data, encoded video data, or multiplexed data in which these are multiplexed, to and from the data recording area ex233 of such a recording medium ex215.
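How the pre-recorded address information identifies a recording block can be illustrated as follows. Modeling the information track ex230 as a list of block identifiers read back in track order is a hypothetical simplification, not the disc's actual encoding.

```python
def locate_block(information_track, target_block_id):
    """Reproduce the information track and read the address information
    until the recording block (ex231) with the given ID is identified."""
    for position, block_id in enumerate(information_track):
        if block_id == target_block_id:
            return position  # absolute position of the block on the disc
    raise ValueError("recording block not found on the information track")
```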
 Although a single-layer optical disc such as a DVD or a BD has been described above as an example, the disc is not limited to these and may be an optical disc with a multilayer structure that can be recorded in places other than the surface. It may also be an optical disc with a structure for multidimensional recording/reproduction, such as recording information at the same location on the disc using light of various different wavelengths, or recording layers of different information from various angles.
 In the digital broadcasting system ex200, a car ex210 having an antenna ex205 can also receive data from the satellite ex202 or the like and reproduce moving images on a display device such as a car navigation system ex211 that the car ex210 has. A conceivable configuration of the car navigation system ex211 is, for example, the configuration shown in FIG. 20 with a GPS receiving unit added, and the same is conceivable for the computer ex111, the mobile phone ex114, and so on.
 FIG. 23A is a diagram showing the mobile phone ex114 that uses the moving picture decoding method and moving picture encoding method described in the above embodiments. The mobile phone ex114 includes an antenna ex350 for transmitting and receiving radio waves to and from the base station ex110; a camera unit ex365 capable of shooting video and still images; and a display unit ex358, such as a liquid crystal display, that displays decoded data such as video shot by the camera unit ex365 and video received via the antenna ex350. The mobile phone ex114 further includes a main body having an operation key unit ex366; an audio output unit ex357, such as a speaker, for outputting audio; an audio input unit ex356, such as a microphone, for inputting audio; a memory unit ex367 that stores encoded or decoded data of shot video, still images, recorded audio, received video, still images, mail, and the like; and a slot unit ex364 serving as an interface with a recording medium that similarly stores data.
 A configuration example of the mobile phone ex114 will now be described with reference to FIG. 23B. In the mobile phone ex114, a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, an LCD (Liquid Crystal Display) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367 are connected via a bus ex370 to a main control unit ex360 that performs overall control of the respective units of the main body, which includes the display unit ex358 and the operation key unit ex366.
 When the call-end key or the power key is turned on by a user operation, the power supply circuit unit ex361 supplies power from a battery pack to each unit, thereby starting up the mobile phone ex114 into an operable state.
 Under control of the main control unit ex360, which includes a CPU, ROM, RAM, and the like, the mobile phone ex114 converts an audio signal collected by the audio input unit ex356 in voice call mode into a digital audio signal in the audio signal processing unit ex354, applies spread spectrum processing to it in the modulation/demodulation unit ex352, applies digital-to-analog conversion and frequency conversion in the transmission/reception unit ex351, and then transmits the result via the antenna ex350. Also in voice call mode, the mobile phone ex114 amplifies data received via the antenna ex350, applies frequency conversion and analog-to-digital conversion, applies spectrum despreading in the modulation/demodulation unit ex352, converts the result into an analog audio signal in the audio signal processing unit ex354, and outputs it from the audio output unit ex357.
 Further, when an e-mail is transmitted in data communication mode, text data of the e-mail entered via the operation key unit ex366 or the like of the main body is sent to the main control unit ex360 through the operation input control unit ex362. The main control unit ex360 applies spread spectrum processing to the text data in the modulation/demodulation unit ex352, applies digital-to-analog conversion and frequency conversion in the transmission/reception unit ex351, and then transmits the result to the base station ex110 via the antenna ex350. When an e-mail is received, roughly the reverse processing is applied to the received data, and the result is output to the display unit ex358.
 When video, still images, or video and audio are transmitted in data communication mode, the video signal processing unit ex355 compression-encodes the video signal supplied from the camera unit ex365 by the moving picture encoding method described in each of the above embodiments (that is, it functions as the image encoding device according to one aspect of the present invention), and sends the encoded video data to the multiplexing/demultiplexing unit ex353. The audio signal processing unit ex354 encodes the audio signal collected by the audio input unit ex356 while the camera unit ex365 captures the video, still images, or the like, and sends the encoded audio data to the multiplexing/demultiplexing unit ex353.
 The multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354 using a predetermined scheme. The resulting multiplexed data is subjected to spread spectrum processing in the modulation/demodulation unit (modulation/demodulation circuit unit) ex352, to digital-to-analog conversion and frequency conversion in the transmission/reception unit ex351, and is then transmitted via the antenna ex350.
 When data of a video file linked to a web page or the like is received in data communication mode, or when an e-mail with video and/or audio attached is received, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data received via the antenna ex350 into a bit stream of video data and a bit stream of audio data, supplies the encoded video data to the video signal processing unit ex355 via the synchronization bus ex370, and supplies the encoded audio data to the audio signal processing unit ex354. The video signal processing unit ex355 decodes the video signal using the moving picture decoding method corresponding to the moving picture encoding method described in each of the above embodiments (that is, it functions as the image decoding device according to one aspect of the present invention), and the video and still images included in, for example, the video file linked to the web page are displayed on the display unit ex358 via the LCD control unit ex359. The audio signal processing unit ex354 decodes the audio signal, and the audio is output from the audio output unit ex357.
 As with the television ex300, three implementation forms are conceivable for a terminal such as the mobile phone ex114: a transmission/reception terminal having both an encoder and a decoder, a transmission terminal having only an encoder, and a reception terminal having only a decoder. Furthermore, although the digital broadcasting system ex200 has been described as receiving and transmitting multiplexed data in which music data and the like are multiplexed with video data, the data may instead be data in which character data related to the video is multiplexed in addition to audio data, or may be the video data itself rather than multiplexed data.
 As described above, the moving picture encoding method or the moving picture decoding method described in each of the above embodiments can be used in any of the devices and systems described above, and by doing so, the effects described in the above embodiments can be obtained.
 The present invention is not limited to the above embodiments, and various changes and modifications can be made without departing from the scope of the present invention.
 (Embodiment 5)
 Video data can also be generated by switching, as necessary, between the moving picture encoding method or apparatus described in each of the above embodiments and a moving picture encoding method or apparatus compliant with a different standard such as MPEG-2, MPEG4-AVC, or VC-1.
 Here, when a plurality of video data items conforming to different standards are generated, a decoding method corresponding to each standard must be selected at the time of decoding. However, because it cannot be identified which standard the video data to be decoded conforms to, there is a problem in that an appropriate decoding method cannot be selected.
 To solve this problem, multiplexed data obtained by multiplexing audio data and the like with video data is configured to include identification information indicating which standard the video data conforms to. A specific configuration of multiplexed data including video data generated by the moving picture encoding method or apparatus described in each of the above embodiments is described below. The multiplexed data is a digital stream in the MPEG-2 transport stream format.
 FIG. 24 is a diagram showing the structure of the multiplexed data. As shown in FIG. 24, the multiplexed data is obtained by multiplexing one or more of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream (IG). The video stream represents the primary video and secondary video of a movie, the audio stream represents the primary audio portion of the movie and the secondary audio to be mixed with the primary audio, and the presentation graphics stream represents the subtitles of the movie. Here, the primary video is the ordinary video displayed on a screen, and the secondary video is video displayed in a smaller window within the primary video. The interactive graphics stream represents an interactive screen created by arranging GUI components on the screen. The video stream is encoded by the moving picture encoding method or apparatus described in each of the above embodiments, or by a moving picture encoding method or apparatus compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1. The audio stream is encoded by a scheme such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.
 Each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the movie's video, 0x1100 to 0x111F to the audio streams, 0x1200 to 0x121F to the presentation graphics streams, 0x1400 to 0x141F to the interactive graphics streams, 0x1B00 to 0x1B1F to video streams used for the movie's secondary video, and 0x1A00 to 0x1A1F to audio streams used for the secondary audio mixed with the primary audio.
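 The PID ranges above amount to a simple range classifier. The following sketch illustrates that mapping; the function name and the category strings are our own labels, not part of any standard:

```python
def classify_pid(pid: int) -> str:
    """Map a PID to a stream category, following the example
    assignments described above (illustrative, not normative)."""
    if pid == 0x1011:
        return "primary video"
    if 0x1100 <= pid <= 0x111F:
        return "audio"
    if 0x1200 <= pid <= 0x121F:
        return "presentation graphics"
    if 0x1400 <= pid <= 0x141F:
        return "interactive graphics"
    if 0x1B00 <= pid <= 0x1B1F:
        return "secondary video"
    if 0x1A00 <= pid <= 0x1A1F:
        return "secondary audio"
    return "other"

print(classify_pid(0x1011))  # primary video
print(classify_pid(0x1103))  # audio
```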
 FIG. 25 is a diagram schematically showing how the multiplexed data is multiplexed. First, a video stream ex235 composed of a plurality of video frames and an audio stream ex238 composed of a plurality of audio frames are converted into PES packet sequences ex236 and ex239, respectively, and then into TS packets ex237 and ex240. Similarly, the data of a presentation graphics stream ex241 and of interactive graphics ex244 are converted into PES packet sequences ex242 and ex245, respectively, and further into TS packets ex243 and ex246. The multiplexed data ex247 is formed by multiplexing these TS packets into a single stream.
 FIG. 26 shows in more detail how the video stream is stored in a PES packet sequence. The first row in FIG. 26 shows the video frame sequence of the video stream, and the second row shows the PES packet sequence. As indicated by arrows yy1, yy2, yy3, and yy4 in FIG. 26, the I pictures, B pictures, and P pictures that are the Video Presentation Units in the video stream are divided picture by picture and stored in the payloads of PES packets. Each PES packet has a PES header, and the PES header stores a PTS (Presentation Time-Stamp), which is the display time of the picture, and a DTS (Decoding Time-Stamp), which is the decoding time of the picture.
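 The PTS and DTS mentioned above are each carried in 5 bytes of the PES header as a 33-bit value split into 3 + 15 + 15 bits with marker bits, following the MPEG-2 systems bit layout. A minimal decoding sketch:

```python
def decode_timestamp(b: bytes) -> int:
    """Decode the 33-bit PTS/DTS carried in 5 bytes of a PES header.

    Bit layout per MPEG-2 systems: a 4-bit prefix, then 3 + 15 + 15
    timestamp bits, each group followed by a marker bit."""
    return (
        ((b[0] >> 1) & 0x07) << 30
        | b[1] << 22
        | (b[2] >> 1) << 15
        | b[3] << 7
        | b[4] >> 1
    )

# 90000 ticks of the 90 kHz system clock = a PTS of exactly 1 second:
print(decode_timestamp(bytes([0x21, 0x00, 0x05, 0xBF, 0x21])))  # 90000
```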
 FIG. 27 shows the format of the TS packets finally written into the multiplexed data. A TS packet is a 188-byte fixed-length packet composed of a 4-byte TS header carrying information such as the PID identifying the stream, and a 184-byte TS payload storing the data; the PES packets are divided and stored in the TS payloads. In the case of a BD-ROM, a 4-byte TP_Extra_Header is attached to each TS packet to form a 192-byte source packet, which is written into the multiplexed data. The TP_Extra_Header carries information such as an ATS (Arrival_Time_Stamp). The ATS indicates the time at which transfer of the TS packet to the PID filter of the decoder starts. In the multiplexed data, the source packets are arranged as shown in the lower part of FIG. 27, and the number incremented from the head of the multiplexed data is called the SPN (Source Packet Number).
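 The byte layout just described can be illustrated with a small parser. This is a sketch under simplifying assumptions (it takes the ATS from the lower 30 bits of the TP_Extra_Header and ignores the TS adaptation field), not a full demultiplexer:

```python
import struct

TS_PACKET_SIZE = 188
SOURCE_PACKET_SIZE = 192  # 4-byte TP_Extra_Header + 188-byte TS packet

def parse_source_packet(sp: bytes):
    """Split one 192-byte source packet into its ATS and the TS
    header fields described above (illustrative sketch)."""
    assert len(sp) == SOURCE_PACKET_SIZE
    extra, ts = sp[:4], sp[4:]
    # The lower 30 bits of TP_Extra_Header carry the ATS.
    ats = struct.unpack(">I", extra)[0] & 0x3FFFFFFF
    assert ts[0] == 0x47                   # TS sync byte
    pid = ((ts[1] & 0x1F) << 8) | ts[2]    # 13-bit PID in the 4-byte header
    payload = ts[4:]                       # 184-byte payload (no adaptation field assumed)
    return ats, pid, payload
```

 For example, a packet built with ATS 1000 and PID 0x1011 parses back into those values and a 184-byte payload.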
 In addition to the video, audio, and subtitle streams, the TS packets included in the multiplexed data include a PAT (Program Association Table), a PMT (Program Map Table), a PCR (Program Clock Reference), and the like. The PAT indicates the PID of the PMT used in the multiplexed data, and the PID of the PAT itself is registered as 0. The PMT holds the PIDs of the video, audio, subtitle, and other streams included in the multiplexed data and the attribute information of the stream corresponding to each PID, and also holds various descriptors relating to the multiplexed data. The descriptors include, for example, copy control information indicating whether copying of the multiplexed data is permitted. To synchronize the ATC (Arrival Time Clock), which is the time axis of the ATS, with the STC (System Time Clock), which is the time axis of the PTS and DTS, the PCR carries information on the STC time corresponding to the ATS at which the PCR packet is transferred to the decoder.
 FIG. 28 is a diagram explaining the data structure of the PMT in detail. At the head of the PMT is a PMT header describing, for example, the length of the data included in the PMT. It is followed by a plurality of descriptors relating to the multiplexed data; the copy control information mentioned above and the like are described as descriptors. The descriptors are followed by a plurality of pieces of stream information relating to the streams included in the multiplexed data. Each piece of stream information is composed of stream descriptors describing a stream type for identifying the compression codec of the stream, the PID of the stream, and the attribute information of the stream (frame rate, aspect ratio, etc.). There are as many stream descriptors as there are streams in the multiplexed data.
 When the multiplexed data is recorded on a recording medium or the like, it is recorded together with a multiplexed data information file.
 As shown in FIG. 29, the multiplexed data information file is management information for the multiplexed data, corresponds one-to-one with the multiplexed data, and is composed of multiplexed data information, stream attribute information, and an entry map.
 As shown in FIG. 29, the multiplexed data information is composed of a system rate, a playback start time, and a playback end time. The system rate indicates the maximum transfer rate of the multiplexed data to the PID filter of the system target decoder described later. The intervals between the ATSs included in the multiplexed data are set so as not to exceed the system rate. The playback start time is the PTS of the first video frame of the multiplexed data, and the playback end time is set to the PTS of the last video frame of the multiplexed data plus the playback interval of one frame.
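 In 90 kHz PTS units, the playback end time described above is a small calculation. The frame rate below is an assumed example, not a value from the text:

```python
CLOCK_HZ = 90000  # PTS/DTS tick rate

def playback_end_time(last_frame_pts: int, fps: float) -> int:
    """Playback end time = PTS of the last video frame plus the
    playback interval of one frame, in 90 kHz ticks."""
    return last_frame_pts + round(CLOCK_HZ / fps)

# Last frame at t = 10 s in a 30 fps stream:
print(playback_end_time(900000, 30.0))  # 903000
```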
 As shown in FIG. 30, in the stream attribute information, attribute information for each stream included in the multiplexed data is registered for each PID. The attribute information differs for the video stream, the audio stream, the presentation graphics stream, and the interactive graphics stream. The video stream attribute information includes information such as the compression codec used to compress the video stream, the resolution of the individual picture data constituting the video stream, the aspect ratio, and the frame rate. The audio stream attribute information includes information such as the compression codec used to compress the audio stream, the number of channels included in the audio stream, the language it corresponds to, and the sampling frequency. These pieces of information are used, for example, to initialize the decoder before playback by the player.
 In the present embodiment, of the multiplexed data, the stream type included in the PMT is used. When the multiplexed data is recorded on a recording medium, the video stream attribute information included in the multiplexed data information is used. Specifically, the moving picture encoding method or apparatus described in each of the above embodiments is provided with a step or means for setting, in the stream type included in the PMT or in the video stream attribute information, unique information indicating that the video data was generated by the moving picture encoding method or apparatus described in each of the above embodiments. With this configuration, video data generated by the moving picture encoding method or apparatus described in each of the above embodiments can be distinguished from video data compliant with other standards.
 FIG. 31 shows the steps of the moving picture decoding method according to the present embodiment. In step exS100, the stream type included in the PMT, or the video stream attribute information included in the multiplexed data information, is obtained from the multiplexed data. Next, in step exS101, it is determined whether the stream type or the video stream attribute information indicates that the multiplexed data was generated by the moving picture encoding method or apparatus described in each of the above embodiments. When it is determined that the stream type or the video stream attribute information indicates data generated by the moving picture encoding method or apparatus described in each of the above embodiments, decoding is performed in step exS102 by the moving picture decoding method described in each of the above embodiments. When the stream type or the video stream attribute information indicates conformance to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, decoding is performed in step exS103 by a moving picture decoding method compliant with that conventional standard.
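 The branch in steps exS101 through exS103 can be sketched as a dispatch on the stream type. The value 0x24 standing in for the method of the embodiments is hypothetical, and the conventional values are illustrative placeholders rather than a claim about this patent's signaling:

```python
# Hypothetical stream-type value for video produced by the method of the
# above embodiments; the conventional values below are placeholders too.
EMBODIMENT_STREAM_TYPES = {0x24}
CONVENTIONAL_STREAM_TYPES = {0x02: "MPEG-2", 0x1B: "MPEG4-AVC", 0xEA: "VC-1"}

def select_decoder(stream_type: int) -> str:
    """Mirror steps exS100-exS103: choose a decoding method from the
    stream type carried in the PMT (illustrative sketch)."""
    if stream_type in EMBODIMENT_STREAM_TYPES:    # exS101 yes -> exS102
        return "decode with the method of the above embodiments"
    if stream_type in CONVENTIONAL_STREAM_TYPES:  # exS101 no -> exS103
        name = CONVENTIONAL_STREAM_TYPES[stream_type]
        return f"decode with the conventional {name} decoder"
    raise ValueError(f"unknown stream type 0x{stream_type:02X}")
```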
 In this way, by setting a new unique value in the stream type or the video stream attribute information, it can be determined at the time of decoding whether decoding is possible with the moving picture decoding method or apparatus described in each of the above embodiments. Therefore, even when multiplexed data compliant with a different standard is input, an appropriate decoding method or apparatus can be selected, and decoding can be performed without errors. The moving picture encoding method or apparatus, or the moving picture decoding method or apparatus, described in this embodiment can also be used in any of the devices and systems described above.
 (Embodiment 6)
 The moving picture encoding method and apparatus and the moving picture decoding method and apparatus described in each of the above embodiments are typically realized as an LSI, which is an integrated circuit. As an example, FIG. 32 shows the configuration of the LSI ex500 implemented as a single chip. The LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and these elements are connected to one another via a bus ex510. When the power is on, the power supply circuit unit ex505 supplies power to each unit, starting it up into an operable state.
 For example, when encoding is performed, the LSI ex500 receives an AV signal from the microphone ex117, the camera ex113, or the like through the AV I/O ex509 under control of the control unit ex501, which includes the CPU ex502, a memory controller ex503, a stream controller ex504, a driving frequency control unit ex512, and the like. The input AV signal is temporarily stored in an external memory ex511 such as an SDRAM. Under control of the control unit ex501, the stored data is sent to the signal processing unit ex507, divided into portions as appropriate according to the processing amount and processing speed, and the signal processing unit ex507 encodes the audio signal and/or the video signal. Here, the encoding of the video signal is the encoding described in each of the above embodiments. The signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data as the case requires, and outputs the result from the stream I/O ex506 to the outside. The output multiplexed data is transmitted toward the base station ex107 or written to the recording medium ex215. Note that the data should be temporarily stored in the buffer ex508 so that the streams are synchronized when multiplexed.
 In the above description, the memory ex511 is described as being external to the LSI ex500, but it may be included inside the LSI ex500. The buffer ex508 is not limited to a single buffer; a plurality of buffers may be provided. The LSI ex500 may be implemented as a single chip or as a plurality of chips.
 Also, in the above description, the control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the driving frequency control unit ex512, and the like, but the configuration of the control unit ex501 is not limited to this. For example, the signal processing unit ex507 may further include a CPU; providing a CPU inside the signal processing unit ex507 as well makes it possible to further improve the processing speed. As another example, the CPU ex502 may include the signal processing unit ex507, or a part of the signal processing unit ex507 such as an audio signal processing unit. In such a case, the control unit ex501 includes the CPU ex502, which in turn includes the signal processing unit ex507 or a part of it.
 Although the term LSI is used here, the circuit may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
 The method of circuit integration is not limited to LSI; it may be realized with a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used. Such a programmable logic device can typically execute the moving picture encoding method or the moving picture decoding method described in each of the above embodiments by loading, or reading from a memory or the like, a program constituting software or firmware.
 Furthermore, if circuit integration technology that replaces LSI emerges from advances in semiconductor technology or from another derived technology, the functional blocks may of course be integrated using that technology. Application of biotechnology is one possibility.
 (Embodiment 7)
 When decoding video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the amount of processing is expected to be larger than when decoding video data compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1. Therefore, in the LSI ex500, a driving frequency higher than the driving frequency of the CPU ex502 used when decoding video data compliant with a conventional standard must be set. However, raising the driving frequency raises the problem of increased power consumption.
 To solve this problem, a moving picture decoding apparatus such as the television ex300 or the LSI ex500 is configured to identify which standard the video data conforms to and to switch the driving frequency according to the standard. FIG. 33 shows a configuration ex800 in the present embodiment. When the video data was generated by the moving picture encoding method or apparatus described in each of the above embodiments, the driving frequency switching unit ex803 sets the driving frequency high and instructs the decoding processing unit ex801, which executes the moving picture decoding method described in each of the above embodiments, to decode the video data. On the other hand, when the video data is compliant with a conventional standard, the driving frequency switching unit ex803 sets the driving frequency lower than in the case where the video data was generated by the moving picture encoding method or apparatus described in each of the above embodiments, and instructs the decoding processing unit ex802, which is compliant with the conventional standard, to decode the video data.
 More specifically, the drive frequency switching unit ex803 includes the CPU ex502 and the drive frequency control unit ex512 shown in FIG. 32. The decoding processing unit ex801, which executes the moving picture decoding method described in the above embodiments, and the decoding processing unit ex802, which conforms to a conventional standard, correspond to the signal processing unit ex507 in FIG. 32. The CPU ex502 identifies which standard the video data conforms to; based on a signal from the CPU ex502, the drive frequency control unit ex512 sets the drive frequency, and the signal processing unit ex507 decodes the video data. For identifying the video data, the identification information described in Embodiment 5 may be used, for example. The identification information is not limited to that described in Embodiment 5; any information that indicates which standard the video data conforms to may be used. For example, when the applicable standard can be identified from an external signal indicating whether the video data is used for a television or for a disk, the identification may be based on such an external signal. The CPU ex502 may select the drive frequency based on, for example, a look-up table that associates video-data standards with drive frequencies, as shown in FIG. 35. By storing the look-up table in the buffer ex508 or in the internal memory of the LSI and referring to it, the CPU ex502 can select the drive frequency.
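The look-up-table-based selection described above can be sketched as follows. This is a minimal illustration, not taken from the patent: the table contents, standard names, and frequency values are assumptions chosen only to show the mechanism by which the CPU ex502 maps an identified standard to a drive frequency.

```python
# Illustrative look-up table associating video-data standards with drive
# frequencies (cf. FIG. 35). "embodiment" stands for data generated by the
# encoding method of the embodiments; all values are hypothetical.
DRIVE_FREQUENCY_MHZ = {
    "embodiment": 500,
    "MPEG-2": 350,
    "MPEG4-AVC": 400,
    "VC-1": 350,
}

def select_drive_frequency(standard: str) -> int:
    """Return the drive frequency (MHz) for the identified standard."""
    return DRIVE_FREQUENCY_MHZ[standard]
```

In this sketch the table plays the role of the look-up table stored in the buffer ex508 or the internal memory of the LSI.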
 FIG. 34 shows the steps for carrying out the method of the present embodiment. First, in step exS200, the signal processing unit ex507 obtains the identification information from the multiplexed data. Next, in step exS201, the CPU ex502 determines, based on the identification information, whether the video data has been generated by the encoding method or apparatus described in the above embodiments. If so, in step exS202 the CPU ex502 sends a signal for setting the drive frequency high to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a high drive frequency. If the identification information instead indicates that the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, in step exS203 the CPU ex502 sends a signal for setting the drive frequency low to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a drive frequency lower than that used when the video data has been generated by the encoding method or apparatus described in the above embodiments.
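The flow of steps exS200 to exS203 can be sketched as a simple branch; the string values used for the identification information and the returned frequency setting are illustrative assumptions, not part of the patent.

```python
def set_drive_frequency(identification_info: str) -> str:
    """Sketch of steps exS201-exS203: choose the drive-frequency setting
    from identification information already extracted from the
    multiplexed data (step exS200)."""
    # exS201: was the video data generated by the encoding method of the embodiments?
    if identification_info == "embodiment":
        # exS202: signal the drive frequency control unit (ex512) to set a high frequency
        return "high"
    # exS203: conventional standard (MPEG-2, MPEG4-AVC, VC-1, ...) -> low frequency
    return "low"
```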
 Furthermore, the power-saving effect can be enhanced by changing the voltage applied to the LSI ex500, or to a device including the LSI ex500, in conjunction with the switching of the drive frequency. For example, when the drive frequency is set low, the voltage applied to the LSI ex500 or to the device including it may accordingly be set lower than when the drive frequency is set high.
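Coupling the supply voltage to the drive frequency, as described above, can be sketched as follows; the specific voltage values are illustrative assumptions only.

```python
def supply_voltage(drive_frequency: str) -> float:
    """Return a supply voltage (V) matched to the drive-frequency setting:
    a lower drive frequency permits a lower voltage, saving power."""
    return 1.1 if drive_frequency == "high" else 0.9
```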
 The method of setting the drive frequency is not limited to the one described above; it suffices to set the drive frequency high when the amount of decoding processing is large and low when it is small. For example, when the amount of processing required to decode video data conforming to the MPEG4-AVC standard exceeds the amount required to decode video data generated by the moving picture encoding method or apparatus described in the above embodiments, the drive frequency settings may be reversed from those described above.
 Furthermore, the setting method is not limited to lowering the drive frequency. For example, when the identification information indicates that the video data has been generated by the moving picture encoding method or apparatus described in the above embodiments, the voltage applied to the LSI ex500 or to a device including it may be set high; when it indicates that the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, the voltage may be set low. As another example, when the identification information indicates that the video data has been generated by the moving picture encoding method or apparatus described in the above embodiments, driving of the CPU ex502 is not suspended; when it indicates that the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1, driving of the CPU ex502 may be temporarily suspended because there is processing headroom. Even when the identification information indicates that the video data has been generated by the moving picture encoding method or apparatus described in the above embodiments, driving of the CPU ex502 may be temporarily suspended if there is processing headroom; in this case, the suspension time may be set shorter than when the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
 By switching the drive frequency according to the standard to which the video data conforms in this way, power can be saved. Moreover, when the LSI ex500 or a device including it is driven by a battery, the power saving extends the battery life.
 (Embodiment 8)
 A plurality of video data streams conforming to different standards may be input to the devices and systems described above, such as televisions and mobile phones. To enable decoding even when video data conforming to different standards is input, the signal processing unit ex507 of the LSI ex500 needs to support the multiple standards. However, providing a separate signal processing unit ex507 for each standard increases the circuit scale of the LSI ex500 and raises its cost.
 To solve this problem, a decoding processing unit for executing the moving picture decoding method described in the above embodiments is partly shared with a decoding processing unit conforming to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1. This configuration example is shown as ex900 in FIG. 36A. For example, the moving picture decoding method described in the above embodiments and a moving picture decoding method conforming to the MPEG4-AVC standard share part of their processing, such as entropy decoding, inverse quantization, deblocking filtering, and motion compensation. For the shared processing, the decoding processing unit ex902 corresponding to the MPEG4-AVC standard is shared, while a dedicated decoding processing unit ex901 is used for processing specific to this aspect of the present invention that is not covered by the MPEG4-AVC standard. In particular, since this aspect of the present invention is characterized by its key frame processing, it is conceivable to use the dedicated decoding processing unit ex901 for key frame processing and to share the decoding processing unit for any or all of the other processes, such as entropy decoding, inverse quantization, deblocking filtering, and motion compensation. Conversely, the decoding processing unit for executing the moving picture decoding method described in the above embodiments may be shared for the common processing, and a dedicated decoding processing unit may be used for processing specific to the MPEG4-AVC standard.
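The split between the dedicated and shared decoding processing units in configuration ex900 can be sketched as a simple dispatch. The unit labels ex901 and ex902 come from the text; the stage names and the dispatch function itself are illustrative assumptions.

```python
# Processing stages whose logic is common with the MPEG4-AVC decoder and can
# therefore run on the shared unit ex902 (illustrative names).
SHARED_STAGES = {
    "entropy_decoding",
    "inverse_quantization",
    "deblocking_filter",
    "motion_compensation",
}

def decoding_unit_for(stage: str) -> str:
    """Return which decoding processing unit handles a given stage."""
    if stage == "key_frame_processing":
        return "ex901"  # dedicated unit for the key-frame processing specific to this aspect
    if stage in SHARED_STAGES:
        return "ex902"  # unit shared with the MPEG4-AVC decoder
    raise ValueError(f"unknown stage: {stage}")
```

Only the stage that is specific to this aspect of the invention needs dedicated circuitry; everything else reuses the conventional decoder, which is the source of the circuit-scale and cost reduction described below.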
 FIG. 36B shows another example, ex1000, in which processing is partially shared. This example uses a dedicated decoding processing unit ex1001 for processing specific to this aspect of the present invention, a dedicated decoding processing unit ex1002 for processing specific to another conventional standard, and a shared decoding processing unit ex1003 for processing common to the moving picture decoding method according to this aspect of the present invention and the moving picture decoding method of the other conventional standard. The dedicated decoding processing units ex1001 and ex1002 are not necessarily specialized for this aspect of the present invention or for the other conventional standard, respectively, and may be capable of executing other general-purpose processing. The configuration of the present embodiment can also be implemented by the LSI ex500.
 As described above, sharing a decoding processing unit for the processing common to the moving picture decoding method according to an aspect of the present invention and a conventional moving picture decoding method makes it possible to reduce the circuit scale of the LSI and to reduce cost.
 The present invention is applicable to an image encoding method, an image decoding method, an image encoding device, and an image decoding device. It can also be used in high-resolution information display devices and imaging devices equipped with an image encoding device, such as televisions, digital video recorders, car navigation systems, mobile phones, digital cameras, and digital video cameras.
 101A, 101B, 101C Camera
 102, 102A, 102B, 102C, 200 Image encoding device
 103, 103A, 103B, 103C, 250 Code sequence
 104 Key frame
 105 Normal frame
 106, 300 Image decoding device
 201, 306 Prediction unit
 202 Subtraction unit
 203 Transform/quantization unit
 204 Variable-length encoding unit
 205, 302 Inverse quantization/inverse transform unit
 206, 303 Addition unit
 207, 304 Prediction control unit
 208, 305 Selection unit
 209, 307 Key frame memory
 210, 308 Neighboring frame memory
 211, 309 Intra frame memory
 212, 310 Frame memory
 251 Moving picture data
 252 Difference signal
 253, 351 Quantized signal
 254, 352 Decoded difference signal
 255, 350 Decoded image
 256, 353 Reference image
 257, 354 Predicted image
 258, 355 Prediction parameter
 259, 356 Image data
 301 Variable-length decoding unit
 400 Storage
 500 System
 501 Database
 502 Control unit

Claims (22)

  1.  An image encoding method for encoding a plurality of images, the method comprising:
     a selection step of selecting, from the plurality of images, a key frame that is randomly accessible; and
     an encoding step of encoding the key frame using inter prediction that refers to a reference picture different from the key frame.

  2.  The image encoding method according to claim 1, further comprising:
     an information encoding step of encoding information for identifying, from among the plurality of images, a key frame encoded using the inter prediction.

  3.  The image encoding method according to claim 1 or 2, wherein the reference picture is a picture that is adjacent to the key frame in neither decoding order nor display order.

  4.  The image encoding method according to claim 3, wherein the reference picture is a long-term reference picture.

  5.  The image encoding method according to any one of claims 1 to 4, wherein:
     in the selection step, a plurality of key frames including the key frame are selected from the plurality of images; and
     in the encoding step, a target key frame included in the plurality of key frames is encoded with reference to another key frame among the plurality of key frames.

  6.  The image encoding method according to claim 1 or 2, wherein the reference picture is an image obtained via a network.

  7.  The image encoding method according to any one of claims 1 to 6, wherein the reference picture is an image encoded by an encoding method different from that of the key frame.

  8.  The image encoding method according to any one of claims 1 to 7, wherein, in the encoding step:
     a background region is determined among regions of the key frame; and
     for the background region, information for identifying the reference picture is encoded and image information of the background region is not encoded.

  9.  The image encoding method according to any one of claims 1 to 7, wherein, in the encoding step:
     a degree of similarity between the key frame and the reference picture is determined; and
     when the degree of similarity is greater than or equal to a predetermined value, information for identifying the reference picture is encoded and image information of the key frame is not encoded.

  10.  An image decoding method for decoding a plurality of images, the method comprising:
     a determination step of determining, from the plurality of images, a key frame that is randomly accessible; and
     a decoding step of decoding the key frame using inter prediction that refers to a reference picture different from the key frame.

  11.  The image decoding method according to claim 10, further comprising:
     an information decoding step of decoding information for identifying, from among the plurality of images, a key frame encoded using the inter prediction,
     wherein in the determination step, it is further determined, based on the information, whether the key frame is a key frame encoded using inter prediction, and
     in the decoding step, the key frame encoded using the inter prediction is decoded using inter prediction.

  12.  The image decoding method according to claim 10 or 11, wherein the reference picture is a picture that is adjacent to the key frame in neither decoding order nor display order.

  13.  The image decoding method according to claim 12, wherein the reference picture is a long-term reference picture.

  14.  The image decoding method according to any one of claims 10 to 13, wherein:
     in the determination step, a plurality of key frames including the key frame are determined from the plurality of images; and
     in the decoding step, a target key frame included in the plurality of key frames is decoded with reference to another key frame among the plurality of key frames.

  15.  The image decoding method according to claim 10 or 11, wherein the reference picture is an image obtained via a network.

  16.  The image decoding method according to any one of claims 10 to 15, wherein the reference picture is an image encoded by an encoding method different from that of the key frame.

  17.  The image decoding method according to any one of claims 10 to 16, wherein, in the decoding step, information for identifying the reference picture is decoded for a background region among regions of the key frame, and the reference picture identified by the information is output as an image of the background region.

  18.  The image decoding method according to any one of claims 10 to 16, wherein, in the decoding step, information for identifying the reference picture is decoded, and the reference picture identified by the information is output as the key frame.

  19.  The image decoding method according to any one of claims 10 to 18, further comprising:
     an obtaining step of obtaining, from a storage device storing a plurality of images, an image that matches a shooting condition of a designated image; and
     a storing step of storing the obtained image as the reference picture.

  20.  The image decoding method according to claim 19, wherein the shooting condition is a time at which an image was shot, a place where an image was shot, or the weather when an image was shot.

  21.  An image encoding device for encoding a plurality of images, the device comprising:
     a selection unit configured to select, from the plurality of images, a key frame that is randomly accessible; and
     an encoding unit configured to encode the key frame using inter prediction that refers to a reference picture different from the key frame.

  22.  An image decoding device for decoding a plurality of images, the device comprising:
     a determination unit configured to determine, from the plurality of images, a key frame that is randomly accessible; and
     a decoding unit configured to decode the key frame using inter prediction that refers to a reference picture different from the key frame.
PCT/JP2015/002969 2014-07-03 2015-06-15 Image encoding method, image decoding method, image encoding device, and image decoding device WO2016002140A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201462020542P 2014-07-03 2014-07-03
US62/020,542 2014-07-03
JP2015-077493 2015-04-06
JP2015077493A JP2018142752A (en) 2014-07-03 2015-04-06 Image encoding method, image decoding method, image encoding device, and image decoding device

Publications (1)

Publication Number Publication Date
WO2016002140A1 true WO2016002140A1 (en) 2016-01-07

Family

ID=55018721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/002969 WO2016002140A1 (en) 2014-07-03 2015-06-15 Image encoding method, image decoding method, image encoding device, and image decoding device

Country Status (1)

Country Link
WO (1) WO2016002140A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107343205A (en) * 2016-04-28 2017-11-10 浙江大华技术股份有限公司 A kind of coding method of long term reference code stream and code device
CN107343205B (en) * 2016-04-28 2019-07-16 浙江大华技术股份有限公司 A kind of coding method of long term reference code stream and code device
CN113362233A (en) * 2020-03-03 2021-09-07 浙江宇视科技有限公司 Picture processing method, device, equipment, system and storage medium
WO2023078048A1 (en) * 2021-11-06 2023-05-11 中兴通讯股份有限公司 Video bitstream encapsulation method and apparatus, video bitstream decoding method and apparatus, and video bitstream access method and apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002281508A (en) * 2001-03-19 2002-09-27 Kddi Corp Skip area detection type moving image encoder and recording medium
JP2005340896A (en) * 2004-05-24 2005-12-08 Mitsubishi Electric Corp Motion picture encoder
WO2006003814A1 (en) * 2004-07-01 2006-01-12 Mitsubishi Denki Kabushiki Kaisha Video information recording medium which can be accessed at random, recording method, reproduction device, and reproduction method
JP2007535208A (en) * 2004-04-28 2007-11-29 松下電器産業株式会社 STREAM GENERATION DEVICE, METHOD, STREAM REPRODUCTION DEVICE, METHOD, AND RECORDING MEDIUM
WO2013031785A1 (en) * 2011-09-01 2013-03-07 日本電気株式会社 Captured image compression and transmission method and captured image compression and transmission system
WO2013074410A1 (en) * 2011-11-16 2013-05-23 Qualcomm Incorporated Constrained reference picture sets in wave front parallel processing of video data



Similar Documents

Publication Publication Date Title
JP6222589B2 (en) Decoding method and decoding apparatus
JP6210248B2 (en) Moving picture coding method and moving picture coding apparatus
JP6394966B2 (en) Encoding method, decoding method, encoding device, and decoding device using temporal motion vector prediction
WO2013035313A1 (en) Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
JP2019169975A (en) Transmission method and transmission apparatus
WO2013114860A1 (en) Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
JP6004375B2 (en) Image encoding method and image decoding method
JP6414712B2 (en) Moving picture encoding method, moving picture decoding method, moving picture encoding apparatus, and moving picture decoding method using a large number of reference pictures
WO2012023281A1 (en) Video image decoding method, video image encoding method, video image decoding apparatus, and video image encoding apparatus
JP6587046B2 (en) Image encoding method, image decoding method, image encoding device, and image decoding device
KR102130046B1 (en) Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
WO2012120840A1 (en) Image decoding method, image coding method, image decoding device and image coding device
JP2013187905A (en) Methods and apparatuses for encoding and decoding video
JP6483028B2 (en) Image encoding method and image encoding apparatus
JP2015506596A (en) Moving picture coding method, moving picture coding apparatus, moving picture decoding method, and moving picture decoding apparatus
JP5873029B2 (en) Video encoding method and video decoding method
WO2013164903A1 (en) Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding and decoding device
JP6365924B2 (en) Image decoding method and image decoding apparatus
JP5680812B1 (en) Image encoding method, image decoding method, image encoding device, and image decoding device
WO2016002140A1 (en) Image encoding method, image decoding method, image encoding device, and image decoding device
WO2011132400A1 (en) Image coding method and image decoding method
JP2014039252A (en) Image decoding method and image decoding device
WO2013136678A1 (en) Image decoding device and image decoding method
JP2015180038A (en) Image encoder, image decoder, image processing system, image encoding method and image decoding method
JP2018142752A (en) Image encoding method, image decoding method, image encoding device, and image decoding device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15814291

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15814291

Country of ref document: EP

Kind code of ref document: A1