WO2016002140A1 - Procédé de codage d'image, procédé de décodage d'image, dispositif de codage d'image et dispositif de décodage d'image - Google Patents

Image encoding method, image decoding method, image encoding device, and image decoding device

Info

Publication number
WO2016002140A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
key frame
decoding
reference picture
encoding
Prior art date
Application number
PCT/JP2015/002969
Other languages
English (en)
Japanese (ja)
Inventor
寿郎 笹井
哲史 吉川
健吾 寺田
Original Assignee
Panasonic Intellectual Property Corporation of America
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP2015077493A external-priority patent/JP2018142752A/ja
Application filed by Panasonic Intellectual Property Corporation of America
Publication of WO2016002140A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/58 Motion compensation with long-term prediction, i.e. the reference frame for a current frame not being the temporally closest one

Definitions

  • the present invention relates to an image encoding method or an image decoding method.
  • Non-Patent Document 1 relates to High Efficiency Video Coding (HEVC).
  • the predetermined frame is an I frame that is intra (in-screen) encoded in MPEG-4 AVC, HEVC, and the like.
  • JCT-VC (Joint Collaborative Team on Video Coding), "High Efficiency Video Coding (HEVC) text specification draft 10 (for FDIS & Last Call)", JCTVC-L1003-v34, http://phenix.it-sudparis.eu/jct/doc_end_user/documents/12_Geneva/wg11/JCTVC-L1003-v34.zip
  • an object of the present invention is to provide an image encoding method capable of efficiently encoding an image or an image decoding method capable of efficiently decoding an image.
  • an image encoding method according to one aspect is an image encoding method for encoding a plurality of images, and includes a selection step of selecting a randomly accessible key frame from the plurality of images, and an encoding step of encoding the key frame using inter prediction that refers to a reference picture different from the key frame.
  • An image decoding method according to one aspect is an image decoding method for decoding a plurality of images, and includes a determination step of determining a randomly accessible key frame from the plurality of images, and a decoding step of decoding the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the present invention can provide an image encoding method capable of efficiently encoding an image or an image decoding method capable of efficiently decoding an image.
  • FIG. 1A is a diagram illustrating an example of an image distribution system.
  • FIG. 1B is a diagram for explaining a reproduction operation in the image decoding apparatus.
  • FIG. 2 is a diagram illustrating an example of a conventional code string.
  • FIG. 3A is a diagram illustrating an example of the operation of the image distribution system.
  • FIG. 4 is a diagram illustrating an example of a code string according to the first embodiment.
  • FIG. 5A is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 5B is a diagram illustrating an example of a conventional code string.
  • FIG. 6A is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 6B is a diagram illustrating an example of a conventional code string.
  • FIG. 7A is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 7B is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 7C is a diagram illustrating an example of a code string according to Embodiment 1.
  • FIG. 8 is a block diagram of the image coding apparatus according to Embodiment 1.
  • FIG. 9 is a flowchart of the image encoding process according to the first embodiment.
  • FIG. 10 is a flowchart of the key frame encoding process according to the first embodiment.
  • FIG. 11 is a flowchart of the image encoding process according to the first embodiment.
  • FIG. 12 is a flowchart of the image encoding process according to the first embodiment.
  • FIG. 13 is a block diagram of an image decoding apparatus according to the second embodiment.
  • FIG. 14 is a flowchart of image decoding processing according to the second embodiment.
  • FIG. 15 is a flowchart of the key frame decoding process according to the second embodiment.
  • FIG. 16 is a diagram illustrating a configuration of a system according to the third embodiment.
  • FIG. 17 is a diagram illustrating the operation of the system according to the third embodiment.
  • FIG. 18 is an overall configuration diagram of a content supply system that implements a content distribution service.
  • FIG. 19 is an overall configuration diagram of a digital broadcasting system.
  • FIG. 20 is a block diagram illustrating a configuration example of a television.
  • FIG. 21 is a block diagram illustrating a configuration example of an information reproducing / recording unit that reads and writes information from and on a recording medium that is an optical disk.
  • FIG. 22 is a diagram illustrating a structure example of a recording medium that is an optical disk.
  • FIG. 23A is a diagram illustrating an example of a mobile phone.
  • FIG. 23B is a block diagram illustrating a configuration example of a mobile phone.
  • FIG. 24 is a diagram showing a structure of multiplexed data.
  • FIG. 25 is a diagram schematically showing how each stream is multiplexed in the multiplexed data.
  • FIG. 26 is a diagram showing in more detail how the video stream is stored in the PES packet sequence.
  • FIG. 27 is a diagram illustrating the structure of TS packets and source packets in multiplexed data.
  • FIG. 28 shows the data structure of the PMT.
  • FIG. 29 is a diagram showing an internal configuration of multiplexed data information.
  • FIG. 30 shows the internal structure of stream attribute information.
  • FIG. 31 is a diagram illustrating steps for identifying video data.
  • FIG. 32 is a block diagram illustrating a configuration example of an integrated circuit that implements the moving picture coding method and the moving picture decoding method according to each embodiment.
  • FIG. 33 is a diagram illustrating a configuration for switching the driving frequency.
  • FIG. 34 is a diagram illustrating steps for identifying video data and switching between driving frequencies.
  • FIG. 35 is a diagram illustrating an example of a look-up table in which video data standards are associated with drive frequencies.
  • FIG. 36A is a diagram illustrating an example of a configuration for sharing a module of a signal processing unit.
  • FIG. 36B is a diagram illustrating another example of a configuration for sharing a module of a signal processing unit.
  • Patent Document 1 discloses a technique for improving encoding efficiency by storing an image for a long period of time and using it as a reference picture.
  • Non-Patent Document 1 provides long term reference pictures so that the technique described in Patent Document 1 can be used.
  • the decoded image is stored in the frame memory for a long time. This makes it possible to refer to the long-term reference picture for subsequent decoded images.
  • when a video is recorded for a long time, or when a plurality of videos are recorded, the technique of Patent Document 1 has a problem: when playback is started from an arbitrary time, or when playback is switched among the plurality of videos, either it is necessary to wait a long time before playback starts, or the encoding efficiency is not improved.
  • FIGS. 1A and 1B are schematic diagrams for explaining the problem to be solved in the present embodiment.
  • FIG. 1A shows a case where videos from a plurality of cameras are appropriately switched and played back by a playback terminal (decoding device).
  • FIG. 1A shows an example of code strings (bitstreams) 103A to 103C output from image encoding devices 102A to 102C, which encode, for example, images captured by cameras 101A to 101C.
  • the code strings 103A to 103C are composed of frames serving as decoding start points, called key frames 104 (KeyFrame), and frames other than the key frames, called normal frames 105.
  • the key frame 104 is a frame (intra prediction frame) that has been subjected to intra-frame prediction encoding.
  • in the image decoding apparatus 106, which is a portable device such as a tablet terminal or a smartphone, there is a restriction on the transmission band.
  • the image decoding apparatus 106 does not receive all of the code strings 103A to 103C transmitted from the plurality of image encoding apparatuses 102A to 102C at the same time; rather, it selects and plays only the code string of the video that the user wants to view (display or play).
  • when the image decoding apparatus 106 performs playback in the order of code string 103A, code string 103B, and code string 103C, it can switch between them only in units of key frames 104, as shown in FIG. 1B.
  • the key frame 104 is an intra prediction frame (I).
  • I in a picture indicates an I frame (intra prediction frame) in which intra prediction is used
  • P indicates a P frame in which unidirectional prediction in which only one frame is referenced is used.
  • Numerical values in parentheses in the picture indicate processing order (display order).
  • An arrow indicates a reference relationship: the picture at the tail of the arrow is used (referenced) in the prediction process of the picture at the head of the arrow.
  • frame P (3) cannot be decoded without frame P (2), frame P (2) cannot be decoded without frame P (1), and frame P (1) cannot be decoded without frame I (0). Therefore, the video cannot be reproduced starting from frame P (3).
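As a simple illustration of this dependency chain, the following sketch (a hypothetical helper of ours, not part of the patent) computes which frames must be decoded before playback can start at a given frame, assuming each non-key frame references only its immediate predecessor:

```python
# Illustrative sketch: with the chained structure
# I(0) <- P(1) <- P(2) <- P(3), every P frame depends on its immediate
# predecessor, so playback from P(3) requires decoding the whole chain
# back to the key frame I(0).

def frames_needed(start, key_frames):
    """Return the frames that must be decoded before frame `start`
    can be displayed, assuming each non-key frame references only the
    immediately preceding frame."""
    needed = []
    f = start
    while f not in key_frames:
        f -= 1
        needed.append(f)
    return list(reversed(needed))

# Playback from P(3) when only I(0) is a key frame:
print(frames_needed(3, {0}))  # -> [0, 1, 2]
```

Inserting more key frames shortens the chain, which is exactly the trade-off the following bullets discuss.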
  • an intra prediction frame used as the key frame 104 in the prior art generally has lower encoding efficiency than an inter (inter-screen) prediction frame. This is because an intra-prediction frame cannot compress a moving image using temporal continuity characteristics.
  • the inter prediction frame is a frame encoded using inter prediction with reference to another frame.
  • the ratio of intra prediction frames to the data amount of the code string 103 is very high. Therefore, it is important to increase the number of playback start points (key frames) while reducing the number of intra prediction frames.
  • the image encoding device 102 generates a code string 103 by encoding a long-time video such as a video taken by a 24-hour monitoring camera, for example.
  • the image decoding apparatus 106 is required to jump to a decoding start point of the code string 103 when reproducing the video. For example, if there is only one key frame 104 per hour, the image decoding apparatus 106 cannot reproduce the target scene unless it decodes, in the worst case, one hour of data. To prevent this, it is known to appropriately insert key frames 104 as arbitrary playback points.
  • since the key frame 104 using the conventional intra prediction frame has low encoding efficiency as described above, it is known that the ratio of intra prediction frame data to the total data amount becomes large, especially for video recorded for a long time.
  • the above problem is solved by inter-predicting key frames required in both cases to improve the encoding efficiency.
  • in the following, an image encoding method or image decoding method is described that can efficiently generate or decode a code string that shortens the time required for switching between multiple videos, or the time required to start playback of a long video at an arbitrary timing.
  • An image encoding method according to one aspect is an image encoding method for encoding a plurality of images, and includes a selection step of selecting a randomly accessible key frame from the plurality of images, and an encoding step of encoding the key frame using inter prediction with reference to a reference picture different from the key frame.
  • the encoding efficiency can be improved as compared with the case where the key frame is encoded by the intra prediction.
  • the image encoding method may further include an information encoding step of encoding information for specifying a key frame encoded using the inter prediction from the plurality of images.
  • the reference picture may be a picture that is not adjacent to the key frame in decoding order and display order.
  • the reference picture may be a long term reference picture.
  • in the selection step, a plurality of key frames including the key frame are selected from the plurality of images, and in the encoding step, a target key frame included in the plurality of key frames may be encoded with reference to another key frame among the plurality of key frames.
  • the reference picture may be an image acquired via a network.
  • the reference picture may be an image encoded by an encoding method different from the key frame.
  • in the encoding step, a background area within the key frame may be determined, information for specifying the reference picture may be encoded for the background area, and the image information of the background area need not be encoded.
  • the data amount of the encoded data in the background area can be reduced.
  • in the encoding step, a similarity between the key frame and the reference picture may be determined, and when the similarity is equal to or greater than a predetermined value, information for specifying the reference picture may be encoded and the image information need not be encoded.
  • the amount of encoded data can be reduced.
  • An image decoding method according to one aspect is an image decoding method for decoding a plurality of images, and includes a determination step of determining a randomly accessible key frame from the plurality of images, and a decoding step of decoding the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the encoding efficiency can be improved as compared with the case where the key frame is encoded by the intra prediction.
  • the image decoding method may further include an information decoding step of decoding information for specifying, from the plurality of images, a key frame encoded using inter prediction. In the determining step, it is determined based on this information whether the key frame was encoded using inter prediction, and in the decoding step, a key frame encoded using inter prediction may be decoded using inter prediction.
  • the reference picture may be a picture that is not adjacent to the key frame in decoding order and display order.
  • the reference picture may be a long term reference picture.
  • in the determination step, a plurality of key frames including the key frame are determined from the plurality of images, and in the decoding step, a target key frame included in the plurality of key frames may be decoded with reference to another key frame among the plurality of key frames.
  • the reference picture may be an image acquired via a network.
  • the reference picture may be an image encoded by an encoding method different from the key frame.
  • in the decoding step, information for specifying the reference picture may be decoded for a background area within the key frame, and the reference picture specified by the information may be output as the image of the background area.
  • the data amount of the encoded data in the background area can be reduced.
  • information for specifying the reference picture may be decoded, and the reference picture specified by the information may be output as the key frame.
  • the amount of encoded data can be reduced.
  • the image decoding method may further include an acquisition step of acquiring, from a storage device storing a plurality of images, an image that matches the shooting situation of a specified image, and a storage step of storing the acquired image as the reference picture.
  • the shooting situation may be the time when the image was shot, the place where the image was shot, or the weather when the image was shot.
  • An image encoding apparatus according to one aspect is an image encoding apparatus that encodes a plurality of images, and includes a selection unit that selects a randomly accessible key frame from the plurality of images, and an encoding unit that encodes the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the encoding efficiency can be improved as compared with the case where the key frame is encoded by the intra prediction.
  • An image decoding device according to one aspect is an image decoding device that decodes a plurality of images, and includes a determination unit that determines a randomly accessible key frame from the plurality of images, and a decoding unit that decodes the key frame using inter prediction that refers to a reference picture different from the key frame.
  • the encoding efficiency can be improved as compared with the case where the key frame is encoded by the intra prediction.
  • a frame may be described in other words as a picture or an image.
  • a frame (picture or image) to be encoded or decoded may be referred to as a current picture or an access frame.
  • various terms commonly used in the field of codec technology can also be used.
  • an image encoding device and an image encoding method capable of appropriately switching videos from a plurality of cameras, or of realizing high-efficiency encoding while reproducing a long-time video from an arbitrary point, will be described.
  • FIG. 4 is a diagram illustrating an example of a code string 250 output from the image encoding device 200.
  • FIG. 4 shows an example of a code string 250 in the present embodiment corresponding to FIG. 2 shown in the above-described conventional configuration.
  • the arrows indicate reference relationships.
  • the meanings of characters and numbers in each picture are the same as those in FIG.
  • the key frame 104 is an I frame, but in the present embodiment, a key frame other than the I frame may be set as the key frame 104.
  • the frame P (t) that is the key frame 104 is a P frame.
  • this frame P (t) makes a long term reference to the frame I (0).
  • a frame that can be used for long-term reference like the frame I (0) is called a long-term reference picture.
  • This long term reference picture is stored in the memory of the image encoding device and the image decoding device, and can be referred to at any time.
  • the image decoding apparatus can start playback from the frame P (t).
  • in this manner, a P frame using a long term reference is used as the key frame 104, so encoding efficiency can be improved. Further, the key frame 104 can be decoded by decoding only the long-term reference picture, so the number of images that need to be decoded when a P frame is set as the key frame 104 can be reduced.
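To make this saving concrete, the following sketch (illustrative functions of our own, with frame t assumed to be the key frame) compares how many frames must be decoded to start playback at frame t under chained references versus a key frame P(t) that references only the long term reference picture I(0):

```python
# Hypothetical comparison of decode counts when starting playback at
# frame t:
#  (a) chained references: I(0), P(1), ..., P(t) must all be decoded;
#  (b) P(t) is a key frame referencing only the long-term picture I(0):
#      only I(0) and P(t) itself are needed.

def decode_count_chained(t):
    # Every frame from I(0) up to P(t) must be decoded.
    return t + 1

def decode_count_long_term(t):
    # Only the long-term reference picture I(0) and P(t) itself.
    return 2 if t > 0 else 1

t = 8
print(decode_count_chained(t), decode_count_long_term(t))  # -> 9 2
```

The gap grows linearly with t, which is why a long-term-referencing P frame makes an attractive random access point for long recordings.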
  • FIG. 5A is a diagram showing an example of a code string 250 according to the present embodiment.
  • FIG. 5B is a diagram for comparison and shows an example of a conventional code string 103.
  • a long term reference picture is used when the background is the same video for a long time in a surveillance camera video or the like.
  • the normal frame 105 refers to the long term reference picture, not the immediately preceding frame.
  • in the configurations shown in FIGS. 2 and 4, if the frame P (2) cannot be decoded, the frames from frame P (3) to frame P (t − 1) cannot be decoded either.
  • the configuration shown in FIGS. 5A and 5B even if the frame P (2) cannot be decoded, the frame P (2) is not used for prediction of other frames. Therefore, if the frame I (0) has been correctly decoded, the image decoding apparatus can correctly decode the frames from the frame P (3) to the frame P (t ⁇ 1).
  • an intra prediction frame (key frame 104), which is a long-term reference picture, is used periodically.
  • the long term reference picture can be updated as appropriate, so that it is possible to cope with a change in a still area such as a background.
  • in the configuration of FIG. 5A, an inter prediction frame P (t) (key frame 104) that refers only to the long term reference picture I (0) is used instead of an intra prediction frame. Thereby, encoding efficiency can be improved similarly to the structure shown in FIG.
  • the inter prediction frame generally has better encoding efficiency when referring to a temporally close frame. Therefore, by using only the P frame, it is possible to reduce the processing amount, reduce the delay, and suppress the deterioration of the encoding efficiency.
  • B frames (bi-directional prediction frames) may also be used.
  • FIG. 6A is a diagram illustrating an example of a code string 250 according to the present embodiment.
  • FIG. 6B is a diagram for comparison, and shows an example of a conventional code string 103.
  • the normal frame 105 refers to the long-term reference picture and the immediately preceding frame.
  • in the conventional configuration, the frame B (t), which is a long-term reference picture, cannot be set as the key frame 104 and is instead set as a normal frame 105 in order to keep the B frames continuous. Even in this case, in order to increase the number of key frames 104, it is necessary to use an intra prediction frame I as the key frame 104, as in FIG. 2 or FIG. 5B.
  • the frame P (t) that is an inter prediction frame is used as the key frame 104.
  • accessibility can be improved while maintaining encoding efficiency.
  • FIG. 7A is a schematic diagram for explaining processing in the image encoding device and the image decoding device according to the present embodiment.
  • the code string shown in FIG. 7A has the same configuration as the code string 250 shown in FIG. Therefore, the frames I (0), P (8), and I (16) are treated as the key frame 104.
  • the image decoding apparatus can start decoding after a transmission delay. Further, the image decoding apparatus can decode the frame P (8) after decoding the frame I (0), or can start decoding from the frame I (16). Also, when it is desired to create playback points at a high frequency, it is possible to increase the encoding efficiency by inserting a key frame 104 of a P frame as in the frame P (8).
  • the key frame 104 may be stored in the storage 400.
  • the configuration of the code string is the same as that in FIG. 7A.
  • the frame I (16) is the same image as the frame I (0).
  • the code string information regarding the frame I (16) may be parameter information indicating the same as the frame I (0). Thereby, encoding efficiency can be further improved.
  • a frame I (0) indicating background information is stored in the storage 400 provided in the image decoding apparatus.
  • the image decoding apparatus acquires (for example, downloads via the network) the frame I (0) and stores it in the storage.
  • the image decoding apparatus can decode the frame P (8) by referring to the image of the frame I (0) stored in the storage. Thereby, it is possible to add a reproduction point (add a key frame) while improving encoding efficiency.
  • by sharing a reference image (for example, a background image) in advance, the videos to be switched between can be switched, and high-quality video switching can be realized even when a narrow-band network is used.
  • FIG. 7C illustrates a case where a plurality of key frame 104 images, or key frame reference pictures that are reference images of the key frames 104, are stored in the storage 400, or can be accessed via a cloud or a network.
  • a B frame is set as the key frame 104.
  • the image decoding apparatus decodes the frame B (8), which is the key frame 104, using a plurality of key frame reference pictures stored in the storage 400 or key frame reference pictures acquired via a network. Thereby, since the image decoding apparatus can start decoding from the frame B (8), switching among a plurality of camera videos or jump playback can be realized.
  • FIGS. 7A to 7B show the transmission delay when data is transmitted from the image encoding device to the image decoding device. For example, when the data is already in storage accessible from the image decoding device, no transmission delay occurs.
  • a frame described as a B frame may be encoded as a P frame in relation to its surroundings, and a frame described as a P frame may be encoded as a B frame.
  • the storage 400 may not be used.
  • the B frame which is the key frame 104 may refer to only a plurality of past key frames 104, for example.
  • the normal frame 105 refers only to the immediately preceding frame, but the previous two frames may be referred to, or bi-directional prediction or multiple prediction may be used.
  • P frames are used for at least some of the key frames for which I frames are used in the prior art.
  • the key frame is a frame that can be randomly accessed, in other words, a frame that can be reproduced (decoded or displayed) by the image decoding apparatus.
  • information indicating which image is a key frame among a plurality of images included in a video is included in the code string.
  • key frames are inter-predicted by referring only to key frame reference pictures.
  • a key frame reference picture is a picture that is not adjacent to a processing target key frame in decoding order (encoding order) and display order.
  • the key frame reference picture is a long term reference picture.
  • the key frame reference picture is a decoded (encoded) key frame or an image obtained via a network.
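The key points above can be captured as a small data model. The following is a minimal illustrative sketch (names such as FrameInfo and random_access_points are ours, not the patent's): each frame carries a key-frame flag, and inter-coded key frames are marked so a decoder knows a key frame reference picture is required.

```python
# Sketch under assumed names: signalling, in the code string, which
# images are key frames and whether each key frame was coded with intra
# prediction or with inter prediction from a key frame reference picture.
from dataclasses import dataclass

@dataclass
class FrameInfo:
    index: int
    is_key_frame: bool
    uses_inter: bool = False  # only meaningful for key frames

def random_access_points(stream):
    """A decoder may start (randomly access) at any key frame."""
    return [f.index for f in stream if f.is_key_frame]

stream = [
    FrameInfo(0, True),                  # intra key frame I(0)
    FrameInfo(1, False),                 # normal frame
    FrameInfo(8, True, uses_inter=True)  # inter key frame P(8)
]
print(random_access_points(stream))  # -> [0, 8]
```

A real code string would carry this as bitstream syntax rather than objects; the sketch only mirrors the information content described above.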
  • FIG. 8 is a block diagram showing the configuration of the image coding apparatus 200 according to the present embodiment.
  • the image encoding apparatus 200 generates a code string 250 by encoding the moving image data 251.
  • the image coding apparatus 200 includes a prediction unit 201, a subtraction unit 202, a transform quantization unit 203, a variable length coding unit 204, an inverse quantization inverse transform unit 205, an addition unit 206, and a prediction control unit 207.
  • the prediction unit 201 generates a prediction image 257 based on the target image included in the moving image data 251 and the reference image 256 selected by the selection unit 208, and the generated prediction image 257 is subtracted by the subtraction unit 202 and the addition unit. It outputs to 206.
  • the prediction unit 201 outputs a prediction parameter 258 that is a parameter used to generate the predicted image 257 to the variable length coding unit 204.
  • the subtraction unit 202 calculates a difference signal 252 that is a difference between the target image included in the moving image data 251 and the predicted image 257, and outputs the calculated difference signal 252 to the transform quantization unit 203.
  • the transform quantization unit 203 transform-quantizes the difference signal 252 to generate a quantized signal 253, and outputs the generated quantized signal 253 to the variable length coding unit 204 and the inverse quantization inverse transform unit 205.
  • the variable length coding unit 204 generates a code string 250 by performing variable length coding on the quantized signal 253 and the prediction parameter 258 output from the prediction unit 201 and the prediction control unit 207.
  • the prediction parameter 258 includes information indicating a used prediction method, a prediction mode, and a reference picture.
  • the inverse quantization inverse transform unit 205 generates a decoded differential signal 254 by performing inverse quantization and inverse transform on the quantized signal 253, and outputs the generated decoded differential signal 254 to the adder 206.
  • the adding unit 206 generates a decoded image 255 by adding the predicted image 257 and the decoded difference signal 254, and outputs the generated decoded image 255 to the frame memory 212.
  • the frame memory 212 includes a key frame memory 209 in which key frame reference pictures, which are long term reference pictures for key frames, are stored; a neighboring frame memory 210 in which other already decoded, referenceable images are stored; and an in-plane frame memory 211 in which partially decoded portions of the image being encoded are stored. These memories are controlled by the prediction control unit 207, and the reference image necessary for creating the predicted image 257 is output to the selection unit 208.
  • the prediction control unit 207 determines, based on the moving image data 251, which memory's stored reference image is to be used.
  • the key frame memory 209 may store not only the decoded image 255 but also image data 259 separately obtained from the outside as a reference image.
  • the frame memory 212 includes three types of memories; however, in an actual implementation, there is no need to provide separate memory spaces, and, for example, all reference images may be stored in the same memory. That is, the frame memory 212 may be configured to output any reference image based on an instruction from the prediction control unit 207.
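The single-memory arrangement just described can be sketched as follows (a hedged illustration with names of our own choosing; the patent does not prescribe this interface): one store keyed by the logical memory kind, with the prediction control deciding which entry is handed to the selection unit.

```python
# Illustrative sketch of frame memory 212: key frame reference pictures,
# neighbouring decoded frames, and partially decoded in-plane data all
# live in one physical store, distinguished only by a logical "kind".

class FrameMemory:
    def __init__(self):
        self.store = {}  # (kind, key) -> image data

    def put(self, kind, key, image):
        assert kind in ("key", "neighbor", "inplane")
        self.store[(kind, key)] = image

    def get(self, kind, key):
        # The prediction control unit would call this to route the
        # chosen reference image to the selection unit.
        return self.store[(kind, key)]

mem = FrameMemory()
mem.put("key", 0, "decoded I(0)")       # long-term key frame reference
mem.put("neighbor", 7, "decoded P(7)")  # recently decoded frame
print(mem.get("key", 0))  # -> decoded I(0)
```

The point of the sketch is only that one address space suffices: the three "memories" are a logical partition, not necessarily three hardware buffers.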
  • for these processes, the methods described in Non-Patent Document 1 can be used. Note that other moving image encoding methods may also be used.
  • FIG. 9 is a flowchart of operations related to the prediction control unit 207, the selection unit 208, and the frame memory 212.
  • the prediction control unit 207 selects which images among the plurality of images included in the moving image data 251 are set as key frames (S101). Specifically, the prediction control unit 207 sets a frame located at an access point, which is a randomly accessible point, as a key frame. For example, the frequency of access (switching or jumping) is set in advance, and the prediction control unit 207 sets a key frame at intervals according to this frequency. Alternatively, the prediction control unit 207 may set as a key frame an image in which an object has moved greatly (the motion of the image is large), or in which an object designated in advance appears. The prediction control unit 207 may also combine both methods.
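The two selection rules of step S101 can be sketched together. In this illustration the motion metric and threshold are assumptions of ours, not values given in the patent:

```python
# Hedged sketch of key frame selection (S101): a key frame is set at a
# fixed interval (the preset access frequency), and additionally
# whenever an illustrative per-frame motion measure exceeds a threshold.

def select_key_frames(motion, interval, motion_threshold):
    """motion[i] is some motion measure for frame i."""
    keys = set()
    for i in range(len(motion)):
        if i % interval == 0 or motion[i] > motion_threshold:
            keys.add(i)
    return sorted(keys)

# Frames 0..9, a key frame every 4 frames, plus frame 6 where motion spikes:
print(select_key_frames([0, 1, 1, 2, 1, 1, 9, 1, 1, 1], 4, 5))
# -> [0, 4, 6, 8]
```

Combining both rules, as the text suggests, simply unions the periodic points with the content-driven ones.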
  • the image encoding device 200 may acquire a key frame reference picture from another device or the like and store it in the key frame memory 209. In this case, the image encoding device 200 compares the moving image data 251 with a plurality of key frame reference pictures stored in the other device, acquires a key frame reference picture having a high degree of similarity, and stores the acquired key frame reference picture in the key frame memory 209.
  • the image encoding device 200 encodes information (information for specifying a key frame) indicating which image is a key frame, thereby generating a code string 250 including this information.
  • information for specifying a key frame specifies a conventional intra-predicted key frame and an inter-predicted key frame according to the present embodiment. That is, this information indicates whether each key frame is a key frame encoded using intra prediction or a key frame encoded using inter prediction.
  • the image encoding device 200 performs an encoding process for each image.
  • the image encoding device 200 determines whether the target image that is the image to be encoded included in the moving image data 251 is a key frame (S102). When the target image is a key frame (YES in S102), the image encoding device 200 encodes the target image by the key frame encoding process (S103).
  • FIG. 10 is a flowchart showing the operation of step S103.
  • the image encoding device 200 determines whether there is a key frame reference picture similar to the target image by comparing the target image with the key frame reference pictures stored in the key frame memory 209 (S201).
  • being similar means that the difference is less than a predetermined value.
  • when there is a key frame reference picture similar to the target image (YES in S202), the prediction unit 201 inter-codes the target image with reference to the similar key frame reference picture stored in the key frame memory 209 (S203).
  • otherwise (NO in S202), the prediction unit 201 performs intra prediction encoding on the target image (S204).
  • the image encoding device 200 decodes the key frame after encoding, and stores the obtained decoded image in the key frame memory 209 as a key frame reference picture (S205).
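The key frame encoding flow of FIG. 10 (S201–S205) can be sketched as below, with images reduced to simple numeric stand-ins; the `difference` metric and the similarity threshold are illustrative assumptions, not values from the patent.

```python
def encode_key_frame(target, key_frame_memory, threshold=10.0,
                     difference=lambda a, b: abs(a - b)):
    """Encode one key frame; images are toy stand-ins (numbers here).

    'Similar' means the difference to a stored key frame reference
    picture is below a predetermined value (S201/S202). Returns the
    chosen mode and, for inter coding, the reference used (S203/S204),
    then stores the key frame back as a reference picture (S205).
    """
    best_ref, best_diff = None, threshold
    for ref in key_frame_memory:                # compare with stored references
        d = difference(target, ref)
        if d < best_diff:
            best_ref, best_diff = ref, d
    if best_ref is not None:                    # YES in S202 -> inter (S203)
        mode = ("inter", best_ref)
    else:                                       # NO in S202 -> intra (S204)
        mode = ("intra", None)
    key_frame_memory.append(target)             # S205: store as reference
    return mode
```

A near-duplicate of a stored reference is inter-coded against it; a dissimilar frame falls back to intra coding, and both end up in the key frame memory.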
  • in the above description, the key frame reference pictures stored in the key frame memory 209 are compared with the target image; however, the comparison may also be performed with images that can be acquired from the outside and stored in the key frame memory 209.
  • the image coding apparatus 200 may determine whether there is an image similar to the target image from reference images that can be acquired via a network.
  • the prediction unit 201 performs inter prediction encoding using the reference image.
  • in this case, the code string 250 includes, and transmits, information indicating that an image obtainable via a network or the like is used as a reference image, and information specifying the reference image to be used. In this way, the image decoding device can be notified of the information specifying the similar reference image while further improving the encoding efficiency.
  • the image decoding apparatus can acquire the reference image via the network using this information at the time of decoding, and can decode the image using the reference image.
  • in the above description, intra prediction encoding is performed when there is no similar key frame reference picture.
  • in addition, intra prediction encoding may be performed at a predetermined frequency.
  • although all key frames are used as key frame reference pictures here, only some key frames may be used as key frame reference pictures.
  • when the target image is not a key frame but a normal frame (NO in S102), the image encoding device 200 performs a normal encoding process of encoding the target image with reference to neighboring pictures (S104). That is, the image encoding device 200 searches the frame memory 212 for a reference image with high compression efficiency, without considering the decoding order, and performs inter prediction encoding that refers to the obtained reference image. In other words, unlike the key frame encoding process described above, this encoding process does not take accessibility into account. Note that in an environment where low delay is required, it is not always necessary to select the method with the best coding efficiency; for example, the immediately preceding frame may simply be referenced.
  • when the input of the moving image data 251 ends (YES in S105), the image encoding device 200 ends the processing. On the other hand, when the input of the moving image data 251 continues (NO in S105), the image encoding device 200 performs the processing from step S102 onward on the next image.
  • the code string 250 generated by the image encoding device 200 includes information indicating which images are key frames or access points (information for specifying key frames). Further, the code string 250 includes information indicating whether intra prediction or inter prediction using a key frame reference picture is used for each key frame, and information indicating the key frame reference picture used. The code string 250 may include information indicating whether the key frame reference picture is an image included in the code string 250 or an image acquired via a network. Further, when a key frame reference picture is acquired via a network, identification information for identifying the key frame reference picture, or information indicating its storage location, is included so that the image decoding device can acquire the key frame reference picture. Note that at least part of this information may be notified to the image decoding device by a signal different from the code string 250.
  • the image encoding device 200 may perform the following processing on video, such as video from a fixed camera, in which a region that can be regarded as background can be distinguished from a foreground region containing moving objects.
  • FIG. 11 is a flowchart showing the flow of processing in this case.
  • the image encoding device 200 compares each area included in the target image with a background image, and determines whether each area is a background area or a foreground area (S301). Specifically, the image encoding device 200 compares a frame determined in advance to be a background image with the target image, determines an area whose similarity to the background image is at least a predetermined value to be a background area, and determines an area whose similarity is less than the predetermined value to be a foreground area. Alternatively, the image encoding device 200 calculates the average change amount between the target image and an image from a certain time earlier, determines an area with a small average change amount to be a background area, and determines an area with a large change amount to be a foreground area.
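The area classification in step S301 can be sketched as follows; the blocks are numeric stand-ins for co-located image areas, and the similarity measure and threshold are illustrative assumptions.

```python
def classify_regions(target_blocks, background_blocks, similarity_threshold=0.8,
                     similarity=lambda a, b: 1.0 - abs(a - b)):
    """Label each co-located block 'background' or 'foreground' (S301).

    A block whose similarity to the background image is at or above a
    predetermined value is treated as background; otherwise foreground.
    Real code would compare pixel areas rather than scalars.
    """
    labels = []
    for tgt, bg in zip(target_blocks, background_blocks):
        if similarity(tgt, bg) >= similarity_threshold:
            labels.append("background")
        else:
            labels.append("foreground")
    return labels
```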
  • when the target area, which is the processing target area included in the target image, is a background area (YES in S302), the image encoding device 200 encodes only information specifying the key frame reference picture that is the background image, for example, the ID of the key frame reference picture, and does not encode the image information (such as a difference signal) of the background area (S303). For example, the image encoding device 200 uses a prediction mode that indicates only the background image. Thereby, the amount of information can be greatly reduced. Specifically, the image encoding device 200 designates the background image (key frame reference picture) as the reference image in skip mode.
  • the image encoding device 200 encodes only the information specifying the background image without encoding the difference signal between the target image and the background image. Thereby, the image decoding apparatus outputs (displays) the background image designated by the information as it is as the image of the background area.
  • the normal encoding process is predictive encoding generally used in moving image coding, such as performing prediction using a nearby reference image and encoding a difference signal, or encoding a difference signal with respect to a background image; it is similar to, for example, the processing in step S104 shown in FIG. 9.
  • the foreground area does not have to be in front of the background area.
  • here, the background refers to a region of a certain extent that does not change more than a certain amount, and the foreground may refer to a region that varies more than a certain amount.
  • since the background region has a small amount of change, it indicates, for example, a region that has little influence on the subjective image quality even if the difference signal referred to in moving image coding is not sent.
  • the foreground area indicates a region that differs from the background image used for comparison or from the image a certain time earlier, and is a region that would differ greatly from the original image if the difference signal referred to in moving image coding were not sent.
  • by not encoding the difference for subtle movements due to wind or the like, the coding efficiency can be improved while suppressing the reduction in subjective image quality.
  • by preparing a plurality of images that can be referred to via a network as key frame reference pictures (background images), the area that can be determined to be a background area increases. Thereby, video can be recorded for a long time with a smaller code amount. Also, video can be transmitted even when a very narrow-band network, such as a wireless environment, is used.
  • in the above description, the background and foreground are determined for each area within one frame and the processing is switched accordingly.
  • alternatively, the background and foreground may be determined for each frame and the processing switched accordingly. In this case, if the entire frame can be determined to be background, the code amount can be greatly reduced. If the frame is determined not to be background, a normal encoding method may be used. In this way, the processing of the image encoding device 200 can be simplified.
  • FIG. 12 is a flowchart showing an operation flow in this case.
  • the image encoding device 200 compares the target frame with the key frame reference pictures stored in the key frame memory 209 or obtainable via a network, and determines whether a key frame reference picture, for example a background image, similar to the target frame exists (S401). Specifically, the image encoding device 200 determines the similarity between the target image and each key frame reference picture, and determines that a key frame reference picture whose similarity is at least a predetermined value is similar to the target image.
  • when a key frame reference picture similar to the target image exists (YES in S402), the image encoding device 200 does not encode the image information (difference signal or the like) of the target image, and encodes only information specifying the key frame reference picture similar to the target image, for example, the ID of that key frame reference picture (S403).
  • otherwise (NO in S402), the image encoding device 200 encodes the target image using intra prediction (S404), and sets the target image as a key frame reference picture (S405). This image can then be used when encoding subsequent images, which can further reduce the information amount of the code string 250 and thus improve the encoding efficiency.
  • at this time, the image encoding device 200 may delete key frame reference pictures that have been referred to so far from the key frame memory 209. Thereby, an increase in the capacity of the key frame memory 209 can be suppressed.
  • the image encoding device 200 may perform normal encoding processing instead of steps S404 and S405. This process is, for example, the same process as step S104 shown in FIG. Even in this case, when a key frame reference picture similar to the target image exists, encoding efficiency can be improved by encoding only the ID indicating the key frame reference picture.
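A minimal sketch of the frame-level flow of FIG. 12 (S401–S405), again with numeric stand-ins for images; the similarity measure and threshold are illustrative assumptions.

```python
def encode_frame_as_background(target, key_frame_memory, similarity_threshold=0.8,
                               similarity=lambda a, b: 1.0 - abs(a - b)):
    """Frame-level variant (FIG. 12, S401-S405), with toy images.

    If a stored key frame reference picture is similar enough to the
    whole target frame, encode only that picture's ID (S403); otherwise
    intra-encode the frame and register it as a new reference (S404/S405).
    """
    for ref_id, ref in enumerate(key_frame_memory):        # S401
        if similarity(target, ref) >= similarity_threshold:
            return {"mode": "reference_only", "ref_id": ref_id}    # S403
    key_frame_memory.append(target)                        # S405
    return {"mode": "intra", "ref_id": len(key_frame_memory) - 1}  # S404
```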
  • the processing of FIG. 11 or FIG. 12 may be performed on all the images included in the moving image data 251, or on only some of the images. For example, these processes may be performed only on key frames.
  • variable length coding unit 204 may use arithmetic coding, or may use a coding table designed according to entropy.
  • the image encoding device 200 encodes a plurality of images.
  • the image encoding device 200 selects a randomly accessible key frame 104 from a plurality of images.
  • the image encoding device 200 encodes the key frame 104 using inter prediction that refers to a key frame reference picture different from the key frame 104.
  • inter prediction refers to a key frame reference picture different from the key frame 104.
  • the image encoding device 200 selects a plurality of key frames from a plurality of images, and encodes target key frames included in the plurality of key frames with reference to other key frames among the plurality of key frames. To do. Thereby, since the number of images required for decoding the key frame can be reduced, the time until the video is displayed at the time of random reproduction can be reduced.
  • this embodiment describes an image decoding method that realizes the situation illustrated in FIG. 1A and the like described in the first embodiment, and that correctly decodes the code string 250 illustrated in FIGS. 4, 5A, and 6A.
  • FIG. 13 is a block diagram of image decoding apparatus 300 according to the present embodiment.
  • the image decoding apparatus 300 shown in FIG. 13 generates a decoded image 350 by decoding the code string 250.
  • the code string 250 is, for example, the code string 250 generated by the image encoding device 200 described above.
  • the image decoding apparatus 300 includes a variable length decoding unit 301, an inverse quantization inverse transform unit 302, an addition unit 303, a prediction control unit 304, a selection unit 305, a prediction unit 306, and a frame memory 310.
  • the variable length decoding unit 301 acquires the quantized signal 351 and the prediction parameter 355 by performing variable length decoding on the code string 250, outputs the quantized signal 351 to the inverse quantization inverse transform unit 302, and outputs the prediction parameter 355 to the prediction control unit 304 and the prediction unit 306.
  • the inverse quantization inverse transform unit 302 generates a decoded differential signal 352 by performing inverse quantization and inverse transform on the quantized signal 351, and outputs the generated decoded differential signal 352 to the adder 303.
  • the prediction control unit 304 determines a reference image used for prediction processing based on the prediction parameter 355. This process will be described later.
  • the prediction unit 306 generates a predicted image 354 using information necessary for generating the predicted image 354, such as the prediction mode included in the prediction parameter 355, and the reference image 353 output from the selection unit 305, and outputs the generated predicted image 354 to the adding unit 303.
  • the adding unit 303 generates a decoded image 350 by adding the predicted image 354 and the decoded difference signal 352.
  • the decoded image 350 is displayed on, for example, a display unit.
  • the decoded image 350 is stored in the frame memory 310.
  • the frame memory 310 includes a key frame memory 307 that stores key frame reference pictures, a neighboring frame memory 308 that stores reference images temporally close to the decoding target image used for normal prediction, and an in-plane frame memory 309 that stores already-decoded image signals within the decoding target image. As with the frame memory 212 of the first embodiment, there is no need to provide separate memory spaces for the three types of frame memories; for example, all reference images may be stored in the same memory. That is, the frame memory 310 may have any configuration that can output any reference image based on an instruction from the prediction control unit 304.
  • the key frame memory 307 may store not only the decoded image 350 but also image data 356 separately acquired from the outside as a reference image.
  • FIG. 14 is a flowchart showing a processing flow of the image decoding apparatus 300.
  • the image decoding apparatus 300 acquires encoded data of a target image, which is a decoding target frame, from the code string 250 (S501). Next, the image decoding apparatus 300 determines whether the target image that is a frame to be decoded is a key frame (S502). When the target image is a key frame (YES in S502), the image decoding device 300 performs a key frame decoding process (S503). The key frame decryption process will be described in detail later.
  • the image decoding apparatus 300 may determine whether the target image is a key frame (whether it is an access point) using the information for specifying key frames included in the code string 250. Specifically, this information indicates whether or not the target image is a key frame, or indicates which of the plurality of images are key frames. Further, this information indicates whether each key frame is a key frame encoded using intra prediction or a key frame encoded using inter prediction. For example, this information is parameter information included in the header information of the code string 250. Alternatively, this information may be recorded, in a system different from the code string 250, in a field in which information shared with the image encoding device is recorded.
  • in the former case, the image decoding apparatus 300 can make the determination at high speed without accessing other information.
  • on the other hand, when the image decoding apparatus 300 refers to predetermined field information as in the latter case, the information need not be included in the code string 250, so the encoding efficiency can be further improved.
  • when the target image is not a key frame (NO in S502), the image decoding apparatus 300 performs a normal decoding process (S504).
  • the normal decoding process is not described in detail here; it is similar to the decoding method for inter prediction frames described in Non-Patent Document 1.
  • specifically, the image decoding apparatus 300 acquires the reference image indicated by the information included in the code string 250 from the frame memory 310, generates a predicted image 354 from the reference image based on the prediction parameter 355 included in the code string 250, and performs the decoding process using the predicted image 354.
  • when the data of the target image is not the end of the code string 250 (NO in S505), the image decoding apparatus 300 acquires the encoded data of the next image (S501). On the other hand, when the data of the target image is the end of the code string 250 (YES in S505), the image decoding apparatus 300 ends the decoding process.
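The top-level decoding loop of FIG. 14 (S501–S505) amounts to a dispatch on the key frame flag carried in the stream. The sketch below uses placeholder decoder callbacks and a dict-based stand-in for the code string; the field name `is_key_frame` is an illustrative assumption.

```python
def decode_stream(code_string, decode_key, decode_normal):
    """Top-level decode loop of FIG. 14 (S501-S505), sketched abstractly.

    `code_string` is an iterable of per-frame encoded data, each item a
    dict with an 'is_key_frame' flag (the key-frame-specifying
    information carried in the stream). The decoder callbacks stand in
    for the key frame decoding process and the normal decoding process.
    """
    decoded = []
    for data in code_string:                 # S501 / S505 loop
        if data["is_key_frame"]:             # S502
            decoded.append(decode_key(data))     # S503
        else:
            decoded.append(decode_normal(data))  # S504
    return decoded
```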
  • FIG. 15 is a flowchart of the key frame decoding process.
  • the key frame is an intra prediction frame or an inter prediction frame that refers to a key frame reference picture. Therefore, the image decoding apparatus 300 first determines whether the key frame that is the target image is an inter prediction frame that refers to the key frame reference picture (S511). Specifically, the header information of the target image included in the code string 250 includes information indicating a key frame reference picture or information indicating whether inter prediction is used. The image decoding apparatus 300 decodes the information and performs the above determination based on the obtained information.
  • when the target image is an inter prediction frame that refers to a key frame reference picture (YES in S511), the image decoding apparatus 300 acquires the key frame reference picture (S512). Specifically, the image decoding apparatus 300 acquires the key frame reference picture via a network, for example. Alternatively, the image decoding device 300 acquires, as the key frame reference picture, a key frame that has already been decoded.
  • for example, the code string 250 includes information indicating whether the key frame reference picture is a picture acquired via a network or an already-decoded picture, and information for specifying the key frame reference picture.
  • the image decoding apparatus 300 acquires a key frame reference picture using this information. Further, the image decoding device 300 may acquire a key frame reference picture in advance via a network.
  • the image decoding apparatus 300 decodes the target image by the inter prediction decoding process using the key frame reference picture (S513).
  • when the code string 250 includes only information (for example, an ID) specifying a key frame reference picture (when no difference signal is included), the image decoding apparatus 300 decodes, for the background area, the information specifying the key frame reference picture, and outputs the key frame reference picture specified by that information as-is as the decoded image of the background area.
  • the code string 250 includes information indicating that the reference image is output as it is or that a differential signal is not included.
  • the image decoding apparatus 300 refers to the information and determines whether or not to output the key frame reference picture as it is.
  • in this case, the image decoding apparatus 300 decodes the information specifying the key frame reference picture, and outputs the key frame reference picture specified by that information as-is as the decoded image of the target image.
  • on the other hand, when the target image is not an inter prediction frame that refers to a key frame reference picture (NO in S511), the image decoding apparatus 300 decodes the target image by intra prediction decoding (S514).
  • the image decoding apparatus 300 stores the decoded image of the key frame in the key frame memory 307 as a key frame reference picture (S515).
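The key frame decoding flow of FIG. 15 (S511–S515) can be sketched as follows. Header fields are modeled as dict keys, images as numbers, and `fetch_reference`/`intra_decode` as caller-supplied placeholders; all of these names are illustrative assumptions, not from the patent.

```python
def decode_one_key_frame(data, key_frame_memory, fetch_reference, intra_decode):
    """Key frame decoding of FIG. 15 (S511-S515), sketched with callbacks.

    `data` carries the flags decoded from the key frame's header:
    whether inter prediction against a key frame reference picture is
    used, which reference to use, and whether only the reference ID was
    coded (no difference signal).
    """
    if data["uses_key_frame_reference"]:            # S511
        ref = fetch_reference(data["ref_id"])       # S512 (network or memory)
        if data.get("reference_only"):              # no difference signal coded
            image = ref                             # output reference as-is
        else:
            image = ref + data["residual"]          # S513: inter prediction decode
    else:
        image = intra_decode(data)                  # S514: intra decode
    key_frame_memory.append(image)                  # S515: store as reference
    return image
```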
  • note that the image decoding device 300 may select, for example, representative images or images at a predetermined cycle from the obtained decoded images as key frame reference pictures, and accumulate these images in storage on the network so that the image encoding device can access them.
  • variable length decoding unit 301 may use arithmetic decoding or may use a decoding table designed according to entropy. That is, a method associated with the image encoding device paired with the image decoding device 300 may be used.
  • the key frame reference picture shared by the image encoding device 200 and the image decoding device 300 can be modified as follows.
  • the key frame reference picture may be an image encoded by an image encoding method different from the image included in the code string 250 (the image to be encoded or decoded). That is, the key frame reference picture may be an image encoded by an encoding method different from that of the processing target key frame.
  • for example, the key frame reference pictures may be images encoded by another encoding method such as MPEG-2, MPEG-4 AVC (H.264), JPEG, or JPEG 2000.
  • the image encoding device 200 or the image decoding device 300 acquires this key frame reference picture via a network. Thereby, the communication load between the image coding apparatus 200 or the image decoding apparatus 300 and the network can be reduced. Further, the capacity of the storage (key frame memory 209 or 307) included in the image encoding device 200 or the image decoding device 300 can be reduced. Furthermore, since image data distributed in the world including the Internet can be used as a key frame reference picture, the encoding efficiency can be further improved.
  • an image encoded by a different image encoding method is not limited to an image acquired via a network, and may be an image to be encoded (decoded).
  • that is, the image encoding device 200 or the image decoding device 300 may have a function of decoding images encoded by an encoding method different from that of the images included in the code string 250, acquire an image encoded by that different encoding method, decode it, and store the obtained decoded image as a key frame reference picture.
  • the key frame reference picture may be an image with a resolution different from that of the image included in the code string 250 (the image to be encoded or decoded). That is, the key frame reference picture may be an image having a resolution different from that of the key frame to be processed.
  • for example, the resolution of the key frame reference picture may be 3840 × 2160. Since the image encoding device 200 and the image decoding device 300 according to the present embodiment perform inter prediction processing on key frames using key frame reference pictures, a higher-resolution key frame reference picture can further improve the efficiency of inter prediction. Thereby, the data amount of the difference signal to be transmitted is reduced, so the coding efficiency can be improved.
  • the key frame reference picture may be a still image.
  • a large number of photographic images on the network such as the Internet can be used as the key frame reference picture.
  • since photographic images have various resolutions, supporting different resolutions increases the number of images that can be used as key frame reference pictures. Thereby, encoding efficiency can be improved.
  • the resolution of the key frame reference picture may be smaller than the resolution of the image to be encoded or decoded. Because there are many images on the network as described above, preparing images that can be used for prediction even when the resolutions differ makes it possible to reduce the difference image and increase the encoding efficiency. Further, as illustrated in FIG. 7C, when referring to a plurality of images, the image encoding device and the image decoding device can generate predicted images more similar to the key frame by referring to images with different resolutions. Thereby, encoding efficiency can be further improved.
  • the image decoding apparatus 300 decodes a plurality of images.
  • the image decoding apparatus 300 determines a randomly accessible key frame 104 from a plurality of images.
  • the image decoding apparatus 300 decodes the key frame 104 using inter prediction that refers to a key frame reference picture different from the key frame 104.
  • inter prediction refers to a key frame reference picture different from the key frame 104.
  • the image decoding apparatus 300 discriminates a plurality of key frames from a plurality of images, and decodes target key frames included in the plurality of key frames with reference to other key frames among the plurality of key frames. Thereby, since the number of images required for decoding the key frame can be reduced, the time until the video is displayed at the time of random reproduction can be reduced.
  • FIG. 16 is a schematic diagram showing a system 500 according to the present embodiment.
  • the system 500 has a database 501 that can be accessed from the image decoding apparatus 300.
  • the database 501 stores a plurality of images g1t1 to gNtM that are key frame reference pictures including the background image described above. Further, as shown in FIG. 16, the database 501 may store a plurality of sets d0 to dL each including a plurality of images.
  • the image decoding device 300 is, for example, the image decoding device 300 described in the second embodiment.
  • the system 500 includes a control unit 502. Based on a trigger (signal) transmitted from the image decoding device 300, or a trigger signal obtained from the time (a time signal included in the system 500), the control unit 502 transmits a specific image or image group, selected from the plurality of key frame reference pictures held in the database 501, to the image decoding device 300.
  • the transmitted image or image group is stored in a data buffer (key frame memory 307) that is a local storage of the image decoding apparatus 300.
  • the image stored in the data buffer is used for key frame inter prediction as a key frame reference picture.
  • FIG. 17 is a diagram illustrating an operation flow of the system 500 and the image decoding apparatus 300 according to the present embodiment.
  • the image decoding apparatus 300 transmits a trigger signal (control signal) for acquiring an image necessary or likely to be necessary for decoding to the system 500 (S601).
  • the system 500 receives the trigger signal transmitted from the image decoding device 300, selects an image that is necessary or likely to be necessary from the plurality of stored images (S602), and transmits the selected image to the image decoding device 300 (S603).
  • the image decoding device 300 receives the image transmitted from the system 500 and stores it in the local storage (S604). Accordingly, the image decoding apparatus 300 can acquire a necessary key frame reference picture, and thus can appropriately decode a code string.
  • the trigger signal includes, for example, position information indicating the current position of the image decoding device 300.
  • the system 500 holds, for each stored image or image group, position information indicating the location where the image or the images included in the image group were captured.
  • the system 500 selects an image or a group of images captured at a position close to the current position indicated by the trigger signal, and transmits the selected image or group of images to the image decoding device 300.
  • for example, assume that a mobile terminal such as a tablet equipped with the image decoding device 300 displays, from among a large number of surveillance cameras installed nationwide, the video of a camera installed near the current position.
  • in this case, since the image decoding apparatus 300 acquires images related to its position from the system 500, the memory size of the terminal can be reduced and the amount of transmitted data can be reduced compared to acquiring all images.
  • the trigger signal may include position information indicating a position specified by the user.
  • for example, the user remotely designates an area where a suspicious person or the like has appeared.
  • the image decoding apparatus 300 can acquire an image related to the area in advance, so that video switching can be performed smoothly.
  • the trigger signal includes time information.
  • the time information indicates the current time.
  • the system 500 holds, for each stored image or image group, time information indicating the time or time period when the image was captured.
  • the system 500 selects an image or image group captured at a time or in a time period close to the current time, and transmits the selected image or image group to the image decoding apparatus 300.
  • the memory size of the terminal can be reduced and the amount of transmission data can be reduced as compared to acquiring all images.
  • the time information is not limited to the current time, and may indicate a time designated by the user.
  • the time information indicates the time in units of seconds, minutes, hours, days, months, or seasons (groups of several months), or a combination thereof. When a large unit is used for the time information, the system 500 can store images in units of, for example, seasons, so the memory size of the system 500 can be reduced. Further, since the number of images transmitted to the terminal is reduced, the memory size of the terminal can be reduced and the amount of transmitted data can be reduced. In addition, the data amount of the trigger signal can be reduced.
  • the trigger signal may include weather information indicating the weather.
  • the system 500 holds weather information indicating the weather when the image is captured for each stored image or image group. Thereby, the system 500 selects an image or a group of images captured in the same or similar weather as the weather indicated by the trigger information, and transmits the selected image or group of images to the image decoding device 300.
  • the memory size of the terminal can be reduced and the amount of transmission data can be reduced as compared to acquiring all images.
  • this technique may be applied only to cameras installed outdoors, such as surveillance cameras, and to cameras photographing the outdoors. As a result, the amount of data stored in the system 500 can be reduced, and the data amount of the trigger signal can be reduced.
  • the trigger signal may include two or more of the position information, time information, and weather information described above.
  • the system 500 selects an image that meets all of the plurality of conditions indicated by the trigger signal. Since the image can thereby be specified more narrowly, the amount of data transmitted to the image decoding apparatus 300 can be further reduced.
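The all-conditions selection described above could be sketched as follows. The record fields and function name are illustrative assumptions, not defined by the patent.

```python
# Hypothetical sketch of how the system 500 might select stored images that
# meet *all* conditions carried by the trigger signal.
def select_images(stored, trigger):
    """stored: list of dicts with e.g. 'area', 'time_bucket', 'weather' keys."""
    def matches(img):
        # every condition in the trigger signal must hold
        return all(img.get(key) == value for key, value in trigger.items())
    return [img for img in stored if matches(img)]

stored = [
    {"id": 1, "area": "A", "time_bucket": "summer", "weather": "sunny"},
    {"id": 2, "area": "A", "time_bucket": "winter", "weather": "sunny"},
]
# Combining position, time, and weather conditions narrows the selection,
# reducing the data sent to the image decoding apparatus 300.
hits = select_images(stored, {"area": "A", "weather": "sunny", "time_bucket": "summer"})
print([img["id"] for img in hits])
```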
  • the same image as the image transmitted to the image decoding device 300 is transmitted to the image encoding device 200.
  • the image encoding device 200 generates a code string using the same image as the image transmitted to the image decoding device 300, and transmits the generated code string to the image decoding device 300.
  • when the system 500 does not store the image specified by the trigger signal, it may, for example, newly acquire an image that matches the specification by the trigger signal from the image encoding device 200 via a network or the like, and store it. As a result, it is possible to cope with changes in the environment and to improve the encoding efficiency continuously.
  • time information generated in the system 500 may be used as the time information. Thereby, the amount of data transmitted between the image decoding apparatus 300 and the system 500 can be reduced. In this case, the image encoding device 200 and the image decoding device 300 need to share which time information is used. For example, information indicating which time information is used is notified to the image encoding device 200 and the image decoding device 300.
  • the system 500 may include the image decoding device 300. In this case, transmission between the image decoding apparatus 300 and the system 500 does not go through the network.
  • the system 500 holds individual images in association with each time, each place, or each weather, but the present invention is not limited thereto.
  • even if the capture times differ, if the image contents are the same or similar, only one image may be stored in association with a plurality of times. Since the number of images stored in the system 500 can thereby be reduced, the data amount of images stored in the system 500 can be reduced.
  • when the image decoding device 300 cannot acquire the reference image (key frame reference picture) that the image encoding device 200 used for generating the code string, it notifies the system 500 or the user that the reference image cannot be acquired. In that case, the image decoding device 300 may generate a decoded image using, for prediction, another image (an image not shared with the image encoding device 200) acquired by a predetermined method, and display that decoded image. Although this is not the decoded image expected by the image encoding device 200, a similar decoded image can be displayed by using a similar image as the predicted image on the image decoding device 300 side, so that a situation in which no video is displayed can be prevented.
  • each processing unit included in the image encoding device and the image decoding device according to the above embodiment is typically realized as an LSI, which is an integrated circuit. Each of these may be implemented as an individual chip, or a single chip may include some or all of them.
  • the circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor.
  • an FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may be used.
  • each component may be configured by dedicated hardware or may be realized by executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • the image encoding device and the image decoding device include a processing circuit and a storage device (storage) electrically connected to the processing circuit (accessible from the processing circuit).
  • the processing circuit includes at least one of dedicated hardware and a program execution unit. Further, when the processing circuit includes a program execution unit, the storage device stores a software program executed by the program execution unit. The processing circuit executes the image encoding method or the image decoding method according to the above embodiment using the storage device.
  • the present invention may be the software program or a non-transitory computer-readable recording medium on which the program is recorded.
  • the program can be distributed via a transmission medium such as the Internet.
  • the division of functional blocks in the block diagram is an example; a plurality of functional blocks may be realized as one functional block, one functional block may be divided into a plurality, or some functions may be transferred to another functional block.
  • the functions of a plurality of functional blocks having similar functions may be processed in parallel or in a time-division manner by single hardware or software.
  • the order in which the steps included in the prediction image generation method, the encoding method, or the decoding method are executed is illustrative, given in order to specifically describe the present invention, and an order other than the above may be used. Also, some of the above steps may be executed simultaneously (in parallel) with other steps.
  • each embodiment may be realized by centralized processing using a single device (system), or may be realized by distributed processing using a plurality of devices.
  • the computer that executes the program may be singular or plural. That is, centralized processing may be performed, or distributed processing may be performed.
  • as described above, the prediction image generation device, the encoding device, and the decoding device according to one or more aspects of the present invention have been described based on the embodiment; however, the present invention is not limited to this embodiment. Forms obtained by applying various modifications conceived by those skilled in the art to the present embodiment, and forms constructed by combining components of different embodiments, may also be included within the scope of one or more aspects of the present invention, as long as they do not deviate from the gist of the present invention.
  • the storage medium may be any medium that can record a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory.
  • the system has an image encoding / decoding device including an image encoding device using an image encoding method and an image decoding device using an image decoding method.
  • Other configurations in the system can be appropriately changed according to circumstances.
  • FIG. 18 is a diagram showing an overall configuration of a content supply system ex100 that realizes a content distribution service.
  • a communication service providing area is divided into cells of a desired size, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in each cell.
  • in the content supply system ex100, a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, a game machine ex115, and the like are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104, and the base stations ex106 to ex110.
  • each device may be directly connected to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations.
  • the devices may be directly connected to each other via short-range wireless or the like.
  • the camera ex113 is a device, such as a digital video camera, that can shoot moving images.
  • the camera ex116 is a device, such as a digital still camera, that can shoot still images and moving images.
  • the mobile phone ex114 may be a GSM (registered trademark) (Global System for Mobile Communications) phone, a CDMA (Code Division Multiple Access) phone, a W-CDMA (Wideband-Code Division Multiple Access) phone, an LTE (Long Term Evolution) phone, an HSPA (High Speed Packet Access) mobile phone, a PHS (Personal Handyphone System), or the like.
  • the camera ex113 and the like are connected to the streaming server ex103 through the base station ex109 and the telephone network ex104, thereby enabling live distribution and the like.
  • in live distribution, content shot by a user using the camera ex113 (for example, music live video) is encoded as described in each of the above embodiments (that is, the camera functions as an image encoding device according to one aspect of the present invention) and transmitted to the streaming server ex103.
  • the streaming server ex103 distributes the transmitted content data as a stream to clients that have made a request. Examples of the client include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the game machine ex115 that can decode the encoded data.
  • Each device that receives the distributed data decodes the received data and reproduces it (that is, functions as an image decoding device according to one embodiment of the present invention).
  • the encoding of the captured data may be performed by the camera ex113, by the streaming server ex103 that performs the data transmission processing, or shared between them.
  • similarly, the decoding of the distributed data may be performed by the client, by the streaming server ex103, or shared between them.
  • still images and / or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
  • the encoding process in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or may be performed in a shared manner.
  • these encoding and decoding processes are generally performed in the LSI ex500 included in the computer ex111 or in each device.
  • the LSI ex500 may be configured as a single chip or a plurality of chips.
  • software for moving image encoding and decoding may be incorporated into some recording medium (such as a CD-ROM, flexible disk, or hard disk) readable by the computer ex111 or the like, and the encoding and decoding processes may be performed using that software.
  • furthermore, when the mobile phone ex114 is equipped with a camera, moving image data acquired by that camera may be transmitted. The moving image data at this time is data encoded by the LSI ex500 included in the mobile phone ex114.
  • the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, and distribute data in a distributed manner.
  • the encoded data can be received and reproduced by the client.
  • information transmitted by a user can be received, decoded, and reproduced by clients in real time, so that even a user who does not have special rights or facilities can realize personal broadcasting.
  • at least one of the moving image encoding device (image encoding device) and the moving image decoding device (image decoding device) according to each of the above embodiments can also be incorporated into the digital broadcasting system ex200.
  • specifically, in the broadcast station ex201, multiplexed data obtained by multiplexing music data and the like onto video data is transmitted to a communication or broadcasting satellite ex202 via radio waves.
  • This video data is data encoded by the moving image encoding method described in each of the above embodiments (that is, data encoded by the image encoding apparatus according to one aspect of the present invention).
  • the broadcasting satellite ex202 transmits a radio wave for broadcasting, and this radio wave is received by a home antenna ex204 capable of receiving satellite broadcasting.
  • the received multiplexed data is decoded and reproduced by an apparatus such as the television (receiver) ex300 or the set top box (STB) ex217 (that is, functions as an image decoding apparatus according to one embodiment of the present invention).
  • the moving picture decoding apparatus or the moving picture encoding apparatus described in each of the above embodiments can also be mounted in a reader/recorder ex218 that reads and decodes multiplexed data recorded on a recording medium ex215 such as a DVD or BD, or that encodes a video signal onto the recording medium ex215, in some cases multiplexing it with a music signal and writing it. In this case, the reproduced video signal is displayed on a monitor ex219, and the video signal can be reproduced by another device or system using the recording medium ex215 on which the multiplexed data is recorded.
  • a moving picture decoding apparatus may also be mounted in a set-top box ex217 connected to a cable ex203 for cable television or to the antenna ex204 for satellite/terrestrial broadcasting, and the video may be displayed on the monitor ex219 of the television.
  • the moving picture decoding apparatus may be incorporated in the television instead of the set top box.
  • FIG. 20 is a diagram illustrating a television (receiver) ex300 that uses the video decoding method and the video encoding method described in each of the above embodiments.
  • via the antenna ex204 or the cable ex203 that receives the broadcast, the television ex300 obtains or outputs multiplexed data in which audio data is multiplexed onto video data, and includes a modulation/demodulation unit ex302 that demodulates the received multiplexed data or modulates multiplexed data to be transmitted to the outside, and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes video data and audio data encoded by the signal processing unit ex306.
  • the television ex300 also includes: a signal processing unit ex306 having an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data, respectively, or encode the respective information (functioning as the image encoding device or the image decoding device according to one aspect of the present invention); and an output unit ex309 having a speaker ex307 that outputs the decoded audio signal and a display unit ex308, such as a display, that displays the decoded video signal. Furthermore, the television ex300 includes an interface unit ex317 having an operation input unit ex312 that receives input of user operations, a control unit ex310 that performs overall control of each unit, and a power supply circuit unit ex311 that supplies power to each unit.
  • in addition to the operation input unit ex312, the interface unit ex317 may include a bridge unit ex313 connected to an external device such as the reader/recorder ex218, a slot for attaching a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like.
  • the recording medium ex216 can electrically record information using a nonvolatile/volatile semiconductor memory element that it contains.
  • the parts of the television ex300 are connected to one another via a synchronous bus.
  • the television ex300 receives a user operation from the remote controller ex220 or the like, and demultiplexes the multiplexed data demodulated by the modulation / demodulation unit ex302 by the multiplexing / demultiplexing unit ex303 based on the control of the control unit ex310 having a CPU or the like. Furthermore, in the television ex300, the separated audio data is decoded by the audio signal processing unit ex304, and the separated video data is decoded by the video signal processing unit ex305 using the decoding method described in each of the above embodiments.
  • the decoded audio signal and video signal are output from the output unit ex309 to the outside. At the time of output, these signals may be temporarily stored in the buffers ex318, ex319, etc. so that the audio signal and the video signal are reproduced in synchronization. Also, the television ex300 may read multiplexed data from recording media ex215 and ex216 such as a magnetic / optical disk and an SD card, not from broadcasting. Next, a configuration in which the television ex300 encodes an audio signal or a video signal and transmits the signal to the outside or to a recording medium will be described.
  • the television ex300 receives a user operation from the remote controller ex220 or the like, and, based on the control of the control unit ex310, encodes the audio signal with the audio signal processing unit ex304 and encodes the video signal with the video signal processing unit ex305 using the encoding method described in each of the above embodiments.
  • the encoded audio signal and video signal are multiplexed by the multiplexing / demultiplexing unit ex303 and output to the outside. When multiplexing, these signals may be temporarily stored in the buffers ex320, ex321, etc. so that the audio signal and the video signal are synchronized.
  • a plurality of buffers ex318, ex319, ex320, and ex321 may be provided as illustrated, or one or more buffers may be shared. Further, in addition to the illustrated example, data may be stored in a buffer between, for example, the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303, as a buffering measure that prevents overflow and underflow of the system.
  • in addition to acquiring audio and video data from broadcasts or recording media, the television ex300 may have a configuration for receiving AV input from a microphone and a camera, and may perform encoding processing on the data acquired from them.
  • here, the television ex300 has been described as a configuration capable of the above-described encoding processing, multiplexing, and external output, but it may instead be a configuration in which these processes cannot be performed and only the above-described reception, decoding processing, and external output are possible.
  • when multiplexed data is read from or written to a recording medium by the reader/recorder ex218, the decoding process or the encoding process may be performed by either the television ex300 or the reader/recorder ex218, or the television ex300 and the reader/recorder ex218 may share the processing with each other.
  • FIG. 21 shows a configuration of the information reproducing / recording unit ex400 when data is read from or written to an optical disk.
  • the information reproducing / recording unit ex400 includes elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 described below.
  • the optical head ex401 irradiates a laser spot on the recording surface of the recording medium ex215 that is an optical disk to write information, and detects information reflected from the recording surface of the recording medium ex215 to read the information.
  • the modulation recording unit ex402 electrically drives a semiconductor laser built in the optical head ex401 and modulates the laser beam according to the recording data.
  • the reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, separates and demodulates the signal component recorded on the recording medium ex215, and reproduces the necessary information.
  • the buffer ex404 temporarily holds information to be recorded on the recording medium ex215 and information reproduced from the recording medium ex215.
  • the disk motor ex405 rotates the recording medium ex215.
  • the servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs a laser spot tracking process.
  • the system control unit ex407 controls the entire information reproduction / recording unit ex400.
  • the system control unit ex407 uses various types of information held in the buffer ex404, generates and adds new information as necessary, and realizes recording and reproduction of information through the optical head ex401 while operating the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 in a coordinated manner.
  • the system control unit ex407 includes, for example, a microprocessor, and executes these processes by executing a read / write program.
  • in the above description, the optical head ex401 has been described as irradiating a laser spot, but a configuration that performs higher-density recording using near-field light may be used.
  • FIG. 22 shows a schematic diagram of a recording medium ex215 that is an optical disk.
  • on the recording surface of the recording medium ex215, guide grooves (grooves) are formed in a spiral shape, and address information indicating the absolute position on the disc is recorded in advance on an information track ex230 by changes in the shape of the groove.
  • This address information includes information for specifying the position of the recording block ex231 that is a unit for recording data, and the recording block is specified by reproducing the information track ex230 and reading the address information in a recording or reproducing apparatus.
  • the recording medium ex215 includes a data recording area ex233, an inner peripheral area ex232, and an outer peripheral area ex234.
  • the area used for recording user data is the data recording area ex233, and the inner circumference area ex232 and the outer circumference area ex234 arranged on the inner or outer circumference of the data recording area ex233 are used for specific purposes other than user data recording. Used.
  • the information reproducing / recording unit ex400 reads / writes encoded audio data, video data, or multiplexed data obtained by multiplexing these data with respect to the data recording area ex233 of the recording medium ex215.
  • in the above description, an optical disk such as a single-layer DVD or BD has been described as an example, but the present invention is not limited to these; an optical disk that has a multilayer structure and can record on parts other than the surface may be used. Further, an optical disk with a multi-dimensional recording/reproducing structure may be used, such as one that records information using light of different wavelengths at the same place on the disc, or records layers of different information from various angles.
  • the car ex210 having the antenna ex205 can receive data from the satellite ex202 or the like, and a moving image can be reproduced on a display device such as the car navigation system ex211 included in the car ex210.
  • the configuration of the car navigation ex211 may be, for example, the configuration shown in FIG. 20 with a GPS receiving unit added, and the same may be considered for the computer ex111, the mobile phone ex114, and the like.
  • FIG. 23A is a diagram showing the mobile phone ex114 using the video decoding method and the video encoding method described in the above embodiment.
  • the mobile phone ex114 includes an antenna ex350 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex365 capable of capturing video and still images, and a display unit ex358, such as a liquid crystal display, for displaying decoded data such as video captured by the camera unit ex365 and video received by the antenna ex350.
  • the mobile phone ex114 further includes a main body unit having an operation key unit ex366; an audio output unit ex357 such as a speaker for outputting audio; an audio input unit ex356 such as a microphone for inputting audio; a memory unit ex367 for storing encoded or decoded data such as captured video, still images, recorded audio, received video, still images, and mails; and a slot unit ex364 serving as an interface unit with a recording medium that likewise stores data.
  • in the mobile phone ex114, a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, an LCD (Liquid Crystal Display) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367 are connected via a bus ex370 to a main control unit ex360 that comprehensively controls each part of the main body including the display unit ex358 and the operation key unit ex366.
  • the power supply circuit unit ex361 activates the mobile phone ex114 into an operable state by supplying power from the battery pack to each unit.
  • the cellular phone ex114 converts the audio signal collected by the audio input unit ex356 in the voice call mode into a digital audio signal by the audio signal processing unit ex354 based on the control of the main control unit ex360 having a CPU, a ROM, a RAM, and the like. Then, this is subjected to spectrum spread processing by the modulation / demodulation unit ex352, digital-analog conversion processing and frequency conversion processing are performed by the transmission / reception unit ex351, and then transmitted via the antenna ex350.
  • the mobile phone ex114 also amplifies the data received via the antenna ex350 in the voice call mode, performs frequency conversion processing and analog-digital conversion processing on it, performs spectrum despreading processing in the modulation/demodulation unit ex352, converts it into an analog audio signal in the audio signal processing unit ex354, and then outputs it from the audio output unit ex357.
  • when an e-mail is transmitted in the data communication mode, the text data of the e-mail input by operating the operation key unit ex366 of the main body is sent to the main control unit ex360 via the operation input control unit ex362.
  • the main control unit ex360 performs spread spectrum processing on the text data in the modulation / demodulation unit ex352, performs digital analog conversion processing and frequency conversion processing in the transmission / reception unit ex351, and then transmits the text data to the base station ex110 via the antenna ex350.
  • when an e-mail is received, substantially the reverse process is performed on the received data, and the result is output to the display unit ex358.
  • the video signal processing unit ex355 compression-encodes the video signal supplied from the camera unit ex365 by the moving image encoding method described in each of the above embodiments (that is, it functions as an image encoding device according to one aspect of the present invention), and sends the encoded video data to the multiplexing/demultiplexing unit ex353.
  • at the same time, the audio signal processing unit ex354 encodes the audio signal picked up by the audio input unit ex356 while the camera unit ex365 is capturing video, still images, or the like, and sends the encoded audio data to the multiplexing/demultiplexing unit ex353.
  • the multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354 by a predetermined method. The resulting multiplexed data is subjected to spread spectrum processing by the modulation/demodulation unit (modulation/demodulation circuit unit) ex352, subjected to digital-analog conversion processing and frequency conversion processing by the transmission/reception unit ex351, and then transmitted via the antenna ex350.
  • to decode the multiplexed data received via the antenna ex350, the multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a bit stream of video data and a bit stream of audio data, supplies the encoded video data to the video signal processing unit ex355 via the synchronization bus ex370, and supplies the encoded audio data to the audio signal processing unit ex354.
  • the video signal processing unit ex355 decodes the video signal by a moving image decoding method corresponding to the moving image encoding method described in each of the above embodiments (that is, it functions as an image decoding device according to one aspect of the present invention), and video and still images included in, for example, a moving image file linked to a home page are displayed on the display unit ex358 via the LCD control unit ex359.
  • the audio signal processing unit ex354 decodes the audio signal, and the audio is output from the audio output unit ex357.
  • a terminal such as the mobile phone ex114 is conceivable in three implementation forms: a transmission/reception terminal having both an encoder and a decoder, a transmission terminal having only an encoder, and a receiving terminal having only a decoder.
  • in the digital broadcasting system ex200, it has been described that multiplexed data obtained by multiplexing music data or the like onto video data is received and transmitted; however, in addition to audio data, the data may be data in which character data or the like related to the video is multiplexed, or may be the video data itself rather than multiplexed data.
  • as described above, the moving picture encoding method or the moving picture decoding method shown in each of the above embodiments can be used in any of the devices and systems described above, whereby the effects described in each of the above embodiments can be obtained.
  • multiplexed data obtained by multiplexing audio data or the like with video data is configured to include identification information indicating which standard the video data conforms to.
  • FIG. 24 is a diagram showing a structure of multiplexed data.
  • multiplexed data is obtained by multiplexing one or more of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream.
  • the video stream indicates the main video and sub-video of the movie
  • the audio stream indicates the main audio portion of the movie and the sub-audio to be mixed with the main audio.
  • the presentation graphics stream indicates the subtitles of the movie.
  • the main video indicates the normal video displayed on the screen, and the sub-video is video displayed on a smaller screen within the main video.
  • the interactive graphics stream indicates an interactive screen created by arranging GUI components on the screen.
  • the video stream is encoded by the moving image encoding method or apparatus shown in each of the above embodiments, or by a moving image encoding method or apparatus conforming to a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1.
  • the audio stream is encoded by a method such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.
  • each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the main video of a movie, 0x1100 to 0x111F to the audio streams, 0x1200 to 0x121F to the presentation graphics streams, 0x1400 to 0x141F to the interactive graphics streams, 0x1B00 to 0x1B1F to video streams used for the sub-video, and 0x1A00 to 0x1A1F to audio streams used for the sub-audio to be mixed with the main audio.
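The example PID assignments above can be sketched as a small classifier. This follows only the ranges listed in the text (a BD-ROM convention) and is not an exhaustive mapping.

```python
# Classify an elementary stream by the example PID ranges given above.
def stream_type_for_pid(pid: int) -> str:
    if pid == 0x1011:
        return "video (main)"
    if 0x1100 <= pid <= 0x111F:
        return "audio"
    if 0x1200 <= pid <= 0x121F:
        return "presentation graphics"
    if 0x1400 <= pid <= 0x141F:
        return "interactive graphics"
    if 0x1B00 <= pid <= 0x1B1F:
        return "video (sub)"
    if 0x1A00 <= pid <= 0x1A1F:
        return "audio (sub, mixed with main)"
    return "other"  # e.g. PAT, PMT, PCR, or streams outside these ranges

print(stream_type_for_pid(0x1011))  # -> video (main)
```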
  • FIG. 25 is a diagram schematically showing how multiplexed data is multiplexed.
  • a video stream ex235 composed of a plurality of video frames and an audio stream ex238 composed of a plurality of audio frames are converted into PES packet sequences ex236 and ex239, respectively, and converted into TS packets ex237 and ex240.
  • similarly, data of the presentation graphics stream ex241 and the interactive graphics stream ex244 are converted into PES packet sequences ex242 and ex245, respectively, and further converted into TS packets ex243 and ex246.
  • the multiplexed data ex247 is configured by multiplexing these TS packets into one stream.
  • FIG. 26 shows in more detail how the video stream is stored in the PES packet sequence.
  • the first row in FIG. 26 shows a video frame sequence of the video stream.
  • the second row shows a PES packet sequence.
  • in the video stream, a plurality of Video Presentation Units, namely I pictures, B pictures, and P pictures, are divided picture by picture and stored in the payloads of PES packets.
  • Each PES packet has a PES header, and a PTS (Presentation Time-Stamp) that is a display time of a picture and a DTS (Decoding Time-Stamp) that is a decoding time of a picture are stored in the PES header.
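To make the PTS/DTS fields concrete: in MPEG-2 Systems (a detail taken from that specification, not from the text above), each timestamp is a 33-bit counter on a 90 kHz clock, packed into 5 bytes of the PES header and interleaved with marker bits. A hedged sketch of that packing:

```python
# Sketch of the MPEG-2 Systems PTS/DTS field layout (assumed from the
# specification): 33-bit, 90 kHz timestamp in 5 bytes with marker bits.
def pack_timestamp(prefix: int, ts: int) -> bytes:
    """prefix: 4-bit code ('0010' for a PTS); ts: 33-bit 90 kHz timestamp."""
    return bytes([
        (prefix << 4) | (((ts >> 30) & 0x07) << 1) | 1,  # top 3 bits + marker
        (ts >> 22) & 0xFF,
        (((ts >> 15) & 0x7F) << 1) | 1,                  # next 7 bits + marker
        (ts >> 7) & 0xFF,
        ((ts & 0x7F) << 1) | 1,                          # low 7 bits + marker
    ])

def unpack_timestamp(b: bytes) -> int:
    return (((b[0] >> 1) & 0x07) << 30) | (b[1] << 22) | \
           (((b[2] >> 1) & 0x7F) << 15) | (b[3] << 7) | ((b[4] >> 1) & 0x7F)
```

A PTS of 90000 ticks corresponds to a presentation time of one second.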
  • FIG. 27 shows the format of TS packets that are finally written in the multiplexed data.
  • the TS packet is a 188-byte fixed-length packet composed of a 4-byte TS header having information such as a PID for identifying a stream and a 184-byte TS payload for storing data.
  • the PES packet is divided and stored in the TS payload.
  • a 4-byte TP_Extra_Header is added to each TS packet to form a 192-byte source packet, which is written in the multiplexed data.
  • in the TP_Extra_Header, information such as the ATS (Arrival_Time_Stamp) is described.
  • ATS indicates the transfer start time of the TS packet to the PID filter of the decoder.
  • source packets are arranged as shown in the lower part of FIG. 27, and the number incremented from the head of the multiplexed data is called SPN (source packet number).
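A minimal parsing sketch of the layout just described: a 192-byte source packet is a 4-byte TP_Extra_Header followed by a 188-byte TS packet, and the 13-bit PID sits in the TS header. The 30-bit width assumed for the ATS is common in BD-style source packets but is not stated in the text above:

```python
# Sketch (assumed bit widths): split a 192-byte source packet into the
# TP_Extra_Header (30-bit ATS in the low bits) and the 188-byte TS packet,
# then pull the 13-bit PID from the TS header.
SOURCE_PACKET_SIZE = 192
TS_PACKET_SIZE = 188

def parse_source_packet(sp: bytes):
    assert len(sp) == SOURCE_PACKET_SIZE
    header = int.from_bytes(sp[:4], "big")
    ats = header & 0x3FFFFFFF            # low 30 bits: Arrival_Time_Stamp
    ts = sp[4:]
    assert ts[0] == 0x47                 # TS sync byte
    pid = ((ts[1] & 0x1F) << 8) | ts[2]  # 13-bit PID
    return ats, pid
```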
  • TS packets included in the multiplexed data include PAT (Program Association Table), PMT (Program Map Table), PCR (Program Clock Reference), and the like in addition to each stream such as video / audio / caption.
  • PAT indicates what the PID of the PMT used in the multiplexed data is, and the PID of the PAT itself is registered as 0.
  • the PMT has the PID of each stream such as video / audio / subtitles included in the multiplexed data and the attribute information of the stream corresponding to each PID, and has various descriptors related to the multiplexed data.
  • the descriptor includes copy control information for instructing permission / non-permission of copying of multiplexed data.
  • the PCR carries STC (System Time Clock) time information corresponding to the ATS at which the PCR packet is transferred to the decoder, so that the arrival-time axis of the ATSs can be synchronized with the time axis of the PTSs and DTSs.
  • FIG. 28 is a diagram for explaining the data structure of the PMT in detail.
  • a PMT header describing the length of data included in the PMT is arranged at the head of the PMT.
  • after the PMT header, a plurality of descriptors related to the multiplexed data are arranged.
  • the copy control information and the like are described as descriptors.
  • a plurality of pieces of stream information regarding each stream included in the multiplexed data are arranged.
  • the stream information includes a stream descriptor in which a stream type for identifying the compression codec of the stream, the stream PID, and stream attribute information (frame rate, aspect ratio, etc.) are described.
  • the multiplexed data is recorded together with the multiplexed data information file.
  • the multiplexed data information file is management information of multiplexed data, has a one-to-one correspondence with the multiplexed data, and includes multiplexed data information, stream attribute information, and an entry map.
  • the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time as shown in FIG.
  • the system rate indicates a maximum transfer rate of multiplexed data to a PID filter of a system target decoder described later.
  • the ATS interval included in the multiplexed data is set to be equal to or less than the system rate.
  • the playback start time is the PTS of the first video frame of the multiplexed data
  • the playback end time is set by adding the playback interval for one frame to the PTS of the video frame at the end of the multiplexed data.
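The playback end time above is simple arithmetic on the 90 kHz system clock: the last frame's PTS plus one frame interval. A sketch (the function name is ours):

```python
# Illustrative sketch: playback end time = PTS of the last video frame
# plus one frame interval, both in 90 kHz clock ticks.
CLOCK_HZ = 90000

def playback_end_time(last_frame_pts: int, frame_rate: float) -> int:
    return last_frame_pts + round(CLOCK_HZ / frame_rate)
```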
  • the attribute information for each stream included in the multiplexed data is registered for each PID.
  • the attribute information has different information for each video stream, audio stream, presentation graphics stream, and interactive graphics stream.
  • the video stream attribute information carries information such as the compression codec used to compress the video stream, the resolution of the individual picture data constituting the video stream, the aspect ratio, and the frame rate.
  • the audio stream attribute information carries information such as the compression codec used to compress the audio stream, the number of channels included in the audio stream, the supported language, and the sampling frequency. These pieces of information are used to initialize the decoder before playback by the player.
  • in the present embodiment, the stream type included in the PMT or the video stream attribute information included in the multiplexed data information is used to identify the video data.
  • unique information indicating video data generated by the moving picture encoding method or apparatus shown in each of the above embodiments is set in the stream type included in the PMT or in the video stream attribute information.
  • FIG. 31 shows the steps of the moving picture decoding method according to the present embodiment.
  • in step exS100, the stream type included in the PMT or the video stream attribute information included in the multiplexed data information is acquired from the multiplexed data.
  • in step exS101, it is determined whether or not the stream type or the video stream attribute information indicates multiplexed data generated by the moving picture encoding method or apparatus described in the above embodiments.
  • when it does, in step exS102, decoding is performed by the moving picture decoding method shown in the above embodiments.
  • when the stream type or the video stream attribute information instead indicates conformity with a conventional standard, decoding is performed by a moving picture decoding method compliant with that standard.
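The branch just described (steps exS100 to exS102 of FIG. 31) amounts to a dispatch on the stream type or attribute information. In the sketch below, the value 0x24 is purely hypothetical, standing in for whatever unique value marks data generated by the encoding method of the above embodiments:

```python
# Illustrative sketch of the decision in FIG. 31. The stream-type value
# 0x24 is hypothetical, a stand-in for the unique value set by the
# encoding method of the above embodiments.
NEW_METHOD_STREAM_TYPES = {0x24}  # hypothetical marker value

def select_decoder(stream_type: int) -> str:
    # exS101: does the stream type indicate data generated by the
    # moving picture encoding method of the above embodiments?
    if stream_type in NEW_METHOD_STREAM_TYPES:
        return "embodiment decoder"   # exS102
    return "conventional decoder"     # MPEG-2 / MPEG4-AVC / VC-1 path
```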
  • FIG. 32 shows a configuration of an LSI ex500 that is made into one chip.
  • the LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and each element is connected via a bus ex510.
  • the power supply circuit unit ex505 is activated to an operable state by supplying power to each unit when the power supply is on.
  • under the control of the control unit ex501, which includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, the LSI ex500 receives an AV signal from the microphone ex117, the camera ex113, and the like through the AV I/O ex509.
  • the input AV signal is temporarily stored in an external memory ex511 such as SDRAM.
  • the accumulated data is divided into portions as appropriate according to the processing amount and the processing speed and sent to the signal processing unit ex507, where the audio signal and/or the video signal is encoded.
  • the encoding process of the video signal is the encoding process described in the above embodiments.
  • the signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data according to circumstances, and outputs the result from the stream I / Oex 506 to the outside.
  • the output multiplexed data is transmitted to the base station ex107 or written to the recording medium ex215. When multiplexing, the data should be temporarily stored in the buffer ex508 so that the data sets are synchronized.
  • although the memory ex511 is described here as a component external to the LSI ex500, the memory ex511 may instead be included in the LSI ex500.
  • the number of buffers ex508 is not limited to one, and a plurality of buffers may be provided.
  • the LSI ex500 may be made into one chip or a plurality of chips.
  • control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, but the configuration of the control unit ex501 is not limited to this configuration.
  • the signal processing unit ex507 may further include a CPU.
  • the CPU ex502 may be configured to include the signal processing unit ex507 or, for example, an audio signal processing unit that is a part of the signal processing unit ex507.
  • in that case, the control unit ex501 includes the signal processing unit ex507, or the CPU ex502 having a part of the signal processing unit ex507.
  • the name used here is LSI, but it may also be called IC, system LSI, super LSI, or ultra LSI depending on the degree of integration.
  • the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
  • an FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor whose circuit cell connections and settings can be reconfigured, may also be used.
  • such a programmable logic device can typically execute the moving picture encoding method or the moving picture decoding method described in each of the above embodiments by loading or reading from a memory a program constituting the software or firmware.
  • FIG. 33 shows a configuration ex800 in the present embodiment.
  • the drive frequency switching unit ex803 sets the drive frequency high when the video data was generated by the moving picture encoding method or apparatus described in the above embodiments, and instructs the decoding processing unit ex801, which executes the moving picture decoding method described in each of the above embodiments, to decode the video data.
  • when the video data conforms to a conventional standard, the drive frequency is set lower than in the case where the video data was generated by the moving picture encoding method or apparatus shown in the above embodiments, and the decoding processing unit ex802 compliant with the conventional standard is instructed to decode the video data.
  • the drive frequency switching unit ex803 includes the CPU ex502 and the drive frequency control unit ex512 in FIG.
  • the decoding processing unit ex801 that executes the moving picture decoding method shown in each of the above embodiments and the decoding processing unit ex802 that complies with the conventional standard correspond to the signal processing unit ex507 in FIG.
  • the CPU ex502 identifies which standard the video data conforms to. Then, based on the signal from the CPU ex502, the drive frequency control unit ex512 sets the drive frequency. Further, based on the signal from the CPU ex502, the signal processing unit ex507 decodes the video data.
  • the identification information described in the fifth embodiment may be used.
  • the identification information is not limited to that described in the fifth embodiment; any information that indicates which standard the video data conforms to may be used. For example, when it can be determined from an external signal whether the video data is to be used for a television or for a disk, the identification may be performed based on such an external signal. The selection of the drive frequency in the CPU ex502 may also be performed based on, for example, a look-up table in which video data standards are associated with drive frequencies as shown in FIG. The look-up table is stored in the buffer ex508 or in an internal memory of the LSI, and the CPU ex502 can select the drive frequency by referring to it.
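A look-up table of the kind suggested above might be sketched as follows; the frequency values and key names are invented for illustration, the only constraint taken from the text being that video data generated by the method of the above embodiments gets the higher drive frequency:

```python
# Illustrative sketch: a look-up table associating a video data standard
# with a drive frequency. The MHz values are invented; only the ordering
# (new method -> higher frequency) follows the text.
DRIVE_FREQ_MHZ = {
    "embodiment standard": 500,  # data generated by the new method: set high
    "MPEG-2": 350,
    "MPEG4-AVC": 350,
    "VC-1": 350,
}

def select_drive_frequency(standard: str) -> int:
    # Fall back to the conventional (lower) frequency for unknown input.
    return DRIVE_FREQ_MHZ.get(standard, 350)
```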
  • FIG. 34 shows steps for executing the method of the present embodiment.
  • the signal processing unit ex507 acquires identification information from the multiplexed data.
  • the CPU ex502 identifies whether the video data is generated by the encoding method or apparatus described in each of the above embodiments based on the identification information.
  • when the identification information indicates that the video data was generated by the encoding method or apparatus described in each of the above embodiments, the CPU ex502 sends a signal for setting the drive frequency high to the drive frequency control unit ex512, which then sets a high drive frequency.
  • otherwise, in step exS203, the CPU ex502 sends a signal for setting the drive frequency low to the drive frequency control unit ex512, which then sets the drive frequency lower than in the case where the video data was generated by the encoding method or apparatus described in the above embodiments.
  • the power saving effect can be further enhanced by changing the voltage applied to the LSI ex500 or the device including the LSI ex500 in conjunction with the switching of the driving frequency. For example, when the drive frequency is set low, it is conceivable that the voltage applied to the LSI ex500 or the device including the LSI ex500 is set low as compared with the case where the drive frequency is set high.
  • the method of setting the drive frequency is not limited to the one described above; for example, a high drive frequency may be set when the amount of processing for decoding is large, and a low drive frequency when the amount of processing is small.
  • for example, when the amount of processing for decoding video data compliant with the MPEG4-AVC standard is larger than the amount of processing for decoding video data generated by the moving picture encoding method or apparatus described in the above embodiments, the drive frequency settings may be the reverse of those described above.
  • furthermore, the method of setting the drive frequency is not limited to lowering the drive frequency.
  • for example, when the identification information indicates video data generated by the moving picture encoding method or apparatus described in the above embodiments, the voltage applied to the LSI ex500 or the apparatus including the LSI ex500 may be set high, and when it indicates video data compliant with a conventional standard, the voltage may be set low.
  • as another example, when the identification information indicates video data compliant with a conventional standard, the driving of the CPU ex502 may be temporarily stopped because there is spare processing capacity. Even when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the CPU ex502 may be temporarily stopped if there is spare capacity; in this case, the stop time may be set shorter than in the case where the video data conforms to the conventional standards such as MPEG-2, MPEG4-AVC, and VC-1.
  • a plurality of video data that conforms to different standards may be input to the above-described devices and systems such as a television and a mobile phone.
  • the signal processing unit ex507 of the LSI ex500 needs to support a plurality of standards in order to be able to decode even when a plurality of video data complying with different standards is input.
  • if a signal processing unit ex507 corresponding to each standard is provided individually, there is a problem that the circuit scale of the LSI ex500 increases and the cost rises.
  • in view of this, a configuration is conceivable in which the decoding processing unit for executing the moving picture decoding method shown in each of the above embodiments and a decoding processing unit compliant with a standard such as MPEG-2, MPEG4-AVC, or VC-1 are partly shared.
  • An example of this configuration is shown as ex900 in FIG. 36A.
  • the moving picture decoding method shown in each of the above embodiments and a moving picture decoding method compliant with the MPEG4-AVC standard share some processing contents, such as entropy decoding, inverse quantization, the deblocking filter, and motion compensation.
  • for the common processing contents, the decoding processing unit ex902 corresponding to the MPEG4-AVC standard is shared, and for the other processing contents specific to one aspect of the present invention that do not correspond to the MPEG4-AVC standard, a configuration using a dedicated decoding processing unit ex901 is conceivable.
  • in particular, since one aspect of the present invention is characterized by key frame processing, a dedicated decoding processing unit ex901 may be used for key frame processing, and a decoding processing unit may be shared for some or all of the other processes, namely entropy decoding, inverse quantization, the deblocking filter, and motion compensation.
  • conversely, the decoding processing unit for executing the moving picture decoding method described in each of the above embodiments may be shared, and a dedicated decoding processing unit may be used for the processing contents specific to the MPEG4-AVC standard.
  • ex1000 in FIG. 36B shows another example in which processing is partially shared.
  • a dedicated decoding processing unit ex1001 corresponding to the processing content specific to one aspect of the present invention
  • a dedicated decoding processing unit ex1002 corresponding to the processing content specific to another conventional standard
  • a common decoding processing unit ex1003 corresponding to the processing contents common to the moving picture decoding method according to one aspect of the present invention and the conventional moving picture decoding method.
  • the dedicated decoding processing units ex1001 and ex1002 need not be specialized for one aspect of the present invention or for the processing contents specific to other conventional standards, and may be capable of executing other general-purpose processing.
  • the configuration of the present embodiment can be implemented by LSI ex500.
  • by sharing the decoding processing unit for the processing contents common to the moving picture decoding method according to one aspect of the present invention and the conventional-standard moving picture decoding method, the circuit scale of the LSI can be reduced and the cost can be lowered.
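The partial sharing of ex900/ex1000 can be sketched as a pipeline in which only the key-frame stage is standard-specific while the remaining stages form one shared unit; all names below are invented for illustration:

```python
# Illustrative sketch: shared stages (ex1003-style common unit) plus a
# per-standard key-frame stage (ex1001/ex1002-style dedicated units).
SHARED_STAGES = ["entropy decoding", "inverse quantization",
                 "deblocking filter", "motion compensation"]

def decode_pipeline(standard: str) -> list:
    # Only the key-frame handling differs between the method of the
    # above embodiments and a conventional standard.
    if standard == "embodiment":
        key_frame_stage = "embodiment key-frame inter prediction"
    else:
        key_frame_stage = standard + " key-frame handling"
    return [key_frame_stage] + SHARED_STAGES
```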
  • the present invention can be applied to an image encoding method, an image decoding method, an image encoding device, and an image decoding device.
  • the present invention can also be used for high-resolution information display devices or imaging devices such as televisions, digital video recorders, car navigation systems, mobile phones, digital cameras, and digital video cameras that include an image encoding device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention provides an image encoding method capable of efficiently encoding images, and an image decoding method capable of efficiently decoding images. The image encoding method is a method for encoding a plurality of images and comprises: a selection step (S101) of selecting a randomly accessible key frame (104) from among the plurality of images; and encoding steps (S103, S204) of encoding the key frame (104) using inter prediction that references a key frame reference picture different from the key frame (104).
PCT/JP2015/002969 2014-07-03 2015-06-15 Image encoding method, image decoding method, image encoding device, and image decoding device WO2016002140A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201462020542P 2014-07-03 2014-07-03
US62/020,542 2014-07-03
JP2015077493A JP2018142752A (ja) Image encoding method, image decoding method, image encoding device, and image decoding device
JP2015-077493 2015-04-06

Publications (1)

Publication Number Publication Date
WO2016002140A1 true WO2016002140A1 (fr) 2016-01-07

Family

ID=55018721

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2015/002969 WO2016002140A1 (fr) 2014-07-03 2015-06-15 Procédé de codage d'image, procédé de décodage d'image, dispositif de codage d'image et dispositif de décodage d'image

Country Status (1)

Country Link
WO (1) WO2016002140A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002281508A (ja) * 2001-03-19 2002-09-27 Kddi Corp Moving picture encoding device with skip-region detection, and recording medium
JP2005340896A (ja) * 2004-05-24 2005-12-08 Mitsubishi Electric Corp Moving picture encoding device
WO2006003814A1 (fr) * 2004-07-01 2006-01-12 Mitsubishi Denki Kabushiki Kaisha Random-access video information recording medium, recording method, playback device, and playback method
JP2007535208A (ja) * 2004-04-28 2007-11-29 Matsushita Electric Industrial Co., Ltd. Stream generation device and method, stream playback device and method, and recording medium
WO2013031785A1 (fr) * 2011-09-01 2013-03-07 NEC Corporation Method and system for compressing and transmitting captured images
WO2013074410A1 (fr) * 2011-11-16 2013-05-23 Qualcomm Incorporated Constrained reference picture sets in wavefront parallel processing of video data

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107343205A (zh) * 2016-04-28 2017-11-10 Zhejiang Dahua Technology Co., Ltd. Encoding method and encoding device for a long-term reference bitstream
CN107343205B (zh) * 2016-04-28 2019-07-16 Zhejiang Dahua Technology Co., Ltd. Encoding method and encoding device for a long-term reference bitstream
CN113362233A (zh) * 2020-03-03 2021-09-07 Zhejiang Uniview Technologies Co., Ltd. Picture processing method, apparatus, device, system, and storage medium
CN113362233B (zh) * 2020-03-03 2023-08-29 Zhejiang Uniview Technologies Co., Ltd. Picture processing method, apparatus, device, system, and storage medium
WO2023078048A1 (fr) * 2021-11-06 2023-05-11 ZTE Corporation Video bitstream encapsulation method and apparatus, video bitstream decoding method and apparatus, and video bitstream access method and apparatus

Similar Documents

Publication Publication Date Title
JP6222589B2 (ja) Decoding method and decoding device
JP6210248B2 (ja) Moving picture encoding method and moving picture encoding device
JP6394966B2 (ja) Encoding method, decoding method, encoding device, and decoding device using temporal motion vector prediction
WO2013035313A1 (fr) Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
JP2019169975A (ja) Transmission method and transmission device
WO2013114860A1 (fr) Image encoding and decoding methods, image encoding and decoding devices, and image encoding/decoding device
JP6004375B2 (ja) Image encoding method and image decoding method
JP6414712B2 (ja) Moving picture encoding method, moving picture decoding method, moving picture encoding device, and moving picture decoding method using many reference pictures
WO2012023281A1 (fr) Video image decoding method, video image encoding method, video image decoding device, video image encoding device
JP6587046B2 (ja) Image encoding method, image decoding method, image encoding device, and image decoding device
KR102130046B1 (ko) Image decoding method, image encoding method, image decoding device, image encoding device, and image encoding/decoding device
WO2012120840A1 (fr) Image decoding method, image encoding method, image decoding device, and image encoding device
JP2013187905A (ja) Method and apparatus for encoding and decoding video
JP6483028B2 (ja) Image encoding method and image encoding device
JP2015506596A (ja) Moving picture encoding method, moving picture encoding device, moving picture decoding method, and moving picture decoding device
JP5873029B2 (ja) Moving picture encoding method and moving picture decoding method
WO2013164903A1 (fr) Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding and decoding device
JP6365924B2 (ja) Image decoding method and image decoding device
JP5680812B1 (ja) Image encoding method, image decoding method, image encoding device, and image decoding device
WO2016002140A1 (fr) Image encoding method, image decoding method, image encoding device, and image decoding device
WO2011132400A1 (fr) Image encoding method and image decoding method
JP2014039252A (ja) Image decoding method and image decoding device
WO2013136678A1 (fr) Image decoding device and image decoding method
JP2015180038A (ja) Image encoding device, image decoding device, image processing system, image encoding method, and image decoding method
JP2018142752A (ja) Image encoding method, image decoding method, image encoding device, and image decoding device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15814291

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15814291

Country of ref document: EP

Kind code of ref document: A1