WO2017038800A1 - 送信装置、送信方法、受信装置および受信方法 - Google Patents
- Publication number
- WO2017038800A1 (PCT/JP2016/075284)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image data
- frame rate
- stream
- processing
- picture
Classifications (all under H04N — Pictorial communication, e.g. television)
- H04N19/31 — coding/decoding of digital video signals using hierarchical techniques, e.g. scalability, in the temporal domain
- H04N19/10 — coding/decoding of digital video signals using adaptive coding
- H04N19/46 — embedding additional information in the video signal during the compression process
- H04N19/587 — predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
- H04N19/70 — syntax aspects related to video coding, e.g. related to compression standards
- H04N19/85 — pre-processing or post-processing specially adapted for video compression
- H04N21/234327 — processing of video elementary streams by decomposing into layers, e.g. base layer and one or more enhancement layers
- H04N21/234381 — processing of video elementary streams by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
Definitions
- the present technology relates to a transmitting apparatus, a transmitting method, a receiving apparatus, and a receiving method, and more particularly to a transmitting apparatus that transmits moving image data of a high frame rate.
- the high frame rate is several times, several tens of times, or even several hundred times the normal frame rate.
- in frame interpolation using high-sharpness images captured with a high-speed frame shutter, there is a large difference between the cases where the motion vector search matches and where it does not, and that difference is displayed as significant image quality degradation.
- high load operations are required to improve the accuracy of motion vector search, which affects the receiver cost.
- the present applicant has previously proposed a technology for converting material based on images taken with a high-speed frame shutter so that a conventional receiver that decodes the normal frame rate can display the images at a certain quality or better (Patent Document 1).
- An object of the present technology is to favorably transmit image data of a normal frame rate and a high frame rate.
- an image encoding unit obtains a basic stream that includes, as access units, the encoded image data of each picture of basic-frame-rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high-frame-rate image data, and an extension stream that includes, as access units, the encoded image data of each picture of the high-frame-rate image data;
- a transmission unit is configured to transmit a container of a predetermined format including the basic stream and the extension stream.
- a basic stream and an extension stream are obtained by the image coding unit.
- the basic stream is obtained by performing encoding processing on image data of a basic frame rate obtained by performing mixing processing in units of two pictures that are temporally continuous in image data of a high frame rate.
- the extension stream is obtained by performing encoding processing on image data of a high frame rate.
- the transmitter transmits a container of a predetermined format including the basic stream and the extension stream.
- a basic stream including basic-frame-rate image data obtained by mixing processing in units of two temporally consecutive pictures of high-frame-rate image data is transmitted. Therefore, a receiver capable of processing basic-frame-rate image data can process this basic stream to obtain basic-frame-rate image data and display a smooth moving image, and an image quality problem caused by low-load frame interpolation in the display processing can be avoided.
- an extension stream including high-frame-rate image data is also transmitted. Therefore, a receiver capable of processing high-frame-rate image data can process this extension stream to obtain high-frame-rate image data, enabling good display of high-frame-rate images.
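The mixing processing described above can be sketched in a few lines. This is an illustrative Python sketch, not the patent's implementation: it assumes simple linear mixing of each pair of consecutive 120 fps pictures with coefficients alpha and beta, and scalars stand in for whole pictures.

```python
def mix_to_basic_rate(frames_120, alpha=0.5, beta=0.5):
    """Mix each unit of two temporally consecutive 120 fps frames (A, B)
    into one basic-frame-rate (60 fps) frame: alpha*A + beta*B."""
    assert len(frames_120) % 2 == 0, "expects complete two-picture units"
    return [alpha * a + beta * b
            for a, b in zip(frames_120[0::2], frames_120[1::2])]

# Scalars stand in for whole pictures here:
print(mix_to_basic_rate([10, 20, 30, 50]))  # [15.0, 40.0]
```

The base sequence keeps a full temporal footprint of the original, which is why the base-layer image looks smooth on a 60 fps receiver.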
- the image coding unit applies predictive coding processing of basic-frame-rate image data to the basic-frame-rate image data to obtain the basic stream. It also performs the reverse of the mixing process on the basic-frame-rate image data using the high-frame-rate image data, and, when the high-frame-rate image data is the image data of one picture in a unit of two temporally consecutive pictures, acquires the image data of the other picture as image data after mixture compensation. The high-frame-rate image data may then be subjected to predictive coding processing against this mixture-compensated image data to obtain the extension stream.
- the image data after the mixture compensation is used as the reference image data, so that the prediction residual can be reduced.
- the image coding unit may be configured to obtain, for each prediction block of the high-frame-rate image data, image data in a range larger than the prediction block as the mixture-compensated image data. In this way, motion compensation can be performed favorably even when mixture-compensated image data is used as reference image data.
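The mixture compensation follows directly from the mixing equation: if a mixed picture is M = alpha*A + beta*B and one of the pair is known, the other can be solved for. A minimal sketch, assuming linear mixing with known coefficients (the function and parameter names are illustrative, not the patent's notation):

```python
def mixture_compensation(mixed, known, known_is_first, alpha=0.5, beta=0.5):
    """Recover the other picture of a two-picture unit.

    mixed = alpha*A + beta*B. If known_is_first, 'known' is A and B is
    returned; otherwise 'known' is B and A is returned. This is the
    reverse of the mixing process, so the result can serve as reference
    image data with a small prediction residual."""
    if known_is_first:
        return (mixed - alpha * known) / beta  # solve for B
    return (mixed - beta * known) / alpha      # solve for A

# M = 0.5*10 + 0.5*20 = 15; recover B given A:
print(mixture_compensation(15.0, 10.0, known_is_first=True))  # 20.0
```

The `known_is_first` flag plays the role of the phase information described below: the receiver must know which picture of the pair it holds to invert the mix correctly.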
- an information insertion unit may further be provided that inserts mixing ratio information of the mixing process into the layer of the extension stream.
- the basic stream and the extension stream have a NAL unit structure, and the information insertion unit inserts an SEI NAL unit having the mixing ratio information into the extension stream, or inserts the mixing ratio information into a PPS NAL unit of the extension stream.
- the mixing ratio information is inserted into the layer of the extension stream, so that on the receiving side, for example, it is possible to easily and appropriately perform the reverse processing of the mixing processing using this mixing ratio information.
- an information insertion unit may further be provided that inserts, into each access unit of the extension stream, phase information indicating which of the two temporally consecutive pictures the access unit corresponds to.
- an information insertion unit may further be provided that inserts, into the layer of the container, identification information indicating that the image data included in the basic stream is image data obtained by performing the mixing process. In this case, the receiving side can easily recognize from the identification information that the image data included in the basic stream is image data obtained by performing the mixing process.
- a receiving unit receives a container of a predetermined format including a basic stream and an extension stream. The basic stream was obtained by applying predictive coding processing of basic-frame-rate image data to basic-frame-rate image data obtained by mixing processing in units of two temporally consecutive pictures of high-frame-rate image data.
- the extension stream was obtained by applying, to the high-frame-rate image data, predictive coding processing against image data after mixture compensation: the reverse of the mixing process is performed on the basic-frame-rate image data using the high-frame-rate image data, and when the high-frame-rate image data is the image data of one picture in a unit of two temporally consecutive pictures, the mixture-compensated image data is the image data of the other picture.
- a processing unit processes only the basic stream to obtain basic-frame-rate image data, or processes both the basic stream and the extension stream to obtain high-frame-rate image data. When it decodes the extension stream, the processing unit performs the reverse of the mixing process on the basic-frame-rate image data obtained from the basic stream, using the high-frame-rate extended-frame image data obtained from the extension stream, acquires the mixture-compensated image data of the other picture of each pair, and uses this mixture-compensated image data as reference image data.
- the receiving unit receives a container of a predetermined format including the basic stream and the extension stream.
- the basic stream was obtained by applying predictive coding processing of basic-frame-rate image data to the basic-frame-rate image data obtained by mixing processing in units of two temporally consecutive pictures of the high-frame-rate image data.
- the extension stream was obtained by applying, to the high-frame-rate image data, predictive coding processing against image data after mixture compensation, which is the image data of the other picture when the high-frame-rate image data is the image data of one picture in a unit of two temporally consecutive pictures, obtained by performing the reverse of the mixing process on the basic-frame-rate image data using the high-frame-rate image data.
- the processing unit processes only the basic stream to obtain basic-frame-rate image data, or processes both the basic stream and the extension stream to obtain high-frame-rate image data.
- in the latter case, the processing unit performs the reverse of the mixing process on the basic-frame-rate image data obtained by processing the basic stream, using the high-frame-rate extended-frame image data obtained by processing the extension stream. When the high-frame-rate image data is the image data of one picture in a unit of two temporally consecutive pictures, the image data after mixture compensation, which is the image data of the other picture, is acquired and used as reference image data.
- when the extension stream is decoded, the image data after mixture compensation is used as reference image data in this way. Therefore, the extension stream can be decoded correctly, and the image data of the high-frame-rate extended frames can be obtained favorably.
- mixing ratio information of the mixing process may be inserted in the layer of the extension stream, and the processing unit may use this mixing ratio information when performing the reverse of the mixing process. By using the mixing ratio information inserted in the layer of the extension stream, the reverse processing can be performed easily and appropriately.
- phase information indicating which of two temporally consecutive pictures each access unit corresponds to may be inserted in each access unit of the extension stream, and the processing unit may use this phase information when performing the reverse of the mixing process.
- a receiving unit receives a container of a predetermined format including a basic stream and an extension stream. The basic stream is obtained by encoding basic-frame-rate image data obtained by mixing processing in units of two temporally consecutive pictures of high-frame-rate image data.
- the extension stream is obtained by performing encoding processing on the image data of the high frame rate.
- the receiving apparatus further includes a processing unit that processes only the basic stream to obtain basic-frame-rate image data, or processes both the basic stream and the extension stream to obtain high-frame-rate image data.
- the receiving unit receives a container of a predetermined format including the basic stream and the extension stream.
- the basic stream is obtained by performing encoding processing on image data of a basic frame rate obtained by performing mixing processing in units of two pictures that are temporally continuous in image data of a high frame rate.
- the extension stream is obtained by performing encoding processing on image data of a high frame rate.
- the processing unit processes only the basic stream to obtain basic-frame-rate image data, or processes both the basic stream and the extension stream to obtain high-frame-rate image data.
- both the basic stream and the extension stream are processed to obtain high-frame-rate image data. Therefore, a receiver capable of processing high-frame-rate image data can process the extension stream to obtain high-frame-rate image data and display high-frame-rate images well.
- according to the present technology, it is possible to favorably transmit image data of a normal frame rate (basic frame rate) and of a high frame rate.
- the effects described here are not necessarily limiting, and any of the effects described in the present disclosure may be obtained.
- FIG. 1 shows a configuration example of a transmission / reception system 10 as an embodiment.
- the transmission / reception system 10 is configured to include a transmission device 100 and a reception device 200.
- the transmitting apparatus 100 transmits a transport stream TS as a container on broadcast waves.
- the transport stream TS includes a basic stream (basic video stream) and an extension stream (extended video stream) obtained by processing high-frame-rate image data (moving image data), 120 fps in this embodiment.
- the base stream and the extension stream are assumed to have the NAL unit structure.
- the basic stream is obtained by applying predictive coding processing of basic-frame-rate (normal-frame-rate) image data to the basic-frame-rate image data obtained by mixing processing in units of two temporally consecutive pictures of the high-frame-rate image data.
- This basic stream includes, as an access unit, encoded image data for each picture of image data of a basic frame rate.
- the image data of this basic frame rate is image data of 60 fps.
- the extension stream is obtained by adaptively applying, to the high-frame-rate image data, either predictive coding processing against the image data after mixture compensation or predictive coding processing within the high-frame-rate image data.
- the extended stream includes, as an access unit, encoded image data for each picture of high frame rate image data.
- the image data after mixture compensation is obtained by performing the reverse of the mixing process on the basic-frame-rate image data using the high-frame-rate image data; when the high-frame-rate image data is the image data of one picture in a unit of two temporally consecutive pictures, the mixture-compensated image data is the image data of the other picture.
- the high frame rate image data is an original image sequence as shown in FIG. 2 (a).
- the basic-frame-rate image data obtained by mixing processing in units of two temporally consecutive pictures has a shutter aperture ratio of 1 (100%) with respect to the time covered by the original image sequence.
- in contrast, basic-frame-rate image data obtained by extracting the image data of one picture in each unit of two consecutive pictures has a shutter aperture ratio of 1/2 (50%) with respect to the time covered by the original image sequence, as shown in FIG. 2 (b).
- Mixing ratio information in the mixing process is inserted into the layer of the extension stream.
- a SEI NAL unit having mixing ratio information is inserted into the extension stream, or mixing ratio information is inserted into PPS NAL units of the extension stream.
- on the reception side, it is possible to easily and appropriately perform the reverse of the mixing process using this mixing ratio information.
- in each access unit of the extension stream, phase information indicating which of the two temporally consecutive pictures the access unit corresponds to is inserted.
- SEI NAL units having phase information are inserted into each access unit of the extension stream, or phase information is inserted into PPS NAL units of each access unit of the extension stream.
- in the layer of the container, identification information indicating that the image data included in the basic stream is image data obtained by performing the mixing process is inserted.
- in this embodiment, a descriptor (Descriptor) in which the identification information is described is inserted into the video elementary stream loop arranged corresponding to the extension stream under the program map table (PMT: Program Map Table). On the reception side, it can be easily recognized from the identification information that the image data included in the basic stream is image data obtained by performing the mixing process.
- the receiving device 200 receives the above-described transport stream TS sent from the transmitting device 100 on a broadcast wave.
- when the receiving apparatus 200 has decoding capability capable of processing 60 fps image data, it processes only the basic stream included in the transport stream TS to obtain basic-frame-rate (60 fps) image data and reproduces the image.
- when the receiving apparatus 200 has decoding capability capable of processing 120 fps image data, it processes both the basic stream and the extension stream included in the transport stream TS to obtain high-frame-rate (120 fps) image data and reproduces the image.
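This capability-dependent selection can be sketched as a simple dispatch. The function and stream names below are purely illustrative stand-ins, not part of the patent:

```python
def select_streams(capability_fps, basic_stream, extension_stream=None):
    """Decide which elementary streams a receiver decodes.

    A 120 fps capable receiver decodes basic + extension streams;
    a 60 fps receiver decodes only the basic stream."""
    if capability_fps >= 120 and extension_stream is not None:
        return "120fps", [basic_stream, extension_stream]
    return "60fps", [basic_stream]

print(select_streams(120, "STb", "STe"))  # ('120fps', ['STb', 'STe'])
print(select_streams(60, "STb", "STe"))   # ('60fps', ['STb'])
```

Because the base stream is independently decodable, a legacy 60 fps receiver never needs to touch the extension stream.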
- when the receiving apparatus 200 decodes the extension stream to obtain high-frame-rate image data, it uses as reference image data the image data after mixture compensation: the reverse of the mixing process is applied to the basic-frame-rate image data obtained by processing the basic stream, using the high-frame-rate image data obtained by processing the extension stream, and when the high-frame-rate image data is the image data of one picture in a unit of two temporally consecutive pictures, the mixture-compensated image data is the image data of the other picture.
- at that time, the mixing ratio information inserted in the layer of the extension stream and the phase information inserted in each access unit of the extension stream are used.
- as a result, the reverse of the mixing process is performed easily and appropriately, and the extension stream is decoded well.
- FIG. 3 shows an outline of processing of the transmission device 100 and the reception device 200.
- the transmission device 100 receives image data P of a 120P sequence.
- the preprocessor 101 processes the image data P to obtain image data Qb at a basic frame rate (60 fps) and image data Qe at a high frame rate (120 fps).
- the encoder 102 performs encoding processing on the image data Qb and Qe to obtain the basic stream STb and the extension stream STe. These two streams STb and STe are transmitted from the transmitting apparatus 100 to the receiving apparatus 200.
- in a receiving apparatus capable of processing 120 fps image data, the decoder 203 applies decoding processing to the two streams STb and STe, and the high-frame-rate image data Qe' is obtained as 120P-sequence image data. On the other hand, in the receiving apparatus 200B capable of processing 60 fps image data, the decoder 203B decodes the stream STb to obtain the basic-frame-rate image data Qb' as 60P-sequence image data.
- FIG. 4 shows a configuration example of the transmission apparatus 100.
- the transmitting apparatus 100 includes a pre-processor 101, an encoder 102, a multiplexer 103, and a transmitting unit 104.
- the pre-processor 101 inputs the image data P of 120 fps and outputs the image data Qb of the basic frame rate and the image data Qe of the high frame rate.
- the pre-processor 101 performs mixing processing in units of two pictures that are temporally continuous in the image data P of 120 fps to obtain image data Qb of a basic frame rate. Further, the pre-processor 101 outputs the 120 fps image data P as it is as the high frame rate image data Qe.
- FIG. 5 shows an example of the configuration of the pre-processor 101.
- the pre-processor 101 has delay circuits 111 and 114, each delaying the 120 fps image data by one frame, an arithmetic circuit 112, and a latch circuit 113 that latches with a 60 Hz latch pulse synchronized with the delay circuits 111 and 114.
- the arithmetic circuit 112 adds the output of the delay circuit 111 and the image data P of 120 fps described above.
- when A and B denote the image data of two temporally consecutive pictures of the image data P, a mixed output "α*A + β*B" is obtained from the arithmetic circuit 112 at the timing when the output of the delay circuit 111 becomes A.
- the output of the arithmetic circuit 112 is input to the latch circuit 113.
- the latch circuit 113 latches the output of the arithmetic circuit 112 with the 60 Hz latch pulse, and the basic-frame-rate (60 fps) image data Qb, in which the image data P has been subjected to mixing processing in units of two temporally consecutive pictures, is obtained. Further, the 120 fps image data P is delayed by one frame period by the delay circuit 114 so that its timing is aligned with the basic-frame-rate image data Qb, and is output as the high-frame-rate (120 fps) image data Qe.
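The delay/arithmetic/latch behavior of FIG. 5 can be emulated frame by frame. The sketch below is a software model under the assumption of linear mixing coefficients, not a circuit-accurate description:

```python
def preprocessor(frames_120, alpha=0.5, beta=0.5):
    """Software model of pre-processor 101.

    Returns (Qb, Qe): Qb is the 60 fps mixed sequence (the 60 Hz latch
    keeps one arithmetic-circuit output per two-picture unit); Qe is the
    120 fps input passed through (the aligning one-frame delay is
    omitted in this model)."""
    qb = []
    delayed = None  # output of the one-frame delay circuit
    for i, cur in enumerate(frames_120):
        if delayed is not None and i % 2 == 1:
            # arithmetic circuit: delayed frame is A, current frame is B
            qb.append(alpha * delayed + beta * cur)
        delayed = cur
    return qb, list(frames_120)

qb, qe = preprocessor([10, 20, 30, 50])
print(qb, qe)  # [15.0, 40.0] [10, 20, 30, 50]
```

The latch at 60 Hz is what halves the frame rate: the arithmetic circuit produces a mixed value every 120 fps period, but only every second value is captured into Qb.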
- FIG. 6A and 6B schematically show an example of the relationship between input data (image data P) of the pre-processor 101 and output data (image data Qb, Qe) of the pre-processor 101.
- the encoder 102 performs encoding processing on the image data Qb and Qe obtained by the pre-processor 101 to generate a basic stream and an extension stream.
- the encoder 102 subjects the image data Qb of the basic frame rate to predictive coding processing of the image data of this basic frame rate to obtain the basic stream STb.
- the encoder 102 adaptively applies, to the high-frame-rate image data Qe, either predictive coding processing against the image data after mixture compensation or predictive coding processing within the high-frame-rate image data, to obtain the extension stream STe.
- at that time, the encoder 102 performs the reverse of the mixing process on the basic-frame-rate image data using the high-frame-rate image data, and, when the high-frame-rate image data is the image data of one picture in a unit of two temporally consecutive pictures, acquires the image data of the other picture as the image data after mixture compensation.
- FIG. 7A shows a sequence of image data Qb of a basic frame rate (60 fps) input to the encoder 102 and image data Qe of a high frame rate (120 fps).
- FIG. 7 (b) shows the sequence of coded pictures in the encoder 102.
- the image data Qb of the basic frame rate is encoded as a base layer (Base Layer), and the image data Qe of the high frame rate is encoded as an enhanced layer.
- FIG. 8 shows a layer configuration example and a prediction example.
- FIG. 8A shows an example of a layer configuration including one base layer (Base Layer) and one extension layer (Ext 1 Layer). [P21, P22] and [P23, P24] in the extension layer (Ext 1 Layer) indicate two temporally consecutive picture units. Also, in the base layer (Base Layer), [P11] and [P12] respectively indicate pictures obtained by performing mixing processing in units of two pictures that are temporally continuous.
- each picture of the extension layer (Ext 1 Layer) is encoded with reference to the corresponding picture of the base layer (Base Layer).
- FIG. 8B shows an example of the layer configuration including one base layer (Base Layer) and two extension layers (Ext 1 Layer and Ext 2 Layer).
- the extension layer 1 (Ext 1 Layer) is composed of the pictures at odd positions (Odd) in the units of two temporally consecutive pictures, and the extension layer 2 (Ext 2 Layer) is composed of the pictures at even positions.
- the encoding timing of the pictures of the enhancement layer 1 (Ext 1 Layer) is the same as that of the pictures of the base layer (Base Layer), but the encoding timing of the pictures of the enhancement layer 2 (Ext 2 Layer) falls midway between the encoding timings of the pictures of the base layer (Base Layer).
- each picture of the enhancement layer 1 (Ext 1 Layer) is encoded with reference to the corresponding picture of the base layer (Base Layer).
- each picture of the extension layer 2 (Ext 2 Layer) is encoded with reference to the corresponding picture of the base layer (Base Layer) or with reference to the corresponding picture of the extension layer 1 (Ext 1 Layer).
- FIG. 8C shows a layer configuration example including one base layer (Base Layer) and two extension layers (Ext 1 Layer and Ext 2 Layer) as in the example of FIG. 8B.
- in this example, the encoding timing of the pictures of the extension layer 1 is the same as that of the pictures of the base layer (Base Layer), and the encoding timing of the pictures of the extension layer 2 is also the same as that of the pictures of the base layer (Base Layer).
- each picture of the enhancement layer 1 (Ext 1 Layer) is encoded with reference to the corresponding picture of the base layer (Base Layer).
- each picture of the extension layer 2 (Ext 2 layer) is encoded with reference to the corresponding picture of the extension layer 1 (Ext 1 layer).
- FIG. 9 shows a configuration example of a coding processing portion of the encoder 102.
- This configuration example corresponds to the case of a layer configuration example (see FIG. 8A) including a base layer (Base Layer) and one extension layer (Ext 1 Layer).
- the encoder 102 includes a blocking circuit 121, a subtraction circuit 122, a motion prediction/motion compensation circuit 123, an integer conversion/quantization circuit 124, an inverse quantization/inverse integer conversion circuit 125, an addition circuit 126, a loop filter 127, a memory 128, and an entropy coding circuit 129.
- the encoder 102 also includes a blocking circuit 131, a subtraction circuit 132, a motion prediction / motion compensation circuit 133, an inter-layer prediction / inter-layer compensation circuit 134, a mixing compensation circuit 135, switching circuits 136 and 137, an integer conversion / quantization circuit 138, an inverse quantization / inverse integer conversion circuit 139, an addition circuit 140, a loop filter 141, a memory 142, and an entropy coding circuit 143.
- the image data Qb at the basic frame rate is input to the blocking circuit 121.
- the image data of each picture making up the image data Qb is divided into blocks (MB: macro block) in units of encoding processing.
- Each block is sequentially supplied to the subtraction circuit 122.
- the motion prediction / motion compensation circuit 123 obtains a motion compensated prediction reference block for each block based on the image data of the reference picture stored in the memory 128.
- Each prediction reference block obtained by the motion prediction / motion compensation circuit 123 is sequentially supplied to the subtraction circuit 122.
- the subtraction circuit 122 subtracts the prediction reference block from each block obtained by the blocking circuit 121 to obtain a prediction error.
- the prediction error for each block is quantized after being integer-transformed (for example, DCT transformed) by the integer transform / quantization circuit 124.
- the quantization data for each block obtained by the integer conversion / quantization circuit 124 is supplied to the inverse quantization / inverse integer conversion circuit 125.
- the inverse quantization / inverse integer conversion circuit 125 performs inverse quantization on the quantized data and further performs inverse integer conversion to obtain a prediction residual.
- this prediction residual is supplied to the addition circuit 126.
- the addition circuit 126 adds the motion-compensated prediction reference block to the prediction residual to obtain a block. This block is stored in the memory 128 after the quantization noise is removed by the loop filter 127.
- the quantized data for each block obtained by the integer conversion / quantization circuit 124 is supplied to the entropy coding circuit 129 and entropy-coded, and the basic stream STb, which is the predictive coding result of the image data Qb at the basic frame rate, is obtained. Note that information such as the motion vector of each block is added to the basic stream STb as MB header information for decoding on the receiving side.
- the high frame rate image data Qe is input to the blocking circuit 131.
- the image data of each picture making up the image data Qe is divided into blocks (MB: macro blocks) in units of encoding processing. Each block is sequentially supplied to the subtraction circuit 132.
- the inter-layer prediction / inter-layer compensation circuit 134 is selectively supplied, via the switching circuit 136, with either the image data after mixing compensation obtained by the mixing compensation circuit 135 or the image data stored in the memory 128 as the image data of the reference picture. In this case, even when predictive coding with the base layer is performed, processing that bypasses the mixing compensation circuit 135 is also possible.
- the inter-layer prediction / inter-layer compensation circuit 134 obtains a motion-compensated prediction reference block for each block based on the image data of the reference picture.
- Image data (image data of a basic frame rate) of the reference picture is supplied from the memory 128 to the mixture compensation circuit 135. Further, the output of the blocking circuit 131, that is, the image data of the picture of the prediction source (image data of high frame rate) is supplied to the mixture compensation circuit 135. The processing of the picture of the base layer and the processing of the picture of the enhancement layer are performed synchronously.
- in the mixing compensation circuit 135, the reverse processing of the mixing process is applied to the image data of the basic frame rate using the image data of the high frame rate: when the image data of the high frame rate is the image data of one picture in a unit of two temporally consecutive pictures, the image data corresponding to the other picture is obtained as the image data after mixing compensation.
- here, image data over a range equal to or larger than the block is acquired as the image data after mixing compensation. That is, in the mixing compensation process, data around the block is also included in the calculation, according to the shift range of the prediction target indicated by the motion vector.
- FIG. 10 shows a configuration example of the mixing compensation circuit 135.
- the mixing compensation circuit 135 includes multiplication units 151 and 152 and an addition unit 153.
- in the multiplication unit 151, the image data of the reference picture (the image data [αA + βB] of the basic frame rate) is multiplied by the coefficient (1/β) or (1/α), depending on whether the image data of the high frame rate is the image data of the picture at the odd (Odd) or even (Even) position in the unit of two temporally consecutive pictures. In the multiplication unit 152, the image data of the prediction-source picture (the image data [A] or [B] of the high-frame-rate extension frame) is correspondingly multiplied by the coefficient (−α/β) or (−β/α). The addition unit 153 adds the output of the multiplication unit 151 and the output of the multiplication unit 152 to obtain the image data [B] or [A] after mixing compensation, since B = (1/β)[αA + βB] − (α/β)A and A = (1/α)[αA + βB] − (β/α)B.
- note that α is the coefficient by which the image data A at the odd (Odd) position in the unit of two temporally consecutive pictures is multiplied in the mixing process, and β is the coefficient by which the image data B at the even (Even) position in the unit of two temporally consecutive pictures is multiplied in the mixing process (see FIG. 5).
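The mixing and mixing-compensation relationships above can be sketched numerically. The following is a hedged illustration only (the function names and list-based frame model are assumptions, not the patent's implementation): the mixing process forms [αA + βB] from the two pictures of a pair, and the compensation recovers the other picture from the blended reference and the prediction-source picture.

```python
# Hedged sketch (assumed from the described coefficients): the mixing process and
# its reverse. Frames are modeled as flat lists of samples.

def mix(a, b, alpha, beta):
    """Mixing process: basic-frame-rate picture = alpha*A + beta*B."""
    return [alpha * x + beta * y for x, y in zip(a, b)]

def mixing_compensation(blended, source, alpha, beta, source_is_odd):
    """Reverse the mixing process.

    If the prediction source is the odd-position picture A, output B:
        B = (1/beta)*(alpha*A + beta*B) + (-alpha/beta)*A
    If it is the even-position picture B, output A:
        A = (1/alpha)*(alpha*A + beta*B) + (-beta/alpha)*B
    """
    if source_is_odd:
        c_ref, c_src = 1.0 / beta, -alpha / beta
    else:
        c_ref, c_src = 1.0 / alpha, -beta / alpha
    return [c_ref * r + c_src * s for r, s in zip(blended, source)]

alpha, beta = 0.5, 0.5          # example mixing ratio
A = [10.0, 20.0, 30.0]          # odd-position picture
B = [12.0, 18.0, 33.0]          # even-position picture
blended = mix(A, B, alpha, beta)

print(mixing_compensation(blended, A, alpha, beta, source_is_odd=True))   # [12.0, 18.0, 33.0] (= B)
print(mixing_compensation(blended, B, alpha, beta, source_is_odd=False))  # [10.0, 20.0, 30.0] (= A)
```

With exact coefficients the original pictures are recovered exactly; in practice quantization of α and β (see the 4-bit fields described later) limits the precision of the recovery.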
- the prediction reference block for intra-layer prediction obtained by the motion prediction / motion compensation circuit 133 or the prediction reference block for inter-layer prediction obtained by the inter-layer prediction / inter-layer compensation circuit 134 is selected in block units or picture units by the switching circuit 137 and supplied to the subtraction circuit 132. For example, the switching circuit 137 switches so that the residual component becomes small. Also, for example, the switching circuit 137 is forcibly switched to one side depending on whether or not it is the boundary of a sequence.
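The "switch so that the residual component becomes small" policy of the switching circuit can be sketched as below. This is a hedged illustration: the SAD cost and the function name are assumptions for illustration, not the patent's stated selection criterion.

```python
# Hedged sketch: choose, per block, the prediction reference (intra-layer or
# inter-layer) that leaves the smaller residual component, measured here as the
# sum of absolute differences (SAD). Blocks are modeled as flat sample lists.

def select_reference(block, intra_ref, inter_ref):
    def sad(ref):
        return sum(abs(x - r) for x, r in zip(block, ref))
    # Prefer the candidate with the smaller residual; ties go to intra-layer.
    return intra_ref if sad(intra_ref) <= sad(inter_ref) else inter_ref

block = [5, 5, 6, 7]
intra_candidate = [5, 5, 6, 6]   # SAD = 1
inter_candidate = [4, 4, 4, 4]   # SAD = 1 + 1 + 2 + 3 = 7
print(select_reference(block, intra_candidate, inter_candidate))  # [5, 5, 6, 6]
```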
- the subtraction circuit 132 subtracts the prediction reference block from each block obtained by the blocking circuit 131 to obtain a prediction error.
- the prediction error for each block is quantized after integer conversion (eg, DCT conversion) by the integer conversion / quantization circuit 138.
- the quantized data for each block obtained by the integer conversion / quantization circuit 138 is supplied to the inverse quantization / inverse integer conversion circuit 139.
- the quantized data is subjected to inverse quantization, and then inverse integer conversion is performed to obtain a prediction residual.
- this prediction residual for each block is supplied to the addition circuit 140.
- the prediction reference block selected by the switching circuit 137 is supplied to the addition circuit 140.
- the addition circuit 140 adds the motion compensated prediction reference block to the prediction residual to obtain a block. This block is stored in the memory 142 after the quantization noise is removed by the loop filter 141.
- the quantized data for each block obtained by the integer conversion / quantization circuit 138 is supplied to the entropy coding circuit 143 and entropy-coded, and the extension stream STe, which is the predictive coding result of the high-frame-rate image data Qe, is obtained. Note that information such as the motion vector of each block and the switching of prediction reference blocks is added to the extension stream STe as MB header information for decoding on the receiving side.
- FIG. 11 also shows a configuration example of the encoding processing portion of the encoder 102.
- This configuration example corresponds to the case of a layer configuration including a base layer (Base Layer) and two enhancement layers (Ext 1 Layer and Ext 2 Layer) (see FIGS. 8B and 8C).
- In FIG. 11, parts corresponding to those in FIG. 9 are assigned the same reference numerals, and detailed description thereof is omitted as appropriate.
- the encoder 102 includes a blocking circuit 121, a subtraction circuit 122, a motion prediction / motion compensation circuit 123, an integer conversion / quantization circuit 124, an inverse quantization / inverse integer conversion circuit 125, an addition circuit 126, a loop filter 127, a memory 128, and an entropy coding circuit 129.
- the encoder 102 also includes a switching circuit 130, a blocking circuit 131A, a subtraction circuit 132A, a motion prediction / motion compensation circuit 133A, an inter-layer prediction / inter-layer compensation circuit 134A, a mixing compensation circuit 135A, and switching circuits 136A and 137A.
- the encoder 102 includes a blocking circuit 131B, a subtraction circuit 132B, a motion prediction / motion compensation circuit 133B, an inter-layer prediction / inter-layer compensation circuit 134B, a mixing compensation circuit 135B, and switching circuits 136B and 137B.
- the encoding process for the image data Qb of the basic frame rate, that is, the encoding process of the base layer (Base Layer), is the same as in the configuration example of the encoding processing portion of the encoder 102 in FIG. 9.
- the coding process for the high frame rate image data Qe is divided into the coding process of the enhancement layer 1 and the coding process of the enhancement layer 2.
- the image data of each picture of the high frame rate image data Qe is divided by the switching circuit 130 into the image data of the picture handled in the encoding process of the enhancement layer 1 and the image data of the picture handled in the encoding process of the enhancement layer 2 .
- in units of two temporally consecutive pictures, the image data A of the picture at the odd (Odd) position is supplied to the system of the encoding process of enhancement layer 1, and the image data B of the picture at the even (Even) position is supplied to the system of the encoding process of enhancement layer 2.
- a system of encoding processing of the enhancement layer 1 is configured by each circuit indicated by a code appended with “A”.
- the system of the encoding process of enhancement layer 1 is configured in the same manner as the system of the encoding process of the enhancement layer in the configuration example of the encoding processing portion of the encoder 102 in FIG. 9, and a coded stream of enhancement layer 1 is obtained.
- in the system of the encoding process of enhancement layer 1, predictive coding with the base layer or predictive coding within enhancement layer 1 is performed. Even when predictive coding with the base layer is performed, processing that bypasses the mixing compensation circuit 135A is also possible by switching of the switching circuit 136A.
- a system of encoding processing of the enhancement layer 2 is configured by each circuit indicated by a code appended with “B”.
- the system of the encoding process of enhancement layer 2 is configured in the same manner as the system of the encoding process of the enhancement layer in the configuration example of the encoding processing portion of the encoder 102 in FIG. 9, and a coded stream of enhancement layer 2 is obtained.
- in the system of the encoding process of enhancement layer 2, the predictive coding process with the base layer, the predictive coding process with enhancement layer 1, or the predictive coding process within enhancement layer 2 is performed.
- when predictive coding with the base layer is performed, the switching circuit 145 selects the output of the memory 128; when predictive coding with enhancement layer 1 is performed, the switching circuit 145 selects the output of the memory 142A. Even when predictive coding with the base layer is performed, processing that bypasses the mixing compensation circuit 135B is also possible by switching of the switching circuit 136B.
- the coded stream of enhancement layer 1 obtained by the entropy coding circuit 143A and the coded stream of enhancement layer 2 obtained by the entropy coding circuit 143B are combined, and the extension stream STe, which is the predictive coding result of the high-frame-rate image data Qe, is obtained.
- as an example, the case where the reference picture has been subjected to the mixing process and the case where it has not are shown in comparison.
- “(N) th” and “(n + 1) th” indicate pictures (frames) in a temporally adjacent relationship.
- the picture of "(n + 1) th” constitutes a picture of a prediction source
- the picture of "(n) th” constitutes a reference picture.
- the picture of “Blended (n) th” indicates a reference picture being subjected to the mixing process.
- a dash-dotted rectangular frame indicates the range of the prediction block (processing unit block), and in the reference picture, a dash-dotted rectangular frame indicates the reference block range corresponding to the prediction block range. In addition, in the reference picture, a dashed rectangular frame indicates the range of the reference block motion-compensated with the motion vector mv.
- the block of the processing unit is a 4 ⁇ 4 block. The unit of processing is not limited to this, and may be a block larger than this.
- when prediction is performed with motion-vector reference between the picture of “(n+1)th” and the picture of “(n)th”, the prediction residual “(n+1) − (n)” becomes zero. On the other hand, when prediction is performed with reference to the picture of “Blended (n)th”, the prediction residual “(n+1) − Blended (n)” is not zero, and some residual component occurs.
- An example of the mixing compensation is shown: the illustrated example corresponds to the case where the picture of the prediction source is "B" in the mixing compensation circuit 135 shown in FIG. 10. Description of the case where the picture of the prediction source is "A" is omitted.
- FIG. 14 shows an example of the prediction residual when the picture (image data) after the above-mentioned mixture compensation is used.
- when prediction is performed with motion-vector reference between the picture of "(n+1)th" and the picture "output (n)th" after mixing compensation, the prediction residual "(n+1) − (n)" becomes zero, as in the case in FIG. 12 where the mixing process is not performed.
- “(N) th” and “(n + 1) th” indicate pictures (frames) in a temporally adjacent relationship.
- the picture of "(n + 1) th” constitutes a picture of a prediction source
- the picture of "(n) th” constitutes a reference picture.
- the picture of “Blended (n) th” indicates a reference picture being subjected to the mixing process.
- a dash-dotted rectangular frame indicates the range of the prediction block (processing unit block), and in the reference picture, a dash-dotted rectangular frame indicates the reference block range corresponding to the prediction block range. In addition, in the reference picture, a dashed rectangular frame indicates the range of the reference block motion-compensated with the motion vector mv.
- the block of the processing unit is a 4 ⁇ 4 block. The unit of processing is not limited to this, and may be a block larger than this.
- when prediction is performed with motion-vector reference between the picture of “(n+1)th” and the picture of “(n)th”, the prediction residual “(n+1) − (n)” becomes zero. On the other hand, when prediction is performed with reference to the picture of “Blended (n)th”, the prediction residual “(n+1) − Blended (n)” is not zero, and some residual component occurs.
- the illustrated example corresponds to the case where the picture of the prediction source is "B" in the mixing compensation circuit 135 shown in FIG. 10; description of the case where the picture of the prediction source is "A" is omitted. In this case, the picture "(n)th" before the mixing process is obtained as the picture (image data) after the mixing compensation.
- FIG. 17 shows an example of the prediction residual when the picture (image data) after the above-described mixture compensation is used.
- when prediction is performed with motion-vector reference between the picture of "(n+1)th" and the picture "output (n)th" after mixing compensation, the prediction residual "(n+1) − (n)" becomes zero, as in the case in FIG. 15 where the mixing process is not performed.
- the prediction residual can be reduced by using the image data after mixture compensation as the image data of the reference picture.
- the encoder 102 inserts mixing ratio information in the mixing process into the layer of the enhancement stream.
- the mixing ratio information is used on the receiving side in the process of mixing compensation when decoding the extension stream.
- the encoder 102 inserts, into each access unit of the extension stream, phase information indicating which of two temporally consecutive pictures the access unit corresponds to. This phase information is used on the receiving side in the mixing compensation process when decoding the extension stream. That is, the coefficients in the mixing compensation process must be switched depending on which of the two temporally consecutive pictures the access unit corresponds to (see FIG. 10).
- SEI NAL units having mixing ratio information and phase information are inserted into each access unit of the extension stream, or mixing ratio information and phase information are inserted into PPS NAL units of each access unit of the extension stream.
- When inserting an SEI NAL unit having the mixing ratio information and phase information into each access unit of the extension stream, the encoder 102 inserts a newly defined inverse blending layer prediction SEI (inverse_blending_layer_prediction_SEI) into the “SEIs” portion of the access unit (AU).
- FIG. 18A shows a structural example (Syntax) of the inverse blending layer prediction SEI, and FIG. 18B shows the contents (Semantics) of the main information in the structural example.
- the 4-bit field of "blend_coef_alpha” indicates a coefficient ⁇ .
- a 4-bit field of "blend_coef_beta” indicates a coefficient ⁇ .
- a 1-bit field of "picture_phase” indicates the phase of the picture. For example, "1" indicates an odd (Odd) position, and "0" indicates an even (even) position.
- When inserting the mixing ratio information and phase information into the PPS NAL unit of each access unit of the extension stream, the encoder 102 defines them in the extension part of the PPS (Picture_parameter_set).
- FIG. 19(a) shows a structural example (Syntax) of the PPS, and FIG. 19(b) shows the contents (Semantics) of the main information in the structural example.
- the 1-bit field of “pps_blend_info_extention_flag” is flag information indicating whether mixing ratio information and phase information exist in the extension part. For example, “1" indicates that it exists, and "0" indicates that it does not exist.
- FIG. 19C shows a structural example (Syntax) of “pps_blend_info_extention ()”.
- the 4-bit field of "blend_coef_alpha” indicates a coefficient ⁇ .
- a 4-bit field of "blend_coef_beta” indicates a coefficient ⁇ .
- a 1-bit field of "picture_phase” indicates the phase of the picture. For example, “1" indicates an odd (Odd) position, and "0" indicates an even (even) position.
- the multiplexer 103 packetizes the basic stream STb and the extension stream STe generated by the encoder 102 into PES (Packetized Elementary Stream) packets, further packetizes them into transport packets, and multiplexes them to obtain the transport stream TS as a multiplexed stream.
- the multiplexer 103 inserts, into the layer of the transport stream TS, identification information indicating that the image data included in the basic stream is the image data obtained by performing the mixing process.
- the multiplexer 103 inserts a video scalability information descriptor (video_scalability_information_descriptor) to be newly defined into the video elementary stream loop arranged corresponding to the extension stream under the program map table.
- FIG. 20(a) shows a structural example (Syntax) of the video scalability information descriptor, and FIG. 20(b) shows the contents (Semantics) of the main information in the structural example.
- An 8-bit field of "video_scalability_information_descriptor_tag” indicates a descriptor type, which indicates that it is a video scalability information descriptor.
- An 8-bit field of “video_scalability_information_descriptor_length” indicates the length (size) of the descriptor, and indicates the number of subsequent bytes as the length of the descriptor.
- the 1-bit field of “temporal_scalable_flag” is flag information indicating whether it is a time scalable stream. For example, “1" indicates that it is time scalable, and “0" indicates that it is not time scalable.
- the 1-bit field of "picture_blending_for_base_stream_flag” is flag information indicating whether the mixing process of pictures is performed on the base stream. For example, “1” indicates that the mixing process is being performed, and "0" indicates that the mixing process is not being performed.
- a 4-bit field “blend_coef_alpha”, a 4-bit field “blend_coef_beta”, and a 1-bit field “picture_phase” are present.
- the field of “blend_coef_alpha” indicates the coefficient α, the field of “blend_coef_beta” indicates the coefficient β, and the field of “picture_phase” indicates the phase of the picture.
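A receiver-side parse of this descriptor, based only on the fields listed above, might look as follows. This is a hedged sketch: the placement of the two 1-bit flags within the descriptor body, the padding, and the tag value used in the sample bytes are assumptions for illustration, not the actual assigned values.

```python
# Hedged sketch: parse the described video scalability information descriptor.
# Bit layout within the body and the sample descriptor_tag are assumptions.

def parse_video_scalability_information_descriptor(data):
    descriptor_tag = data[0]                    # 8-bit descriptor type
    descriptor_length = data[1]                 # number of subsequent bytes
    body = data[2:2 + descriptor_length]
    return {
        "tag": descriptor_tag,
        "temporal_scalable_flag": (body[0] >> 7) & 1,
        "picture_blending_for_base_stream_flag": (body[0] >> 6) & 1,
        "blend_coef_alpha": (body[1] >> 4) & 0x0F,   # 4-bit coefficient alpha
        "blend_coef_beta": body[1] & 0x0F,           # 4-bit coefficient beta
        "picture_phase": (body[2] >> 7) & 1,         # 1-bit picture phase
    }

# Example bytes: placeholder tag 0x00, length 3, flags "11", alpha=8, beta=8, phase=1.
sample = bytes([0x00, 0x03, 0b11000000, 0x88, 0b10000000])
print(parse_video_scalability_information_descriptor(sample))
```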
- FIG. 21 shows a configuration example of the transport stream TS.
- the transport stream TS includes two video streams of a basic stream (base stream) STb and an enhancement stream (enhanced stream) STe. That is, in this configuration example, the PES packet "video PES1" of the basic stream STb is present, and the PES packet "video PES2" of the extension stream STe is present.
- Inverse blending layer prediction SEI (see FIG. 18A) is inserted into the encoded image data of each picture containerized by the PES packet “video PES2”. It is to be noted that, instead of inserting this inverse blending layer prediction SEI, there are also cases where insertion of mixing ratio information and phase information is performed in the extended portion of PPS.
- the transport stream TS includes a PMT (Program Map Table) as one of PSI (Program Specific Information).
- In the PMT, there is a program loop (Program loop) that describes information related to the entire program. In the PMT, there is also an elementary stream loop having information related to each video stream. In this configuration example, there is a video elementary stream loop "video ES1 loop" corresponding to the basic stream and a video elementary stream loop "video ES2 loop" corresponding to the extension stream.
- In "video ES1 loop", information such as the stream type and packet identifier (PID) is arranged corresponding to the basic stream (video PES1), and a descriptor describing information related to the video stream is also arranged. The stream type is "0x24", indicating the basic stream.
- In "video ES2 loop", information such as the stream type and packet identifier (PID) is arranged corresponding to the extension stream (video PES2), and a descriptor describing information related to the video stream is also arranged. The stream type is "0x2x", indicating the extension stream. As one of the descriptors, the video scalability information descriptor (see FIG. 20(a)) is inserted.
- the transmission unit 104 modulates the transport stream TS by a modulation scheme suitable for broadcasting such as QPSK / OFDM, for example, and transmits an RF modulation signal from the transmission antenna.
- the operation of transmitting apparatus 100 shown in FIG. 4 will be briefly described.
- the image data P of 120 fps is input to the pre-processor 101. Then, from the pre-processor 101, the image data Qb of the basic frame rate and the image data Qe of the high frame rate are outputted.
- In the pre-processor 101, the mixing process is performed in units of two temporally consecutive pictures of the 120 fps image data P, and the image data Qb of the basic frame rate is obtained. Further, in the pre-processor 101, the 120 fps image data P is output as it is as the image data Qe of the high frame rate.
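The pre-processor operation just described can be sketched as below. This is a hedged illustration (frames modeled as lists of samples; the function name and the example mixing ratio are assumptions): each pair of temporally consecutive 120 fps pictures is mixed into one 60 fps picture for Qb, while Qe passes the input through unchanged.

```python
# Hedged sketch of the described pre-processor: mix (odd, even) picture pairs of
# the 120 fps input P into the basic-frame-rate output Qb; pass P through as Qe.

def pre_process(p_120fps, alpha=0.5, beta=0.5):
    qb = []
    for a, b in zip(p_120fps[0::2], p_120fps[1::2]):  # (odd, even) picture pairs
        qb.append([alpha * x + beta * y for x, y in zip(a, b)])
    qe = p_120fps  # high-frame-rate output is the input as-is
    return qb, qe

frames = [[0, 8], [2, 10], [4, 12], [6, 14]]  # four 120 fps pictures
qb, qe = pre_process(frames)
print(qb)       # [[1.0, 9.0], [5.0, 13.0]]: two 60 fps mixed pictures
print(len(qe))  # 4
```

A receiver with only basic decoding capability can display Qb directly; because each Qb picture is a mix of two source pictures, motion appears smooth rather than strobed, which is the stated purpose of the mixing process.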
- the image data Qb and Qe obtained by the pre-processor 101 are supplied to the encoder 102.
- the encoder 102 performs encoding processing on the image data Qb and Qe to generate a basic stream STb and an extension stream STe.
- the basic-frame-rate image data Qb is subjected to the predictive coding process of the basic-frame-rate image data, and the basic stream STb is obtained. For the high-frame-rate image data Qe, the predictive coding process with the basic-frame-rate image data Qb or the predictive coding process within the high-frame-rate image data is adaptively applied, and the extension stream STe is obtained.
- image data after mixture compensation is used to reduce a prediction residual.
- mixing ratio information in the mixing process is inserted into the layer of the extension stream, and phase information indicating which of two temporally consecutive pictures each access unit of the extension stream corresponds to is inserted into each access unit of the extension stream.
- Specifically, the inverse blending layer prediction SEI (see FIG. 18A) is inserted into the "SEIs" portion of each access unit of the extension stream, or the mixing ratio information and phase information are inserted into the extension part of the PPS of each access unit of the extension stream (see FIG. 19A).
- the basic stream STb and the extension stream STe generated by the encoder 102 are supplied to the multiplexer 103.
- the basic stream STb and the extension stream STe are PES packetized and further transport packetized and multiplexed to obtain a transport stream TS as a multiplexed stream.
- identification information indicating that the image data included in the basic stream STb is image data obtained by performing mixing processing is inserted into the layer of the transport stream TS.
- the video scalability information descriptor (see FIG. 20A) is inserted into the video elementary stream loop arranged corresponding to the extension stream STe under the program map table.
- the transport stream TS generated by the multiplexer 103 is sent to the transmitter 104.
- this transport stream TS is modulated by a modulation scheme suitable for broadcasting such as QPSK / OFDM, for example, and an RF modulation signal is transmitted from the transmission antenna.
- FIG. 22 shows a configuration example of a receiving apparatus 200A capable of processing moving image data of 120 fps with decoding capability.
- the receiving device 200A includes a receiving unit 201, a demultiplexer 202, a decoder 203, and a display processor 205.
- the receiving unit 201 demodulates the RF modulated signal received by the receiving antenna to obtain the transport stream TS.
- the demultiplexer 202 extracts the basic stream STb and the extension stream STe from the transport stream TS by filtering the PID, and supplies the basic stream STb and the extension stream STe to the decoder 203.
- the demultiplexer 202 extracts section information included in the layer of the transport stream TS, and sends it to a control unit (not shown).
- the video scalability information descriptor (see FIG. 20 (a)) is also extracted.
- the control unit recognizes that the image data included in the basic stream STb is image data obtained by performing the mixing process.
- the decoder 203 decodes the basic stream STb and the extension stream STe to obtain high frame rate image data Qe '.
- the decoder 203 extracts a parameter set and SEI inserted in each access unit constituting the basic stream STb and the extension stream STe, and sends the extracted parameter set and SEI to a control unit (not shown).
- a control unit not shown
- In this case, the inverse blending layer prediction SEI having the mixing ratio information and phase information (see FIG. 18A), or the PPS having the mixing ratio information and phase information in its extension part, is also extracted.
- Thereby, the control unit recognizes the coefficients α and β in the mixing process and which of the two temporally consecutive pictures each access unit corresponds to.
- the mixing ratio information and the phase information are used when performing mixing compensation on the basic frame rate image data in the decoding process.
- FIG. 23 shows a configuration example of the decoding processing part of the decoder 203.
- This configuration example corresponds to the case of a layer configuration example (see FIG. 8A) including a base layer (Base Layer) and one extension layer (Ext 1 Layer).
- the decoder 203 includes an entropy decoding circuit 211, an inverse quantization / inverse integer conversion circuit 212, a motion compensation circuit 213, an addition circuit 214, a loop filter 215, and a memory 216. The decoder 203 also includes an entropy decoding circuit 221, an inverse quantization / inverse integer conversion circuit 222, a motion compensation circuit 223, an inter-layer compensation circuit 224, a mixing compensation circuit 225, a switching circuit 226, an addition circuit 227, a switching circuit 228, a loop filter 229, and a memory 230.
- In the entropy decoding circuit 211, entropy decoding is performed on the basic stream STb to obtain the quantized data of each block of the base layer. This quantized data is supplied to the inverse quantization / inverse integer conversion circuit 212.
- the inverse quantization / inverse integer conversion circuit 212 performs inverse quantization on the quantized data and further performs inverse integer conversion to obtain a prediction residual.
- this prediction residual for each block is supplied to the addition circuit 214.
- the motion compensation circuit 213 obtains a motion compensated compensated reference block based on the image data of the reference picture stored in the memory 216.
- motion compensation is performed using a motion vector included as MB header information.
- In the addition circuit 214, the compensation reference block is added to the prediction residual to obtain a block constituting the basic-frame-rate image data Qb'.
- the block obtained by the addition circuit 214 is stored in the memory 216 after the quantization noise is removed by the loop filter 215. Then, by reading the accumulated data from the memory 216, the image data Qb' at the basic frame rate is obtained.
- the entropy decoding circuit 221 performs entropy decoding on the enhancement stream STe to obtain quantized data of each block of the enhancement layer. This quantized data is supplied to the inverse quantization / inverse integer conversion circuit 222.
- the inverse quantization / inverse integer conversion circuit 222 performs inverse quantization on the quantized data and further performs inverse integer conversion to obtain a prediction residual.
- this prediction residual for each block is supplied to the addition circuit 227.
- In the motion compensation circuit 223, a motion-compensated compensation reference block for intra-layer compensation is obtained.
- motion compensation is performed using a motion vector included as MB header information.
- the image data after mixing compensation obtained by the mixing compensation circuit 225 or the image data stored in the memory 216 is selectively supplied from the switching circuit 226 to the inter-layer compensation circuit 224 as the image data of the reference picture.
- The inter-layer compensation circuit 224 obtains a compensated reference block for inter-layer compensation, which is motion-compensated based on the image data of the reference picture and further multiplied by a prediction coefficient for reducing the prediction residual. Here, motion compensation is performed using a motion vector included as MB header information, and the switching of the reference image data is also performed based on switching information included as MB header information.
- The image data of the reference picture (image data of the basic frame rate) is supplied from the memory 216 to the mixing compensation circuit 225, and the image data of the picture of the prediction source (image data of the high frame rate) is supplied from the memory 230 to the mixing compensation circuit 225. Here, the processing of the pictures of the base layer and the processing of the pictures of the enhancement layer are performed synchronously.
- In the mixing compensation circuit 225, the reverse processing of the mixing process is applied to the basic frame rate image data using the high frame rate image data; when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures, the image data of the other picture is obtained as the image data after mixing compensation.
- The mixing compensation circuit 225 is configured in the same manner as the mixing circuit 135 in the encoder 102 of the transmission apparatus 100 described above (see FIG. 10). In the processing of the mixing compensation circuit 225, the mixing ratio information and phase information recognized by the control unit as described above are used.
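As an illustrative sketch (not taken from the specification), assume the mixing process forms each basic frame rate picture as a per-pixel weighted sum Qb = α·A + β·B of two temporally consecutive pictures A and B. The reverse processing (mixing compensation) then recovers the other picture from the mixed picture and the known one:

```python
def mix(pic_a, pic_b, alpha, beta):
    # Forward mixing: blend two temporally consecutive high frame
    # rate pictures into one basic frame rate picture (per pixel).
    return [alpha * a + beta * b for a, b in zip(pic_a, pic_b)]

def unmix(pic_qb, pic_a, alpha, beta):
    # Mixing compensation (reverse processing): given the mixed
    # picture and the image data of one picture of the pair,
    # recover the image data of the other picture.
    return [(q - alpha * a) / beta for q, a in zip(pic_qb, pic_a)]
```

For example, with α = β = 0.5, mixing the pixel rows [100, 200] and [50, 150] gives [75.0, 175.0], and unmixing against [100, 200] returns [50.0, 150.0]. In the system described here, the actual coefficients are conveyed as the mixing ratio information.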
- The compensated reference block for intra-layer compensation obtained by the motion compensation circuit 223, or the compensated reference block for inter-layer compensation obtained by the inter-layer compensation circuit 224, is selected in block units by the switching circuit 228 and supplied to the addition circuit 227. Here, the switching is performed according to the information of the MB header.
- The addition circuit 227 adds the compensated reference block to the prediction residual to obtain a block constituting the image data Qe′ of the extended frames of the high frame rate.
- The block thus obtained by the addition circuit 227 is accumulated in the memory 230 after the quantization noise is removed by the loop filter 229. Then, by reading the accumulated data from the memory 230, the image data Qe′ of the extended frames of the high frame rate is obtained.
- FIG. 24 shows another configuration example of the decoding processing part of the decoder 203. This configuration example corresponds to the layer configuration example (see FIGS. 8B and 8C) including a base layer (Base Layer) and two extension layers (Ext 1 Layer and Ext 2 Layer). In FIG. 24, parts corresponding to those in FIG. 23 are assigned the same reference numerals, and their detailed description is omitted as appropriate.
- The decoder 203 includes an entropy decoding circuit 211, an inverse quantization/inverse integer conversion circuit 212, a motion compensation circuit 213, an addition circuit 214, a loop filter 215, and a memory 216. The decoder 203 also includes a switching circuit 220, an entropy decoding circuit 221A, an inverse quantization/inverse integer conversion circuit 222A, a motion compensation circuit 223A, an inter-layer compensation circuit 224A, a mixing compensation circuit 225A, a switching circuit 226A, an addition circuit 227A, a switching circuit 228A, a loop filter 229A, and a memory 230A.
- The decoder 203 further includes an entropy decoding circuit 221B, an inverse quantization/inverse integer conversion circuit 222B, a motion compensation circuit 223B, an inter-layer compensation circuit 224B, a mixing compensation circuit 225B, a switching circuit 226B, an addition circuit 227B, a switching circuit 228B, a loop filter 229B, a memory 230B, and switching circuits 231 and 232.
- The decoding process for the basic stream STb, that is, the decoding process of the base layer (Base Layer), is the same as in the configuration example of the decoding processing part of the decoder 203 in FIG. 23, and its detailed description is therefore omitted.
- The decoding process for the enhancement stream STe is divided into the decoding process of enhancement layer 1 and the decoding process of enhancement layer 2.
- In the switching circuit 220, the extension stream STe is distributed into the access units (coded image data) of the pictures handled in the decoding processing of enhancement layer 1 and the access units (coded image data) of the pictures handled in the decoding processing of enhancement layer 2.
- The access units of the pictures handled in the decoding processing of enhancement layer 1 are those of the pictures located at the odd (Odd) positions in each pair of two temporally consecutive pictures.
- The access units of the pictures handled in the decoding processing of enhancement layer 2 are those of the pictures located at the even (Even) positions in each pair of two temporally consecutive pictures.
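A minimal sketch of this distribution, assuming as an illustration that the first picture of each pair is the odd position and the second is the even position:

```python
def distribute(access_units):
    # Sketch of switching circuit 220: route the access units of
    # the extension stream to the enhancement layer 1 (odd) or
    # enhancement layer 2 (even) decoding system, based on the
    # position within each pair of temporally consecutive pictures.
    layer1, layer2 = [], []
    for i, au in enumerate(access_units):
        (layer1 if i % 2 == 0 else layer2).append(au)
    return layer1, layer2
```

In the actual stream, the phase information carried per access unit (rather than a simple counter) identifies the position within the pair.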
- The decoding processing system of enhancement layer 1 is configured by the circuits whose reference numerals are suffixed with "A".
- This system is configured in the same manner as the enhancement layer decoding processing system in the configuration example of the decoding processing part of the decoder 203 of FIG. 23. By reading the stored data from the memory 230A, the image data of the pictures at the odd (Odd) positions in the pairs of two temporally consecutive pictures is sequentially obtained.
- In this system, compensation processing with the base layer or compensation processing within enhancement layer 1 is performed.
- The decoding processing system of enhancement layer 2 is configured by the circuits whose reference numerals are suffixed with "B".
- This system is configured in the same manner as the enhancement layer decoding processing system in the configuration example of the decoding processing part of the decoder 203 of FIG. 23. By reading the stored data from the memory 230B, the image data of the pictures at the even (Even) positions in the pairs of two temporally consecutive pictures is sequentially obtained.
- In this system, compensation processing with the base layer, compensation processing with enhancement layer 1, or compensation processing within enhancement layer 2 is performed.
- The switching circuit 231 selects the output of the memory 216 when compensation with the base layer is performed, and selects the output of the memory 230A when compensation with enhancement layer 1 is performed.
- The image data of the pictures at the odd (Odd) positions read from the memory 230A and the image data of the pictures at the even (Even) positions read from the memory 230B are combined, whereby the high frame rate image data Qe′ is obtained.
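The recombination above can be sketched as a simple interleave of the two picture sequences (a hypothetical helper, assuming equal-length sequences):

```python
def interleave(odd_pictures, even_pictures):
    # Combine the picture sequences read from the memories 230A
    # (odd positions) and 230B (even positions) back into a single
    # high frame rate sequence of alternating pictures.
    out = []
    for odd, even in zip(odd_pictures, even_pictures):
        out.extend([odd, even])
    return out
```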
- The display processor 205 performs interpolation processing in the time direction, that is, frame interpolation processing, on the high frame rate image data Qe′ as necessary to obtain image data of a frame rate higher than 120 fps, and supplies it to the display unit.
- The reception unit 201 demodulates the RF modulation signal received by the reception antenna and acquires the transport stream TS. The transport stream TS is sent to the demultiplexer 202.
- In the demultiplexer 202, the basic stream STb and the extension stream STe are extracted from the transport stream TS by PID filtering and supplied to the decoder 203.
- section information included in the layer of the transport stream TS is extracted and sent to a control unit (not shown).
- a video scalability information descriptor (see FIG. 20 (a)) is also extracted.
- the control unit recognizes that the image data included in the basic stream STb is image data obtained by performing the mixing process.
- The decoder 203 performs decoding processing on the basic stream STb and the extension stream STe to obtain the high frame rate image data Qe′. Further, the decoder 203 extracts the parameter sets and SEI messages inserted in the access units constituting the basic stream STb and the extension stream STe and sends them to a control unit (not shown). As a result, the control unit recognizes the coefficients α and β of the mixing process (mixing ratio information) and which of the two temporally consecutive pictures each access unit corresponds to (phase information). This mixing ratio information and phase information are used when performing mixing compensation on the basic frame rate image data in the decoding process.
- The high frame rate image data Qe′ obtained by the decoder 203 is supplied to the display processor 205. The display processor 205 performs interpolation processing in the time direction, that is, frame interpolation processing, on the high frame rate image data Qe′ as necessary to obtain image data of a frame rate higher than 120 fps. This image data is supplied to the display unit, and image display is performed.
- FIG. 25 shows an example of the configuration of a receiving apparatus 200B capable of processing 60 fps moving image data.
- the receiving device 200B includes a receiving unit 201, a demultiplexer 202B, a decoder 203B, and a display processor 205B.
- the reception unit 201 demodulates the RF modulation signal received by the reception antenna, and acquires the transport stream TS.
- In the demultiplexer 202B, only the basic stream STb is extracted from the transport stream TS by PID filtering and supplied to the decoder 203B.
- The decoder 203B performs decoding processing on the basic stream STb to obtain the image data Qb of the basic frame rate.
- The display processor 205B performs interpolation processing in the time direction, that is, frame interpolation processing, on the 60 fps image data Qb as necessary to obtain image data of a frame rate higher than 60 fps. This image data is supplied to the display unit, and image display is performed.
- As described above, the image data Qb of the basic frame rate of 60 fps is obtained by performing the mixing process in units of two temporally consecutive pictures of the 120 fps image data P, and the basic stream STb obtained by subjecting this basic frame rate image data Qb to predictive coding processing is transmitted. Therefore, on the receiving side, a receiver whose decoding capability is limited to the basic frame rate can process this basic stream STb to obtain the basic frame rate image data and display a smooth moving image, avoiding the image quality problems that frame interpolation processing by low-load computation in the display processing would cause.
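The down-conversion described above can be sketched as pairwise mixing of the 120 fps sequence (the coefficients here are illustrative; in the system they are conveyed as mixing ratio information):

```python
def to_basic_rate(frames_120, alpha=0.5, beta=0.5):
    # Produce 60 fps pictures by blending each pair of temporally
    # consecutive 120 fps pictures, so fast motion is rendered
    # smoothly instead of being sub-sampled (which would strobe).
    out = []
    for i in range(0, len(frames_120) - 1, 2):
        a, b = frames_120[i], frames_120[i + 1]
        out.append([alpha * pa + beta * pb for pa, pb in zip(a, b)])
    return out
```

Each output picture integrates the motion of both input pictures, which is why a legacy 60 fps receiver sees smooth motion rather than a strobed half-rate sequence.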
- Further, the extension stream STe including the high frame rate image data Qe is transmitted. Therefore, a receiver capable of processing high frame rate image data can process the extension stream STe to obtain the high frame rate image data and perform high frame rate image display well.
- In addition, the reverse processing of the mixing process is applied to the basic frame rate image data Qb using the high frame rate image data Qe, and the resulting image data after mixing compensation is used as reference image data. Therefore, in the predictive coding of the high frame rate image data Qe, the prediction residual can be reduced.
- Further, the mixing ratio information of the mixing process is inserted into the layer of the extension stream. Therefore, the receiving side can easily and appropriately perform the reverse processing of the mixing process using this mixing ratio information.
- Further, phase information indicating which of the two temporally consecutive pictures each access unit corresponds to is inserted into each access unit of the extension stream. Therefore, the receiving side can appropriately switch the coefficients in the reverse processing of the mixing process (mixing compensation processing) using this phase information, and can perform the processing easily and appropriately.
- Although the transmission/reception system 10 including the transmitting apparatus 100 and the receiving apparatus 200 has been shown in the above embodiment, the configuration of the transmission/reception system to which the present technology can be applied is not limited to this.
- For example, the part of the receiving apparatus 200 may be configured as a set-top box and a monitor connected by a digital interface such as HDMI (High-Definition Multimedia Interface). Note that "HDMI" is a registered trademark.
- Further, in the above embodiment, an example has been shown in which the container is a transport stream (MPEG-2 TS). However, the present technology can be applied similarly to containers of other formats, for example, MMT (MPEG Media Transport).
- the present technology can also be configured as follows.
- (1) A transmission device including: an image encoding unit that obtains a basic stream including, as access units, the encoded image data of each picture of basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and obtains an extension stream including, as access units, the encoded image data of each picture of the high frame rate image data; and a transmission unit that transmits a container of a predetermined format including the basic stream and the extension stream.
- (2) The transmission device according to (1), further including an information insertion unit that inserts mixing ratio information of the mixing process into the layer of the extension stream.
- (3) The transmission device according to (2), in which the basic stream and the extension stream have a NAL unit structure, and the information insertion unit inserts an SEI NAL unit having the mixing ratio information into the extension stream.
- (4) The transmission device according to (2), in which the basic stream and the extension stream have a NAL unit structure, and the information insertion unit inserts the mixing ratio information into a PPS NAL unit of the extension stream.
- (5) The transmission device according to any one of (1) to (4), further including an information insertion unit that inserts, into each access unit of the extension stream, phase information indicating which of the two temporally consecutive pictures the access unit corresponds to.
- (6) The transmission device according to any one of (1) to (5), further including an information insertion unit that inserts, into the layer of the container, identification information indicating that the image data included in the basic stream is image data obtained by performing the mixing process.
- (7) The transmission device according to any one of (1) to (6), in which the image encoding unit performs predictive coding processing of the basic frame rate image data on the basic frame rate image data to obtain the basic stream, applies the reverse processing of the mixing process to the basic frame rate image data using the high frame rate image data to acquire, when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures, the image data of the other picture as image data after mixing compensation, and performs predictive coding processing between the high frame rate image data and the image data after mixing compensation to obtain the extension stream.
- (8) The transmission device according to (7), in which the image encoding unit acquires, for each prediction block of the high frame rate image data, image data in a range equal to or larger than the prediction block as the image data after mixing compensation.
- (9) A transmission method including: an image encoding step of obtaining a basic stream including, as access units, the encoded image data of each picture of basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and obtaining an extension stream including, as access units, the encoded image data of each picture of the high frame rate image data; and a transmission step of transmitting, by a transmission unit, a container of a predetermined format including the basic stream and the extension stream.
- (10) A reception device including: a reception unit that receives a container of a predetermined format including a basic stream and an extension stream, in which the basic stream has been obtained by performing predictive coding processing of basic frame rate image data on basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and the extension stream has been obtained by performing, on the high frame rate image data, predictive coding processing with image data after mixing compensation, the image data after mixing compensation being the image data of the other picture obtained by applying the reverse processing of the mixing process to the basic frame rate image data using the high frame rate image data when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures; and a processing unit that processes only the basic stream to obtain the basic frame rate image data, or processes both the basic stream and the extension stream to obtain the high frame rate image data, in which, when performing decoding processing on the extension stream, the processing unit applies the reverse processing of the mixing process to the basic frame rate image data obtained by processing the basic stream, using the high frame rate image data obtained by processing the extension stream, to acquire, when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures, the image data of the other picture as image data after mixing compensation, and uses the image data after mixing compensation as reference image data.
- (11) The reception device according to (10), in which mixing ratio information of the mixing process is inserted in the layer of the extension stream, and the processing unit uses the mixing ratio information when performing the reverse processing of the mixing process.
- (12) The reception device according to (10) or (11), in which phase information indicating which of the two temporally consecutive pictures the access unit corresponds to is inserted in each access unit of the extension stream, and the processing unit uses the phase information when performing the reverse processing of the mixing process.
- (13) A reception method including: a reception step of receiving, by a reception unit, a container of a predetermined format including a basic stream and an extension stream, in which the basic stream has been obtained by performing predictive coding processing of basic frame rate image data on basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and the extension stream has been obtained by performing, on the high frame rate image data, predictive coding processing with image data after mixing compensation, the image data after mixing compensation being the image data of the other picture obtained by applying the reverse processing of the mixing process to the basic frame rate image data using the high frame rate image data when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures; and a processing step of processing only the basic stream to obtain the basic frame rate image data, or processing both the basic stream and the extension stream to obtain the high frame rate image data, in which, in the processing step, when decoding processing is performed on the extension stream, the reverse processing of the mixing process is applied to the basic frame rate image data obtained by processing the basic stream, using the high frame rate image data obtained by processing the extension stream, to acquire, when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures, the image data of the other picture as image data after mixing compensation, and the image data after mixing compensation is used as reference image data.
- (14) A reception device including: a reception unit that receives a container of a predetermined format including a basic stream and an extension stream, in which the basic stream has been obtained by performing encoding processing on basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and the extension stream has been obtained by performing encoding processing on the high frame rate image data; and a processing unit that processes only the basic stream to obtain the basic frame rate image data, or processes both the basic stream and the extension stream to obtain the high frame rate image data.
- The main feature of the present technology is that the image data Qb of the basic frame rate of 60 fps is obtained by performing mixing processing in units of two temporally consecutive pictures of the 120 fps image data P, and that by transmitting the extension stream STe including the 120 fps high frame rate image data Qe together with the basic stream STb including this basic frame rate image data Qb, the high frame rate image data can be transmitted favorably while backward compatibility is achieved (see FIGS. 3 and 9).
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Time-Division Multiplex Systems (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
1. Embodiment
2. Modification examples
[Transmission/reception system]
FIG. 1 shows a configuration example of a transmission/reception system 10 as an embodiment. The transmission/reception system 10 includes a transmitting apparatus 100 and a receiving apparatus 200.
FIG. 4 shows a configuration example of the transmitting apparatus 100. The transmitting apparatus 100 includes a preprocessor 101, an encoder 102, a multiplexer 103, and a transmission unit 104. The preprocessor 101 receives 120 fps image data P and outputs basic frame rate image data Qb and high frame rate image data Qe.
FIG. 22 shows a configuration example of a receiving apparatus 200A having a decoding capability capable of processing 120 fps moving image data. The receiving apparatus 200A includes a reception unit 201, a demultiplexer 202, a decoder 203, and a display processor 205.
In the above embodiment, an example has been shown in which the overall frame rate is 120 fps and the basic frame rate is 60 fps, but the combination of frame rates is not limited to this. For example, the same applies to a combination of 100 fps and 50 fps.
100: Transmitting apparatus
101: Preprocessor
102: Encoder
103: Multiplexer
104: Transmission unit
111, 114: Delay circuits
112: Arithmetic circuit
113: Latch circuit
121: Blocking circuit
122: Subtraction circuit
123: Motion prediction/motion compensation circuit
124: Integer conversion/quantization circuit
125: Inverse quantization/inverse integer conversion circuit
126: Addition circuit
127: Loop filter
128: Memory
129: Entropy coding circuit
130: Switching circuit
131, 131A, 131B: Blocking circuits
132, 132A, 132B: Subtraction circuits
133, 133A, 133B: Motion prediction/motion compensation circuits
134, 134A, 134B: Inter-layer prediction/inter-layer compensation circuits
135, 135A, 135B: Mixing circuits
136, 136A, 136B, 137, 137A, 137B: Switching circuits
138, 138A, 138B: Integer conversion/quantization circuits
139, 139A, 139B: Inverse quantization/inverse integer conversion circuits
140, 140A, 140B: Addition circuits
141, 141A, 141B: Loop filters
142, 142A, 142B: Memories
143, 143A, 143B: Entropy coding circuits
145, 146: Switching circuits
151, 152: Multiplication units
153: Addition unit
200A, 200B: Receiving apparatuses
201: Reception unit
202, 202B: Demultiplexers
203, 203B: Decoders
205, 205B: Display processors
211: Entropy decoding circuit
212: Inverse quantization/inverse integer conversion circuit
213: Motion compensation circuit
214: Addition circuit
215: Loop filter
216: Memory
220: Switching circuit
221, 221A, 221B: Entropy decoding circuits
222, 222A, 222B: Inverse quantization/inverse integer conversion circuits
223, 223A, 223B: Motion compensation circuits
224, 224A, 224B: Inter-layer compensation circuits
225, 225A, 225B: Mixing compensation circuits
226, 226A, 226B: Switching circuits
227, 227A, 227B: Addition circuits
228, 228A, 228B: Switching circuits
229, 229A, 229B: Loop filters
230, 230A, 230B: Memories
231, 232: Switching circuits
Claims (14)
- A transmission device comprising: an image encoding unit that obtains a basic stream including, as access units, the encoded image data of each picture of basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and obtains an extension stream including, as access units, the encoded image data of each picture of the high frame rate image data; and a transmission unit that transmits a container of a predetermined format including the basic stream and the extension stream.
- The transmission device according to claim 1, further comprising an information insertion unit that inserts mixing ratio information of the mixing process into the layer of the extension stream.
- The transmission device according to claim 2, wherein the basic stream and the extension stream have a NAL unit structure, and the information insertion unit inserts an SEI NAL unit having the mixing ratio information into the extension stream.
- The transmission device according to claim 2, wherein the basic stream and the extension stream have a NAL unit structure, and the information insertion unit inserts the mixing ratio information into a PPS NAL unit of the extension stream.
- The transmission device according to claim 1, further comprising an information insertion unit that inserts, into each access unit of the extension stream, phase information indicating which of the two temporally consecutive pictures the access unit corresponds to.
- The transmission device according to claim 1, further comprising an information insertion unit that inserts, into the layer of the container, identification information indicating that the image data included in the basic stream is image data obtained by performing the mixing process.
- The transmission device according to claim 1, wherein the image encoding unit performs predictive coding processing of the basic frame rate image data on the basic frame rate image data to obtain the basic stream, applies the reverse processing of the mixing process to the basic frame rate image data using the high frame rate image data to acquire, when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures, the image data of the other picture as image data after mixing compensation, and performs predictive coding processing between the high frame rate image data and the image data after mixing compensation to obtain the extension stream.
- The transmission device according to claim 7, wherein the image encoding unit acquires, for each prediction block of the high frame rate image data, image data in a range equal to or larger than the prediction block as the image data after mixing compensation.
- A transmission method comprising: an image encoding step of obtaining a basic stream including, as access units, the encoded image data of each picture of basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and obtaining an extension stream including, as access units, the encoded image data of each picture of the high frame rate image data; and a transmission step of transmitting, by a transmission unit, a container of a predetermined format including the basic stream and the extension stream.
- A reception device comprising: a reception unit that receives a container of a predetermined format including a basic stream and an extension stream, wherein the basic stream has been obtained by performing predictive coding processing of basic frame rate image data on basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and the extension stream has been obtained by performing, on the high frame rate image data, predictive coding processing with image data after mixing compensation, the image data after mixing compensation being the image data of the other picture obtained by applying the reverse processing of the mixing process to the basic frame rate image data using the high frame rate image data when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures; and a processing unit that processes only the basic stream to obtain the basic frame rate image data, or processes both the basic stream and the extension stream to obtain the high frame rate image data, wherein, when performing decoding processing on the extension stream, the processing unit applies the reverse processing of the mixing process to the basic frame rate image data obtained by processing the basic stream, using the high frame rate image data obtained by processing the extension stream, to acquire, when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures, the image data of the other picture as image data after mixing compensation, and uses the image data after mixing compensation as reference image data.
- The reception device according to claim 10, wherein mixing ratio information of the mixing process is inserted in the layer of the extension stream, and the processing unit uses the mixing ratio information when performing the reverse processing of the mixing process.
- The reception device according to claim 10, wherein phase information indicating which of the two temporally consecutive pictures the access unit corresponds to is inserted in each access unit of the extension stream, and the processing unit uses the phase information when performing the reverse processing of the mixing process.
- A reception method comprising: a reception step of receiving, by a reception unit, a container of a predetermined format including a basic stream and an extension stream, wherein the basic stream has been obtained by performing predictive coding processing of basic frame rate image data on basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and the extension stream has been obtained by performing, on the high frame rate image data, predictive coding processing with image data after mixing compensation, the image data after mixing compensation being the image data of the other picture obtained by applying the reverse processing of the mixing process to the basic frame rate image data using the high frame rate image data when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures; and a processing step of processing only the basic stream to obtain the basic frame rate image data, or processing both the basic stream and the extension stream to obtain the high frame rate image data, wherein, in the processing step, when decoding processing is performed on the extension stream, the reverse processing of the mixing process is applied to the basic frame rate image data obtained by processing the basic stream, using the high frame rate image data obtained by processing the extension stream, to acquire, when the high frame rate image data is the image data of one picture in each unit of two temporally consecutive pictures, the image data of the other picture as image data after mixing compensation, and the image data after mixing compensation is used as reference image data.
- A reception device comprising: a reception unit that receives a container of a predetermined format including a basic stream and an extension stream, wherein the basic stream has been obtained by performing encoding processing on basic frame rate image data obtained by performing mixing processing in units of two temporally consecutive pictures of high frame rate image data, and the extension stream has been obtained by performing encoding processing on the high frame rate image data; and a processing unit that processes only the basic stream to obtain the basic frame rate image data, or processes both the basic stream and the extension stream to obtain the high frame rate image data.
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/746,497 US10778995B2 (en) | 2015-09-01 | 2016-08-30 | Transmission device, transmission method, reception device, and reception method |
JP2017538038A JP6809469B2 (ja) | 2015-09-01 | 2016-08-30 | 送信装置、送信方法、受信装置および受信方法 |
MX2018002272A MX2018002272A (es) | 2015-09-01 | 2016-08-30 | Dispositivo de transmision, metodo de transmision, dispositivo de recepcion, y metodo de recepcion. |
MYPI2018700055A MY186845A (en) | 2015-09-01 | 2016-08-30 | Transmission device, transmission method, reception device and reception method |
EP16841832.5A EP3346710A4 (en) | 2015-09-01 | 2016-08-30 | SENDING DEVICE, TRANSMISSION PROCEDURE, RECEPTION DEVICE AND RECEPTION PROCEDURE |
CN201680048593.6A CN107925766B (zh) | 2015-09-01 | 2016-08-30 | 发送装置、发送方法、接收装置和接收方法 |
CA2996279A CA2996279A1 (en) | 2015-09-01 | 2016-08-30 | Transmission device, transmission method, reception device, and reception method |
AU2016317252A AU2016317252B2 (en) | 2015-09-01 | 2016-08-30 | Transmission device, transmission method, reception device, and reception method |
KR1020187004794A KR20180044902A (ko) | 2015-09-01 | 2016-08-30 | 송신 장치, 송신 방법, 수신 장치 및 수신 방법 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015-172407 | 2015-09-01 | ||
JP2015172407 | 2015-09-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017038800A1 (ja) | 2017-03-09 |
Family
ID=58187567
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/075284 WO2017038800A1 (ja) | 2015-09-01 | 2016-08-30 | 送信装置、送信方法、受信装置および受信方法 |
Country Status (10)
Country | Link |
---|---|
US (1) | US10778995B2 (ja) |
EP (1) | EP3346710A4 (ja) |
JP (1) | JP6809469B2 (ja) |
KR (1) | KR20180044902A (ja) |
CN (1) | CN107925766B (ja) |
AU (1) | AU2016317252B2 (ja) |
CA (1) | CA2996279A1 (ja) |
MX (1) | MX2018002272A (ja) |
MY (1) | MY186845A (ja) |
WO (1) | WO2017038800A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018235666A1 (en) * | 2017-06-19 | 2018-12-27 | Sony Corporation | PIXEL LEVEL ADAPTIVE MIXING FOR FRAME FREQUENCY DOWN CONVERSION |
WO2019167952A1 (ja) * | 2018-03-01 | 2019-09-06 | ソニー株式会社 | 受信装置、受信方法、送信装置および送信方法 |
CN111164981A (zh) * | 2017-11-02 | 2020-05-15 | 索尼公司 | 发送装置、发送方法、接收装置和接收方法 |
US11533522B2 (en) | 2017-04-24 | 2022-12-20 | Saturn Licensing Llc | Transmission apparatus, transmission method, reception apparatus, and reception method |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020031782A1 (ja) | 2018-08-10 | 2020-02-13 | ソニー株式会社 | 受信装置、受信方法、送信装置および送信方法 |
WO2020039973A1 (ja) * | 2018-08-21 | 2020-02-27 | ソニー株式会社 | 受信装置、受信方法、送信装置および送信方法 |
EP3672241A1 (en) * | 2018-12-17 | 2020-06-24 | Nokia Technologies Oy | A method, an apparatus and a computer program product for video encoding and video decoding |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004088244A (ja) * | 2002-08-23 | 2004-03-18 | Sony Corp | 画像処理装置、画像処理方法、および画像フレームデータ記憶媒体、並びにコンピュータ・プログラム |
JP2006333071A (ja) * | 2005-05-26 | 2006-12-07 | Sony Corp | 情報処理システム、情報処理装置および方法、プログラム、並びにデータ構造 |
JP2007028034A (ja) * | 2005-07-14 | 2007-02-01 | Nippon Telegr & Teleph Corp <Ntt> | スケーラブル符号化方法および装置,スケーラブル復号方法および装置,並びにそれらのプログラムおよびその記録媒体 |
WO2015076277A1 (ja) * | 2013-11-22 | 2015-05-28 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8045836B2 (en) * | 2008-01-11 | 2011-10-25 | Texas Instruments Incorporated | System and method for recording high frame rate video, replaying slow-motion and replaying normal speed with audio-video synchronization |
EP2086237B1 (en) * | 2008-02-04 | 2012-06-27 | Alcatel Lucent | Method and device for reordering and multiplexing multimedia packets from multimedia streams pertaining to interrelated sessions |
JP4849130B2 (ja) * | 2008-02-19 | 2012-01-11 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
EP2716041A4 (en) * | 2011-05-31 | 2014-10-15 | Dolby Lab Licensing Corp | VIDEO COMPRESSION WITH RESOLUTION COMPENSATION AND OPTIMIZATION |
US20140286415A1 (en) * | 2011-10-05 | 2014-09-25 | Electronics And Telecommunications Research Institute | Video encoding/decoding method and apparatus for same |
EP2731337B1 (en) * | 2012-10-17 | 2017-07-12 | Dolby Laboratories Licensing Corporation | Systems and methods for transmitting video frames |
US9578372B2 (en) * | 2013-09-27 | 2017-02-21 | Cisco Technology, Inc. | Delay tolerant decoder |
2016
- 2016-08-30 US US15/746,497 patent/US10778995B2/en active Active
- 2016-08-30 JP JP2017538038A patent/JP6809469B2/ja not_active Expired - Fee Related
- 2016-08-30 MX MX2018002272A patent/MX2018002272A/es unknown
- 2016-08-30 WO PCT/JP2016/075284 patent/WO2017038800A1/ja active Application Filing
- 2016-08-30 CN CN201680048593.6A patent/CN107925766B/zh not_active Expired - Fee Related
- 2016-08-30 AU AU2016317252A patent/AU2016317252B2/en not_active Ceased
- 2016-08-30 EP EP16841832.5A patent/EP3346710A4/en not_active Ceased
- 2016-08-30 CA CA2996279A patent/CA2996279A1/en not_active Abandoned
- 2016-08-30 MY MYPI2018700055A patent/MY186845A/en unknown
- 2016-08-30 KR KR1020187004794A patent/KR20180044902A/ko active Search and Examination
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004088244A (ja) * | 2002-08-23 | 2004-03-18 | Sony Corp | 画像処理装置、画像処理方法、および画像フレームデータ記憶媒体、並びにコンピュータ・プログラム |
JP2006333071A (ja) * | 2005-05-26 | 2006-12-07 | Sony Corp | 情報処理システム、情報処理装置および方法、プログラム、並びにデータ構造 |
JP2007028034A (ja) * | 2005-07-14 | 2007-02-01 | Nippon Telegr & Teleph Corp <Ntt> | スケーラブル符号化方法および装置,スケーラブル復号方法および装置,並びにそれらのプログラムおよびその記録媒体 |
WO2015076277A1 (ja) * | 2013-11-22 | 2015-05-28 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3346710A4 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11533522B2 (en) | 2017-04-24 | 2022-12-20 | Saturn Licensing Llc | Transmission apparatus, transmission method, reception apparatus, and reception method |
WO2018235666A1 (en) * | 2017-06-19 | 2018-12-27 | Sony Corporation | PIXEL LEVEL ADAPTIVE MIXING FOR FRAME FREQUENCY DOWN CONVERSION |
JP2019004430A (ja) * | 2017-06-19 | 2019-01-10 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
US11350115B2 (en) | 2017-06-19 | 2022-05-31 | Saturn Licensing Llc | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
US11895309B2 (en) | 2017-06-19 | 2024-02-06 | Saturn Licensing Llc | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
CN111164981A (zh) * | 2017-11-02 | 2020-05-15 | 索尼公司 | 发送装置、发送方法、接收装置和接收方法 |
WO2019167952A1 (ja) * | 2018-03-01 | 2019-09-06 | ソニー株式会社 | 受信装置、受信方法、送信装置および送信方法 |
Also Published As
Publication number | Publication date |
---|---|
AU2016317252A1 (en) | 2018-02-15 |
KR20180044902A (ko) | 2018-05-03 |
JP6809469B2 (ja) | 2021-01-06 |
CN107925766B (zh) | 2022-05-10 |
CN107925766A (zh) | 2018-04-17 |
MX2018002272A (es) | 2018-03-23 |
CA2996279A1 (en) | 2017-03-09 |
JPWO2017038800A1 (ja) | 2018-06-14 |
EP3346710A4 (en) | 2019-03-13 |
AU2016317252B2 (en) | 2020-12-24 |
EP3346710A1 (en) | 2018-07-11 |
MY186845A (en) | 2021-08-26 |
US10778995B2 (en) | 2020-09-15 |
US20180213242A1 (en) | 2018-07-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017038800A1 (ja) | 送信装置、送信方法、受信装置および受信方法 | |
US11412176B2 (en) | Transmission device, transmission method, reception device, and reception method | |
KR20150052029A (ko) | 송신 장치, 송신 방법, 수신 장치 및 수신 방법 | |
RU2678149C2 (ru) | Устройство кодирования, способ кодирования, передающее устройство, устройство декодирования, способ декодирования и приёмное устройство | |
CN108141622B (zh) | 发送装置、发送方法、接收装置和接收方法 | |
US11412254B2 (en) | Transmission device, transmission method, reception device and reception method | |
KR20180067527A (ko) | 송신 장치, 송신 방법, 수신 장치 및 수신 방법 | |
KR20190142326A (ko) | 송신 장치, 송신 방법, 수신 장치 및 수신 방법 | |
US11363300B2 (en) | Coding apparatus, coding method, decoding apparatus, decoding method, transmitting apparatus, and receiving apparatus | |
WO2019167952A1 (ja) | 受信装置、受信方法、送信装置および送信方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16841832 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017538038 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15746497 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2016317252 Country of ref document: AU Date of ref document: 20160830 Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20187004794 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2996279 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2018/002272 Country of ref document: MX |
|
NENP | Non-entry into the national phase |
Ref country code: DE |