WO1999005864A1 - Editing device, editing method, splicing device, splicing method, coding device and coding method - Google Patents
Editing device, editing method, splicing device, splicing method, coding device and coding method
- Publication number
- WO1999005864A1 PCT/JP1998/003332 JP9803332W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- encoding
- stream
- picture
- encoded
- editing
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims description 207
- 239000000872 buffer Substances 0.000 claims abstract description 179
- 238000013139 quantization Methods 0.000 claims abstract description 95
- 230000033001 locomotion Effects 0.000 claims description 149
- 230000008569 process Effects 0.000 claims description 146
- 230000002457 bidirectional effect Effects 0.000 claims description 19
- 238000001514 detection method Methods 0.000 claims description 14
- 230000006866 deterioration Effects 0.000 claims description 12
- 239000000463 material Substances 0.000 claims description 11
- 230000015556 catabolic process Effects 0.000 claims description 7
- 230000008859 change Effects 0.000 claims description 7
- 238000006731 degradation reaction Methods 0.000 claims description 6
- 230000002542 deteriorative effect Effects 0.000 claims description 4
- 230000007812 deficiency Effects 0.000 claims 2
- 238000010586 diagram Methods 0.000 description 40
- 230000015654 memory Effects 0.000 description 40
- 230000000875 corresponding effect Effects 0.000 description 27
- 230000005540 biological transmission Effects 0.000 description 16
- 230000006870 function Effects 0.000 description 14
- 238000007781 pre-processing Methods 0.000 description 12
- 230000001276 controlling effect Effects 0.000 description 11
- 230000008707 rearrangement Effects 0.000 description 10
- 239000011159 matrix material Substances 0.000 description 6
- 238000004458 analytical method Methods 0.000 description 5
- 230000006835 compression Effects 0.000 description 5
- 238000007906 compression Methods 0.000 description 5
- 238000005516 engineering process Methods 0.000 description 5
- 239000000284 extract Substances 0.000 description 5
- 230000000694 effects Effects 0.000 description 3
- 230000004044 response Effects 0.000 description 2
- 230000003139 buffering effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000006243 chemical reaction Methods 0.000 description 1
- 230000002596 correlated effect Effects 0.000 description 1
- 239000000203 mixture Substances 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 238000004886 process control Methods 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/76—Television signal recording
- H04N5/91—Television signal processing therefor
- H04N5/92—Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/114—Adapting the group of pictures [GOP] structure, e.g. number of B-frames between two anchor frames
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/142—Detection of scene cut or scene change
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
- H04N19/159—Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/172—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/177—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/40—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/513—Processing of motion vectors
- H04N19/517—Processing of motion vectors by encoding
- H04N19/52—Processing of motion vectors by encoding by predictive encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/577—Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23406—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving management of server-side video buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44004—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video buffer management, e.g. video decoder buffer or video display buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440254—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering signal-to-noise parameters, e.g. requantization
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/24—Systems for the transmission of television signals using pulse code modulation
- H04N7/52—Systems for transmission of a pulse code modulated video signal with one or more other pulse code modulated signals, e.g. an audio signal or a synchronizing signal
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/20—Disc-shaped record carriers
- G11B2220/25—Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
- G11B2220/2537—Optical discs
- G11B2220/2562—DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/149—Data rate or code amount at the encoder output by estimating the code amount by means of a model, e.g. mathematical model or statistical model
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/15—Data rate or code amount at the encoder output by monitoring actual compressed data size at the memory before deciding storage at the transmission buffer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
Definitions
- Description: Editing device, editing method, splicing device, splicing method, encoding device, and encoding method
- The present invention relates to an editing apparatus and an editing method for generating edited video material by editing a plurality of video materials, to a splicing apparatus and a splicing method for generating a seamless spliced stream by splicing a plurality of bitstreams, and to an encoding apparatus and an encoding method for encoding video data. Background Art
- In recent years, compression-encoded image data has come to be recorded on and reproduced from storage media such as DVDs (digital versatile discs or digital video discs), which are optical discs capable of recording a large amount of digital data.
- In addition, recording/reproducing systems and multiplex transmission systems that multiplex and transmit a plurality of compression-encoded broadcast materials (programs) have been proposed. These systems use image data compression coding technology based on the Moving Picture Experts Group (MPEG) standard, which adopts a bidirectional predictive coding method. In this bidirectional predictive coding method, three types of coding are performed: intraframe coding, interframe forward predictive coding, and bidirectional predictive coding; the pictures of these coding types are called I pictures (intra coded pictures), P pictures (predictive coded pictures), and B pictures (bidirectionally predictive coded pictures), respectively.
- The amount of code generated per picture is largest for I pictures, next largest for P pictures, and smallest for B pictures. In a coding method in which the amount of generated bits differs from picture to picture, as in the MPEG standard, the amount of data occupying the input buffer of the image decoding device must be taken into account. For this purpose the MPEG standard assumes a VBV (Video Buffering Verifier) buffer, a virtual buffer corresponding to the input buffer of the image decoding device, and the image encoding device must create a stream such that the VBV buffer neither overflows nor underflows.
- Fig. 1 shows a transmission system conforming to ISO/IEC 13818-1 (MPEG systems) and ISO/IEC 13818-2 (MPEG video).
- This transmission system includes, on the encoding device 110 side: a video encoder 111 that encodes input video data DV and outputs a video elementary stream (video ES), which is an encoded bitstream; a packetizer 112 that adds headers and the like to the video elementary stream output from the video encoder 111 to form packets and outputs a video packetized elementary stream (video PES); an audio encoder 113 that encodes input audio data DA and outputs an audio elementary stream (audio ES), which is an encoded bitstream; a packetizer 114 that adds headers and the like to this audio elementary stream and outputs an audio packetized elementary stream (audio PES); and a transport stream multiplexer 115 that multiplexes the video packetized elementary stream output from the packetizer 112 and the audio packetized elementary stream output from the packetizer 114, creates 188-byte transport stream packets, and outputs them as a transport stream (TS).
- On the decoding device 120 side, the system includes: a transport stream demultiplexer (denoted TSDEMUX in the figure) 121 that receives the transport stream output from the transport stream multiplexer 115 and transmitted via the transmission medium 116 and separates it into a video packetized elementary stream (video PES) and an audio packetized elementary stream (audio PES); a depacketizer 122 that depacketizes the video packetized elementary stream output from the transport stream demultiplexer 121 and outputs a video elementary stream (video ES); a video decoder 123 that decodes the video elementary stream output from the depacketizer 122 and outputs video data DV; a depacketizer 124 that depacketizes the audio packetized elementary stream output from the transport stream demultiplexer 121 and outputs an audio elementary stream (audio ES); and an audio decoder 125 that decodes this audio elementary stream and outputs audio data DA.
- The decoding device 120 in FIG. 1 is generally called an IRD (integrated receiver/decoder).
- When recording on a storage medium is intended rather than transmission, a program stream multiplexer that multiplexes the video packetized elementary stream and the audio packetized elementary stream and outputs a program stream (PS) is provided in place of the transport stream multiplexer 115 in FIG. 1, a storage medium on which the program stream is recorded is used in place of the transmission medium 116, and a program stream demultiplexer that separates the program stream into a video packetized elementary stream and an audio packetized elementary stream is provided in place of the transport stream demultiplexer 121.
- The input video data DV, in which each picture has the same bit amount, is encoded by the video encoder 111, converted into a different bit amount for each picture according to its redundancy, compressed, and output as a video elementary stream. The packetizer 112 receives the video elementary stream, packetizes it so as to absorb (average) the fluctuation of the bit amount on the time axis of the bitstream, and outputs a video packetized elementary stream. A PES packet produced by the packetizer 112 contains one access unit or a plurality of access units; in general, one access unit consists of one frame. The transport stream multiplexer 115 multiplexes the video packetized elementary stream and the audio packetized elementary stream output from the packetizer 114 to create transport stream packets, which are sent as a transport stream (TS) to the decoding device 120 via the transmission medium 116.
- In the decoding device 120, the transport stream is separated into a video packetized elementary stream and an audio packetized elementary stream by the transport stream demultiplexer 121. The depacketizer 122 depacketizes the video packetized elementary stream and outputs a video elementary stream, and the video decoder 123 decodes the video elementary stream and outputs video data DV. The decoding device 120 buffers the stream, which is transmitted at a fixed transmission rate, in the VBV buffer, and extracts the data of each picture from the VBV buffer on the basis of a decoding time stamp (DTS) set in advance for each picture.
- The capacity of the VBV buffer is determined according to the standard of the signal to be transmitted; for a standard video signal of the main profile at main level (MP@ML), the capacity is 1.75 Mbits. The encoding device 110 must control the amount of bits generated for each picture so that the VBV buffer neither overflows nor underflows. In FIG. 2, the polygonal line represents the change in the data occupancy of the VBV buffer; its sloped portion 131 represents the transmission bit rate, and the vertically falling portions 132 represent the amount of bits extracted from the VBV buffer by the video decoder 123 in order to reproduce each picture. The timing at which the video decoder 123 extracts data is specified by information called a presentation time stamp (PTS) or a decoding time stamp (DTS).
- The interval between the PTS and the DTS is generally one frame period. In FIG. 2, I, P, and B represent an I picture, a P picture, and a B picture, respectively; the same applies to the other figures.
- In FIG. 2, vbv_delay is the time from when the occupancy of the VBV buffer is zero until the buffer becomes full, and Tp indicates the presentation time period. The transmitted stream fills the VBV buffer at the fixed bit rate 131, and data is extracted from the VBV buffer picture by picture at the timing corresponding to the presentation time.
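- The VBV model described above can be sketched as a small simulation: the buffer fills at a constant rate while, at each decoding time, the whole picture is removed at once. The following Python snippet is only an illustrative sketch (the function name, picture sizes and initial occupancy are assumptions, not values from the patent); the 1.75 Mbit default matches the MP@ML buffer size mentioned above.

```python
def simulate_vbv(picture_bits, bit_rate, frame_rate,
                 buffer_size=1_750_000, initial_occupancy=900_000):
    """Track VBV occupancy picture by picture (in decoding order).

    Returns the occupancy just before each picture is removed and raises
    if the fixed-rate model would underflow or overflow.
    """
    bits_per_period = bit_rate / frame_rate
    occupancy = initial_occupancy
    before_removal = []
    for bits in picture_bits:
        before_removal.append(occupancy)
        if bits > occupancy:
            raise RuntimeError("VBV underflow: the picture's data has not all arrived yet")
        occupancy -= bits             # decoder pulls the whole picture out at once
        occupancy += bits_per_period  # the stream keeps arriving at the fixed rate
        if occupancy > buffer_size:
            raise RuntimeError("VBV overflow: buffer capacity exceeded")
    return before_removal

# Example: an I picture followed by P and B pictures at 4 Mbit/s, 30 pictures/s.
print(simulate_vbv([400_000, 150_000, 60_000, 60_000, 150_000], 4_000_000, 30))
```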
- FIG. 3 (a) shows the picture order of the input video data supplied to the encoder, (b) shows the picture order after rearrangement in the encoder, and (c) shows the picture order of the video stream output from the encoder. As shown in FIG. 3, the input video frames are rearranged according to the picture type (I, P or B) during encoding and are encoded in the rearranged order. Since a B picture is predictively coded from an I picture or a P picture, the B pictures are moved after the I picture or P picture used for their prediction. The encoder performs the encoding process in the order of the rearranged pictures and outputs the encoded pictures as a video stream in the order shown in FIGS. 3(b) and 3(c). The output coded stream is supplied to a decoder or to a storage medium via a transmission path. In FIG. 3, a GOP (group of pictures) is composed of 15 pictures.
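- As an illustration of this rearrangement, the following sketch converts a display-order sequence of picture types into coded order, under the simplifying assumption that every B picture is predicted from the next anchor (I or P) picture that follows it; the function and the example GOP are illustrative, not taken from the patent.

```python
def display_to_coded_order(picture_types):
    """Rearrange pictures from display order to coded order.

    picture_types -- list of 'I', 'P', 'B' in display order.  Each anchor
    picture (I or P) is emitted before the B pictures that precede it in
    display order, because those B pictures are predicted from it.
    """
    coded, pending_b = [], []
    for frame_no, ptype in enumerate(picture_types):
        if ptype == 'B':
            pending_b.append((ptype, frame_no))  # defer until the next anchor
        else:
            coded.append((ptype, frame_no))      # the anchor goes out first
            coded.extend(pending_b)              # then the deferred B pictures
            pending_b = []
    coded.extend(pending_b)  # simplification: trailing B pictures would normally
    return coded             # wait for the first anchor of the next GOP

# A 15-picture GOP in display order: B B I B B P B B P B B P B B P
gop = list('BBIBBPBBPBBPBBP')
print(display_to_coded_order(gop))
```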
- FIG. 4 illustrates which pictures are used in the predictive encoding process of the encoding method described with FIG. 3, and shows the relationship between the rearrangement of pictures and the predictive encoding process. In FIG. 4, (a) indicates the order of pictures in the input video data supplied to the encoder, (b) indicates the order of pictures after rearrangement in the encoder, (c) and (d) indicate the pictures stored in the two frame memories FM1 and FM2 in the encoder, and (e) indicates the elementary stream (ES) output from the encoder. The numbers attached to I, P, and B indicate the display order of the pictures.
- The input video data shown in FIG. 4(a) is rearranged into the picture order shown in FIG. 4(b), and at this time the two frame memories FM1 and FM2 hold the pictures shown in FIGS. 4(c) and 4(d). When the input video data is an I picture, the encoder performs the encoding process based only on the input video data; when the input video data is a P picture, the encoder performs the predictive encoding process based on the input video data and the I picture or P picture held in the frame memory FM1; and when the input video data is a B picture, the encoder performs the predictive encoding process based on the input video data and the two pictures held in the frame memories FM1 and FM2. The codes in FIG. 4(e) indicate the pictures used for the encoding process.
- FIG. 5 shows the rearrangement of pictures in the decoder. In FIG. 5, (a) represents the order of pictures in the encoded video stream supplied from the encoder to the decoder via the transmission path, (b) represents the order of pictures decoded in the decoder, and (c) shows the picture order of the video data output from the decoder. Since a B picture was predictively encoded using an I picture or a P picture, it is decoded using the same pictures that were used for its predictive encoding. As a result, in the picture order of the video data output from the decoder, a B picture is output earlier than the I picture or P picture that precedes it in the stream.
- FIG. 6 is a diagram for explaining the decoding process described with FIG. 5 in more detail. FIG. 6(a) shows the order of pictures in the encoded video elementary stream (ES) supplied to the decoder, (b) and (c) show the pictures stored in the two frame memories FM1 and FM2 in the decoder, and (d) shows the output video data output from the decoder. The numbers attached to I, P, and B in the figure indicate the display order of the pictures. When the decoder receives the input elementary stream (ES) shown in FIG. 6(a), the two frame memories FM1 and FM2 hold the pictures shown in FIGS. 6(b) and 6(c), because the two pictures used in the predictive coding process have to be stored. When the input elementary stream is an I picture, the decoder performs the decoding process based only on the input elementary stream; when the input elementary stream is a P picture, the decoder performs the decoding process based on the input elementary stream and the I picture or P picture stored in the frame memory FM1; and when the input elementary stream is a B picture, the decoder performs the decoding process based on the input elementary stream and the two pictures stored in the frame memories FM1 and FM2, thereby generating the output video data shown in FIG. 6(d).
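- The reordering on the decoder side can be sketched as follows: an anchor picture (I or P) is held back, standing in for the frame-memory behaviour above, and output only when the next anchor arrives, while B pictures are output as soon as they are decoded. This is an illustrative sketch, not the patent's implementation.

```python
def coded_to_display_order(coded_pictures):
    """Reorder decoded pictures from coded order back to display order.

    coded_pictures -- (picture_type, frame_no) pairs in stream order, where
    an anchor picture precedes the B pictures that are displayed before it.
    """
    display, held_anchor = [], None          # held_anchor mimics the anchor kept in FM
    for ptype, frame_no in coded_pictures:
        if ptype in ('I', 'P'):
            if held_anchor is not None:
                display.append(held_anchor)  # the previous anchor is finally shown
            held_anchor = (ptype, frame_no)
        else:                                # B picture: decoded and shown at once
            display.append((ptype, frame_no))
    if held_anchor is not None:
        display.append(held_anchor)          # flush the last anchor
    return display

coded = [('I', 2), ('B', 0), ('B', 1), ('P', 5), ('B', 3), ('B', 4)]
print(coded_to_display_order(coded))         # -> pictures back in display order 0..5
```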
- FIG. 7 shows, with arrows, the direction of prediction (the direction in which the difference is taken) between pictures arranged in the order of the input video frames. The MPEG standard employs motion compensation, which enables higher compression. To perform motion compensation, the encoder performs motion detection in the prediction directions shown in FIG. 7 at the time of encoding and obtains motion vectors. A P picture or a B picture is represented by these motion vectors and by the difference values with respect to the searched (predicted) image obtained according to the motion vectors; at the time of decoding, the P picture or B picture is reconstructed from the motion vectors and the difference values. An I picture is a picture encoded from its own information only, generated without using inter-frame prediction. A P picture is a picture generated by prediction from a past I picture or P picture. A B picture is either a picture predicted from both directions, that is, from a past I or P picture and a future I or P picture, a picture predicted only in the forward direction from a past I or P picture, or a picture predicted only in the backward direction from a future I or P picture.
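- The reconstruction of a predicted picture from a motion vector and the transmitted difference values can be illustrated with a minimal numeric sketch; the 16 x 16 block size matches the macroblock size mentioned later in this document, while the array layout and function name are assumptions made for illustration only.

```python
import numpy as np

def reconstruct_p_block(reference_frame, motion_vector, residual, block_pos, block=16):
    """Rebuild one 16x16 macroblock of a P picture: fetch the block of the
    reference (I or P) picture pointed to by the motion vector and add the
    transmitted difference values."""
    y, x = block_pos
    dy, dx = motion_vector
    predicted = reference_frame[y + dy:y + dy + block, x + dx:x + dx + block]
    return predicted + residual

# Toy example: a flat reference frame and a zero residual return the prediction.
reference = np.full((64, 64), 128, dtype=np.int16)
residual = np.zeros((16, 16), dtype=np.int16)
rebuilt = reconstruct_p_block(reference, motion_vector=(2, -3), residual=residual,
                              block_pos=(16, 16))
print(rebuilt.shape, int(rebuilt[0, 0]))   # (16, 16) 128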
- Splicing means editing a plurality of streams while they remain in the state of encoded streams. However, splicing encoded streams raises the following problems. The first problem concerns the picture presentation order, that is, the display order of the video frames; it will be described with reference to FIG. 8 and FIG. 9.
- FIGS. 8 and 9 both show the relationship between the order of pictures in the streams before and after splicing and the order of picture presentation after splicing when streams are simply spliced; FIG. 8 shows a case where no problem occurs in the presentation order, while FIG. 9 shows a case where a problem occurs. In FIGS. 8 and 9, (a) shows one video stream A to be spliced, (b) shows the other video stream B to be spliced, (c) shows the stream after splicing, and (d) shows the order of presentation.
- SPA represents the splice point in stream A, SPB represents the splice point in stream B, STA represents stream A, STB represents stream B, and STSP represents the spliced stream.
- FIG. 8 shows an example in which the splice point SPA is set after a B picture of stream A and the splice point SPB is set before a P picture of stream B. In this case, no picture of stream A is presented after the pictures of stream B and no picture of stream B is presented before the last picture of stream A, so when splicing is performed at the splice points SPA and SPB shown in FIG. 8, no problem occurs in the presentation order.
- The splicing shown in FIG. 9 is an example in which the splice point SPA is set after a P picture of stream A and the splice point SPB is set before a B picture of stream B. In this case, the last picture of stream A is presented after pictures of stream B; that is, two pictures of stream B are displayed before the last picture of stream A. If the pictures are displayed in this order, the video image switches from stream A to stream B near the splice point and then, two frames later, a single frame of stream A is displayed again, producing an unnatural image. In this way, depending on where the splice points are set, a problem occurs in the order of presentation.
- The second problem concerns motion compensation. FIGS. 11 and 12 show the cases where the splices of FIGS. 8 and 9, respectively, are performed; in each figure, (a) shows the stream after splicing and (b) shows the order of presentation. If stream B is a closed GOP (a GOP whose prediction does not depend on the preceding GOP) and the splice is performed at a GOP boundary, motion compensation is performed without excess or shortage and the pictures are decoded without any problem. In the cases of FIGS. 11 and 12, however, motion compensation that refers to pictures of a different stream is performed at decoding time, so a problem occurs in motion compensation. Specifically, since a B picture or a P picture of stream B cannot be created by referring to a P picture of stream A, motion compensation using the motion vectors in the prediction directions indicated by the broken lines in the figures is invalid (denoted NG in the figures). In the examples shown in FIGS. 11 and 12, the pictures therefore remain broken until the next I picture. When splicing is performed in arbitrary picture units, this problem cannot be solved by a simple stream splice.
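- The invalid (NG) predictions described above can be located mechanically. The following sketch is an illustration only: it assumes the usual simplified rule that every P or B picture refers to its nearest anchor pictures in display order, and flags the pictures of a simply spliced sequence whose references would fall in the other source stream.

```python
def find_broken_predictions(pictures):
    """pictures -- (stream_id, picture_type) pairs in display order for a
    simply spliced sequence.  Returns the indices of the P and B pictures
    whose reference pictures would come from the other source stream and
    therefore cannot be decoded correctly.
    """
    anchors = [i for i, (_, t) in enumerate(pictures) if t in ('I', 'P')]
    broken = []
    for i, (stream, ptype) in enumerate(pictures):
        if ptype == 'I':
            continue
        past = [a for a in anchors if a < i]
        future = [a for a in anchors if a > i]
        refs = []
        if ptype == 'P' and past:
            refs = [past[-1]]                                       # forward prediction only
        elif ptype == 'B':
            refs = ([past[-1]] if past else []) + ([future[0]] if future else [])
        if any(pictures[r][0] != stream for r in refs):
            broken.append(i)
    return broken

# Stream A ends with a P picture, then stream B continues with B B P ...
spliced = [('A', 'I'), ('A', 'B'), ('A', 'B'), ('A', 'P'),
           ('B', 'B'), ('B', 'B'), ('B', 'P'), ('B', 'B'), ('B', 'B'), ('B', 'I')]
print(find_broken_predictions(spliced))   # -> [4, 5, 6]: pictures referring across the splice
```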
- The third problem is a problem from the viewpoint of the VBV buffer. It will be described with reference to FIGS. 13 through 18. FIG. 13 shows an example in which ideal stream splicing, satisfying the conditions on picture presentation order, motion compensation, and the VBV buffer, is performed. In FIG. 13, STA, STB, and STC indicate stream A, stream B, and stream C, respectively.
- In FIG. 13, (a) shows the state of the VBV buffer on the decoder side, (b) shows the stream after splicing, (c) shows the generation timing of each picture after rearrangement in the encoder, and (d) indicates the order of pictures after decoding. SPV indicates the splice point in the VBV buffer, VOC indicates the data occupancy of the VBV buffer at the splice point SPV, and SPS indicates the splice point in the stream.
- In the example of FIG. 13, splicing does not cause a failure of the VBV buffer such as an overflow or an underflow.
- FIGS. 14 and 15 show normal streams A and B, respectively, each of which satisfies the VBV buffer constraints, and FIGS. 16 through 18 show three examples in which streams A and B are simply spliced at arbitrary positions. As these figures show, the state of the VBV buffer differs depending on where streams A and B are spliced. In the example shown in FIG. 16, the spliced stream still satisfies the VBV buffer constraint; in the example shown in FIG. 17, however, the overflow indicated by reference numeral 141 occurs and the VBV buffer constraint is not met; and in the example shown in FIG. 18, the underflow indicated by reference numeral 142 occurs and the VBV buffer constraint is again not satisfied.
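- Using the simulate_vbv() sketch given earlier, this behaviour can be illustrated by concatenating the per-picture bit counts of the two streams at a chosen position and re-running the fixed-rate check; every number below is made up for illustration and is not taken from the patent.

```python
def check_simple_splice(bits_a, cut_a, bits_b, cut_b, **vbv_args):
    """Concatenate stream A up to cut_a with stream B from cut_b on and
    report whether the spliced picture sizes still satisfy the VBV model
    (relies on simulate_vbv() from the earlier sketch)."""
    spliced = bits_a[:cut_a] + bits_b[cut_b:]
    try:
        simulate_vbv(spliced, **vbv_args)
        return "VBV constraint satisfied"
    except RuntimeError as err:
        return f"VBV constraint violated: {err}"

bits_a = [400_000, 120_000, 60_000, 60_000, 150_000, 60_000]
bits_b = [800_000, 100_000, 50_000, 50_000, 140_000, 50_000]
print(check_simple_splice(bits_a, 3, bits_b, 0,
                          bit_rate=4_000_000, frame_rate=30))
# -> the large first picture of stream B arrives too late: VBV underflow
```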
- The present invention has been made in view of these problems, and a first object of the present invention is to provide a splicing device and a stream editing device that realize seamless splicing of a plurality of encoded streams without causing a failure of the virtual buffer corresponding to the input buffer on the decoding device side or a discontinuity in the data occupancy of that virtual buffer. A second object of the present invention is, in addition to the above object, to provide a splicing device, a stream editing device, and an encoding device that reduce image quality deterioration near the splice point and prevent image quality deterioration caused by the re-encoding process.
- A stream editing apparatus according to the present invention receives a plurality of encoded bitstreams obtained by encoding a plurality of video materials and connects the plurality of encoded bitstreams. The apparatus comprises: decoding means for decoding each encoded bitstream within a predetermined section including and surrounding the connection point of the plurality of encoded bitstreams and outputting the image data within that section; target code amount setting means for setting a new target code amount for the image data within the predetermined section output by the decoding means; encoding means for encoding the image data within the predetermined section output by the decoding means according to the new target code amount set by the target code amount setting means and outputting a new encoded bitstream for the predetermined section; and encoded bitstream output means for replacing the original encoded bitstream within the predetermined section with the new encoded bitstream output by the encoding means and for connecting and outputting the original encoded bitstreams and the new encoded bitstream.
- In this stream editing apparatus, the decoding means decodes each encoded bitstream within the predetermined section before and after the connection point of the plurality of encoded bitstreams and outputs the image data within that section, the target code amount setting means sets a new target code amount for the image data within the predetermined section, and the encoding means encodes the image data within the predetermined section according to the new target code amount and outputs a new encoded bitstream for the predetermined section. The encoded bitstream output means then replaces the original encoded bitstream within the predetermined section with the new encoded bitstream and connects and outputs the original encoded bitstreams before and after the predetermined section together with the new encoded bitstream.
- A splicing apparatus according to the present invention comprises: splice point setting means for setting splice points in a plurality of source encoded streams; decoding means for decoding the pictures near the splice points of the plurality of source encoded streams and outputting decoded video data; re-encoding means for re-encoding the decoded video data and outputting a re-encoded stream; and means for outputting a spliced stream by switching between the source encoded streams and the re-encoded stream.
- The splice control means of the splicing apparatus of the present invention has a function of calculating the target bit amount for re-encoding in the re-encoding means so that neither overflow nor underflow occurs in the VBV buffer.
- Further, the splice control means of the splicing apparatus of the present invention has a function of calculating the target bit amount for re-encoding in the re-encoding means so that the locus of the data occupancy of the VBV buffer does not become discontinuous at the point of switching between the source encoded stream and the re-encoded stream or at the splice point within the re-encoded stream.
- Further, the splice control means of the splicing apparatus of the present invention has a function of controlling the re-encoding means so that the locus of the data occupancy of the VBV buffer corresponding to the re-encoded stream approaches the locus of the data occupancy of the VBV buffer that the source encoded stream would originally have had.
- Further, the splice control means of the splicing apparatus of the present invention has a function of preventing image quality deterioration of the spliced stream by extracting the encoding parameters contained in the source encoded streams and selectively reusing the extracted parameters in the re-encoding process of the re-encoding means. The splice control means also has a function of extracting information on the quantization characteristics contained in the source encoded streams and controlling the re-encoding means so that the re-encoding process is performed on the basis of the extracted quantization characteristics.
- Further, the splice control means of the splicing apparatus of the present invention extracts, for each picture, information on the quantization characteristics contained in the source encoded streams, calculates the target bit amount of each picture in the re-encoding process of the re-encoding means so that neither overflow nor underflow occurs in the VBV buffer, calculates a new quantization characteristic based on the quantization characteristic extracted from the source encoded stream and the calculated target bit amount, and has a function of controlling the re-encoding means so that the re-encoding process is performed on the basis of the calculated new quantization characteristic.
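- The patent does not spell out here how the new quantization characteristic is derived; a commonly used first-order rule (an assumption made for illustration, not the patent's formula) scales the previous quantization step by the ratio of the previously generated bits to the new target bit amount:

```python
def adjust_quantiser(prev_q_scale, prev_generated_bits, target_bits):
    """Rough first-order re-quantisation rule (illustrative assumption).

    The bits produced by a picture are roughly inversely proportional to its
    quantisation step, so the previous step is scaled by the ratio of the
    previously generated bits to the new target bit amount.
    """
    q = prev_q_scale * prev_generated_bits / max(target_bits, 1)
    return max(1, min(31, round(q)))   # clamp to the MPEG-2 quantiser_scale_code range

print(adjust_quantiser(prev_q_scale=8, prev_generated_bits=300_000, target_bits=200_000))
# -> 12 : a smaller target bit amount calls for coarser quantisation
```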
- Further, the splice control means of the splicing apparatus of the present invention calculates the target bit amount of the re-encoding process in the re-encoding means so that neither overflow nor underflow occurs in the VBV buffer, allocates the target bit amount to each picture to be re-encoded, and has a function of controlling the re-encoding means so that the re-encoding process is performed for each picture according to the target bit amount allocated to that picture.
- Further, the splice control means of the splicing apparatus of the present invention calculates the target bit amount of the re-encoding process in the re-encoding means so that neither overflow nor underflow occurs in the VBV buffer, allocates the target bit amount to each picture to be re-encoded so as to approach the bit amount generated for each picture in the past encoding process of the source encoded stream, and has a function of controlling the re-encoding means so that the re-encoding process is performed for each picture according to the target bit amount allocated to that picture.
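- A simple way to keep the new allocation close to the old bit distribution, sketched here purely as an illustration (the proportional rule is an assumption, not necessarily the patent's exact method), is to split the total re-encoding budget over the pictures in proportion to the bits each picture generated originally:

```python
def allocate_target_bits(original_bits_per_picture, total_target_bits):
    """Split a total re-encoding bit budget over the pictures of the section,
    keeping each picture's share proportional to the bits it generated in
    the original encoding."""
    total_original = sum(original_bits_per_picture)
    return [round(total_target_bits * bits / total_original)
            for bits in original_bits_per_picture]

# Five pictures of the re-encoding section (I, P, B, B, P) and a smaller budget.
print(allocate_target_bits([300_000, 90_000, 40_000, 40_000, 90_000], 500_000))
```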
- Further, the splice control means of the splicing apparatus of the present invention has a function of preventing image quality deterioration of the spliced stream by selectively reusing the motion vector information extracted from the source encoded streams at the time of re-encoding by the re-encoding means.
- Further, the splice control means of the splicing apparatus of the present invention determines whether or not the motion vector information extracted from the source encoded stream is to be reused at the time of re-encoding by the re-encoding means, and, when it determines that the information is to be reused, has a function of controlling the re-encoding means so that the motion vectors extracted from the stream, instead of the motion vectors detected by the motion detection means, are supplied to the motion compensation circuit of the re-encoding means.
- Further, the splice control means of the splicing apparatus of the present invention has a function of setting the prediction directions for the pictures near the splice point so that, in the re-encoding process of the re-encoding means, they are not predicted from pictures of a different source encoded stream across the splice point.
- Further, the splice control means of the splicing apparatus of the present invention selectively changes the picture type of the pictures near the splice point that are re-encoded by the re-encoding means, thereby preventing image quality deterioration of the pictures near the splice point of the spliced stream. Further, the splice control means has a function of selectively changing the picture type of the pictures near the splice point that are re-encoded by the re-encoding means so that they are not predicted from pictures of a different source encoded stream across the splice point.
- FIG. 1 is a block diagram showing a schematic configuration of a transmission system according to the MPEG standard.
- FIG. 2 is an explanatory diagram for describing a VBV buffer.
- FIG. 3 is an explanatory diagram for describing picture rearrangement in the encoder required in the bidirectional predictive encoding scheme of the MPEG standard.
- FIG. 4 is an explanatory diagram showing the relationship between picture rearrangement and encoding processing in the encoder.
- FIG. 5 is an explanatory diagram for describing the rearrangement of pictures in the decoder.
- FIG. 6 is an explanatory diagram showing the relationship between picture rearrangement and decoding processing in the decoder.
- FIG. 7 is an explanatory diagram for describing motion detection and motion compensation in the bidirectional predictive coding system of the MPEG standard.
- FIG. 8 is an explanatory diagram showing an example of the relationship between the order of pictures in a stream before and after splicing and the order of picture presentation after splicing when a stream is simply spliced.
- FIG. 9 is an explanatory diagram showing another example of the relationship between the order of pictures in a stream before and after splicing and the order of picture presentation after splicing when a simple splicing of a stream is performed.
- FIG. 10 is an explanatory diagram showing an example of the relationship between the order of pictures and the order of picture presentation in a stream after splicing when a simple splicing of the stream is performed.
- FIG. 11 is an explanatory diagram showing another example of the relationship between the order of pictures and the order of picture presentation in a stream after splicing when a simple stream splicing is performed.
- FIG. 12 is an explanatory diagram showing still another example of the relationship between the order of pictures and the order of picture presentation in a stream after splicing when a stream is simply spliced.
- FIG. 13 is an explanatory diagram showing an example of an ideal stream splice satisfying the conditions of picture presentation order, motion compensation, and VBV buffer.
- FIG. 14 is an explanatory diagram showing a normal stream satisfying the constraints of the VBV buffer.
- FIG. 15 is an explanatory diagram showing another normal stream satisfying the constraints of the VBV buffer.
- FIG. 16 is an explanatory diagram illustrating an example of a case where two streams are simply spliced at an arbitrary position.
- FIG. 17 is an explanatory diagram for explaining another example in which two streams are simply spliced at an arbitrary position.
- FIG. 18 is an explanatory diagram for explaining still another example in which two streams are simply spliced at an arbitrary position.
- FIG. 19 is a block diagram showing a configuration of a splicing device and a stream editing device according to an embodiment of the present invention.
- FIG. 20 is a block diagram showing a configuration of the MPEG decoder and MPEG encoder in FIG.
- FIG. 21 is an explanatory diagram showing an example of a splice point and a re-encoding section in the presentation video data obtained by decoding by the MPEG decoder in FIG.
- FIG. 22 is an explanatory diagram showing a sequence of pictures before and after decoding two streams in the example shown in FIG. 21.
- FIG. 23 is an explanatory diagram showing an arrangement of pictures of a spliced stream after splicing in the example shown in FIG. 21.
- FIG. 24 is an explanatory diagram showing another example of a splice point and a re-encoding section in the presentation video data obtained by decoding by the MPEG decoder in FIG.
- FIG. 25 is an explanatory diagram showing the picture arrangement of the two streams before and after decoding in the example shown in FIG. 24.
- FIG. 26 is an explanatory diagram showing the arrangement of pictures of the spliced stream after splicing in the example shown in FIG. 24.
- FIG. 27 is an explanatory diagram showing an example in which an underflow occurs in the data occupancy of the VBV buffer.
- FIG. 28 is an explanatory diagram showing an example in which the underflow described in FIG. 27 is improved by the splicing device of the present invention.
- FIG. 28 is an explanatory diagram showing an example in which an overflow occurs in the data occupancy of the VBV buffer.
- FIG. 29 is an explanatory diagram showing an example in which the overflow described in FIG. 28 is improved by the splicing device of the present invention.
- FIG. 30 and FIG. 31 are flowcharts for explaining the operation of the splicing device and the stream editing device of the present invention.
- FIG. 19 is a block diagram showing a configuration of a splicing device and an editing device according to an embodiment of the present invention.
- This splicing device and editing device receive, for example, a plurality of encoded bitstreams (hereinafter simply referred to as streams) STA and STB obtained by encoding the video data VDA and VDB of a plurality of video materials with encoders 1A and 1B in accordance with the bidirectional predictive coding method of the MPEG standard. The streams in the present embodiment may be any of an elementary stream, a packetized elementary stream, and a transport stream.
- The splicing device and editing device comprise: a buffer memory 10 that receives the streams STA and STB and temporarily stores them; a stream counter 11 that counts the number of bits of the streams STA and STB; a stream analysis unit 12 that analyzes the syntax of the streams STA and STB; and a splice controller 13 that controls each of the blocks, described later, used for the splicing process. The splicing device and editing device further comprise: MPEG decoders 14A and 14B that decode the streams STA and STB output from the buffer memory 10 in accordance with the MPEG standard and output baseband video data; a switch 15 that switches between the output video data from the MPEG decoders 14A and 14B; an MPEG encoder 16 that re-encodes the output video data output from the switch 15; and a switch 17 that outputs the spliced stream STSP by switching between the streams STA and STB output from the buffer memory 10 and the re-encoded stream STRE output from the MPEG encoder 16.
- the buffer memory 10 temporarily stores the supplied two streams ST A and STB in response to a write command from the splice controller 13 described later. Then, in response to a read command from the splice controller 13, the stored streams ST A and STB are read, respectively.
- This makes it possible to match the phases and timings of the splice points set in the streams ST A and ST B when performing the splicing.
- the stream counter 11 receives the streams ST A and STB, counts the number of bits in each of the streams, and supplies the count value to the splice controller 13.
- the reason for counting the number of bits of the supplied bit streams ST A and STB is to virtually grasp the trajectory of the data occupancy of the VBV buffer corresponding to the streams ST A and ST B.
- The stream analysis unit 12 extracts appropriate information from the sequence layer, the GOP layer, the picture layer, and the macroblock layer by analyzing the syntax of the streams ST A and ST B. For example, picture information indicating the picture type (I, B or P), motion vectors, quantization steps, and encoding information such as quantization matrices are extracted, and this information is output to the splice controller 13.
- These pieces of encoding information were generated in the past encoding processing in the encoders 1A and 1B, and in the splicing apparatus of the present invention they are used selectively during re-encoding.
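- As an illustration of the kind of lightweight syntax parsing the stream analysis unit 12 performs, the following Python fragment scans an MPEG-2 elementary stream for picture headers and reads out the picture coding type. It is a minimal sketch under stated assumptions, not the analyzer of this embodiment; the function name and the byte-oriented scan are illustrative only.

```python
# Minimal sketch: scan an MPEG-2 elementary stream for picture headers and
# report the picture_coding_type of each picture (illustrative only; the
# analyzer of the embodiment also extracts motion vectors, quantization
# steps and quantization matrices).
PICTURE_START_CODE = b"\x00\x00\x01\x00"
CODING_TYPE = {1: "I", 2: "P", 3: "B"}

def picture_types(es_bytes):
    """Yield (byte_offset, picture_type) for every picture header found."""
    pos = es_bytes.find(PICTURE_START_CODE)
    while pos != -1:
        header = es_bytes[pos + 4: pos + 6]   # two bytes after the start code
        if len(header) == 2:
            # 10 bits temporal_reference, then 3 bits picture_coding_type
            coding_type = (header[1] >> 3) & 0x07
            yield pos, CODING_TYPE.get(coding_type, "reserved")
        pos = es_bytes.find(PICTURE_START_CODE, pos + 4)

# Example usage (hypothetical file name):
# with open("stream_a.m2v", "rb") as f:
#     for offset, ptype in picture_types(f.read()):
#         print(offset, ptype)
```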
- The splice controller 13 receives the count value output from the stream counter 11, the encoding information output from the stream analysis unit 12, the parameters n0 and m0 for setting the re-encoding section, and the parameter p0 for indicating the splice point, and controls the switch 15, the MPEG encoder 16 and the switch 17 based on this information. Specifically, the splice controller 13 controls the switching timing of the switch 15 in accordance with the input parameter p0, and controls the switching timing of the switch 17 in accordance with the parameters n0, m0 and p0.
- Based on the count value supplied from the stream counter 11 and the encoding information supplied from the stream analysis unit 12, the splice controller 13 calculates a new target code amount for each picture in the re-encoding section so that the VBV buffer of the spliced stream neither overflows nor underflows and so that the trajectory of the data occupancy of the VBV buffer does not become discontinuous because of the splice.
- The splice controller 13 adjusts the delay amount of each of the streams ST A and ST B in the buffer memory 10 by controlling, for example, the write address and the read address of the buffer memory 10, so that the phases of the splice points of the streams ST A and ST B are aligned with reference to the presentation time.
- FIG. 20 is a block diagram showing the configuration of the MPEG decoders 14A and 14B and the MPEG encoder 16 in FIG. 19.
- In FIG. 20, the MPEG decoder 14 is shown as representative of the MPEG decoders 14A and 14B, and the stream ST as representative of the streams ST A and ST B.
- The MPEG decoder 14 includes a variable-length decoding circuit (denoted as VLD in the figure) 21 that receives the stream ST and performs variable-length decoding, an inverse quantization circuit 22 that inversely quantizes the output data of the variable-length decoding circuit 21, an inverse DCT circuit 23, an addition circuit 24, a switch 25, frame memories 26 and 27, and a motion compensation unit 28 that generates predicted image data based on the data held in the frame memories 26 and 27 and outputs the predicted image data to the addition circuit 24.
- The MPEG encoder 16 includes an encoder pre-processing unit 30 that performs pre-processing for encoding on the output video data supplied from the MPEG decoder 14.
- The encoder pre-processing unit 30 performs processing such as the rearrangement of pictures for encoding by the bidirectional predictive encoding method, division into macroblocks of 16 x 16 pixels, and calculation of the encoding difficulty of each picture.
- The MPEG encoder 16 further includes a subtraction circuit 31 for obtaining the difference between the output data of the encoder pre-processing unit 30 and the predicted image data, a switch 32 for selectively outputting either the output data of the encoder pre-processing unit 30 or the output data of the subtraction circuit 31, a DCT circuit (shown as DCT in the figure) 33 for performing a DCT (discrete cosine transform) on the output data of the switch 32 in block units and outputting DCT coefficients, a quantization circuit (shown as Q in the figure) 34 for quantizing the output data of the DCT circuit 33, and a variable-length coding circuit (shown as VLC in the figure) 35 for variable-length coding the output data of the quantization circuit 34 and outputting it as the stream ST RE.
- The MPEG encoder 16 further includes an inverse quantization circuit (denoted as IQ in the figure) 36 for inversely quantizing the output data of the quantization circuit 34, an inverse DCT circuit 37 for performing an inverse DCT on the output data of the inverse quantization circuit 36, an addition circuit 38 for adding the output data of the inverse DCT circuit 37 and the predicted image data, two frame memories (shown as FM1 and FM2 in the figure) 39 and 40 for holding the reconstructed picture data, and a motion compensation unit (described as MC in the figure) 41 that performs motion compensation based on the data held in the frame memories 39 and 40 and the motion vector information to generate predicted image data, and outputs the predicted image data to the subtraction circuit 31 and the addition circuit 38.
- The MPEG encoder 16 further includes a motion detection circuit (denoted by ME in the figure) 42 that detects motion vectors based on the data held in the frame memories 39 and 40 and the output data of the encoder pre-processing unit 30 and outputs the motion vector information, an encoder controller 43 that receives the encode information and the target code amount supplied from the splice controller 13 and, based on this information, controls the quantization circuit 34, the inverse quantization circuit 36 and the frame memories 39 and 40, and a switch 44, controlled by the encoder controller 43, for selectively outputting either the motion vector information output from the encoder controller 43 or the motion vector information output from the motion detection circuit 42 to the motion compensation unit 41.
- the stream ST is variable-length decoded by the variable-length decoding circuit 21, is inversely quantized by the inverse quantization circuit 22, and inverse DCT is performed by the inverse DCT circuit 23.
- the output data of the inverse DCT circuit 23 is input to the adding circuit 24 and the switch 25.
- In the case of an I picture, the output data of the inverse DCT circuit 23 is output as output data of the MPEG decoder 14 via the switch 25.
- In the case of a P picture or a B picture, the output data of the inverse DCT circuit 23 and the predicted image data output from the motion compensation unit 28 are added by the addition circuit 24 to reproduce the picture.
- the output data of the adder circuit 24 is output as output data of the MPEG decoder 14 via the switch 25.
- the I picture or the P picture is appropriately held in the frame memories 26 and 27, and is used by the motion compensation unit 28 to generate predicted image data.
- The output data of the MPEG decoder 14 is input to the encoder pre-processing unit 30, and the encoder pre-processing unit 30 performs picture rearrangement, division into macroblocks, and the like.
- the encoder pre-processing unit 30 rearranges pictures based on the picture type information from the splice controller 13.
- Output data of the encoder pre-processing unit 30 is input to the subtraction circuit 31 and the switch 32.
- The switch 32 selectively outputs either the output data of the encoder pre-processing unit 30 or the output data of the subtraction circuit 31, which is obtained by subtracting the predicted image data output from the motion compensation unit 41 from the output data of the encoder pre-processing unit 30.
- The output data of the switch 32 is subjected to the DCT by the DCT circuit 33, the output data of the DCT circuit 33 is quantized by the quantization circuit 34, variable-length coded by the variable-length coding circuit 35, and output as the stream ST RE.
- The output data of the quantization circuit 34 is also inversely quantized by the inverse quantization circuit 36, and an inverse DCT is performed by the inverse DCT circuit 37. In the case of an I picture, the output data of the inverse DCT circuit 37 is held in the frame memory 39 or the frame memory 40 as it is. In the case of a P picture or a B picture, the output data of the inverse DCT circuit 37 and the predicted image data from the motion compensation unit 41 are added by the addition circuit 38, and the result is held in the frame memory 39 or the frame memory 40.
- the I-picture or P-picture held in the frame memory 39 or the frame memory 40 is appropriately used by the motion compensation unit 41 to generate predicted image data.
- The motion detection circuit 42 detects motion vectors based on the data held in the frame memories 39 and 40 and the output data of the encoder pre-processing unit 30, and outputs the motion vector information.
- The encoder controller 43 receives the encode information and the target code amount for each picture supplied from the splice controller 13 and, based on this information, controls the quantization circuit 34, the inverse quantization circuit 36, the frame memories 39 and 40, and the switch 44.
- When the motion vector included in the encode information supplied from the splice controller 13 is reused, the encoder controller 43 controls the switch 44 so that this motion vector information is input to the motion compensation unit 41.
- When the motion vector included in the encode information supplied from the splice controller 13 is not reused, a motion vector is newly generated in the motion detection circuit 42, and the switch 44 is controlled so that the motion vector information from the motion detection circuit 42 is input to the motion compensation unit 41.
- Further, the encoder controller 43 controls the frame memories 39 and 40 so that the pictures necessary for generating the predicted image data are held in the frame memories 39 and 40, based on the picture type included in the encode information supplied from the splice controller 13.
- The encoder controller 43 also monitors the amount of code generated by the variable-length coding circuit 35 and controls the variable-length coding circuit 35. When the generated code amount falls short of the set target code amount and the VBV buffer is likely to overflow, the encoder controller 43 adds dummy data, that is, performs stuffing, to compensate for the shortfall of the generated code amount with respect to the target code amount. Also, when the generated code amount exceeds the set target code amount and the VBV buffer is likely to underflow, the encoder controller 43 performs skipped macroblock processing (ISO/IEC 13818-2, 7.6.6), which is a process of stopping the encoding process in units of macroblocks.
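- The following Python fragment is a rough sketch of the per-picture decision the encoder controller 43 makes; the threshold values and the helper name are assumptions made here for illustration, not values taken from the embodiment or the MPEG standard.

```python
# Rough sketch of the corrective decision made by the encoder controller 43
# at the end of one re-encoded picture (names and thresholds are illustrative).
def control_generated_bits(generated_bits, target_bits, vbv_occupancy, vbv_size):
    """Return the corrective action for one re-encoded picture."""
    if generated_bits < target_bits and vbv_occupancy > vbv_size * 0.9:
        # Too few bits produced: the VBV buffer tends toward overflow,
        # so pad the shortfall with dummy data (stuffing).
        return ("stuffing", target_bits - generated_bits)
    if generated_bits > target_bits and vbv_occupancy < vbv_size * 0.1:
        # Too many bits produced: the VBV buffer tends toward underflow,
        # so stop coding in macroblock units (skipped macroblocks,
        # ISO/IEC 13818-2, 7.6.6).
        return ("skipped_macroblocks", generated_bits - target_bits)
    return ("none", 0)
```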
- FIG. 21 is an explanatory diagram showing an example of splice points and re-encoding sections in video data (hereinafter, referred to as presentation video data) obtained by decoding by the MPEG decoders 14A and 14B.
- FIG. 21(a) shows the presentation video data corresponding to the stream ST A, and FIG. 21(b) shows the presentation video data corresponding to the stream ST B.
- a re-encoding section is set as a predetermined section before and after the splice point including the splice point.
- the re-encoding section is set using parameters n 0 and m 0 .
- The picture at the splice point in the presentation video data corresponding to the stream ST A is represented as A(n-p0) using the parameter p0. Pictures presented earlier than the splice point picture A(n-p0) are denoted A(n-p0)+1, A(n-p0)+2, A(n-p0)+3, A(n-p0)+4, ..., and pictures presented later than the splice point picture A(n-p0) are denoted A(n-p0)-1, A(n-p0)-2, .... Similarly, the picture at the splice point in the presentation video data corresponding to the stream ST B is represented as B(m-p0).
- In the re-encoding section, n0 frames are set before the splice point and m0 frames are set after the splice point. Therefore, the re-encoding section consists of the pictures from picture A(n-p0)+n0 to picture A(n-p0) and the pictures from picture B(m-p0) to picture B(m-p0)-m0.
- the re-encoding process is performed for the re-encoding section set as described above.
- This re-encoding process is a process in which the supplied source encoded streams ST A and ST B are restored to baseband video data by decoding, the two pieces of decoded video data are connected at the splice point, and the connected video data is re-encoded to create a new stream ST RE.
- This re-encoding eliminates the problems associated with picture reordering and motion compensation. This will be described below.
- FIG. 22 shows the arrangement of pictures before and after decoding in the example shown in FIG. 21.
- (a) shows the stream STA near the re-encoding section
- (b) shows the presentation video data corresponding to the stream ST A near the re-encoding section
- (c) shows the presentation video data corresponding to the stream ST B near the re-encoding section
- (d) shows the stream STB near the re-encoding section.
- REP A indicates a picture to be re-encoded in the presentation video data corresponding to the stream STA
- REP B indicates a picture to be re-encoded in the presentation video data corresponding to the stream STB.
- the curved arrows indicate the prediction direction.
- FIG. 23 shows a state after the stream ST A and the stream ST B shown in FIGS. 21 and 22 have been spliced
- FIG. 23(a) shows the presentation video data after the two streams have been spliced, and FIG. 23(b) shows the stream ST SP after splicing the two streams.
- The stream ST SP shown in FIG. 23(b) is obtained by re-encoding the image data shown in FIG. 23(a) over the re-encoding section to generate a new stream ST RE, and connecting the original stream ST A (hereinafter referred to as OST A) before the re-encoding section and the original stream ST B (hereinafter referred to as OST B) after it to this new stream. T RE indicates the re-encoding period.
- FIGS. 24 to 26 show other examples having different splice points from the examples shown in FIGS. 21 to 23.
- FIG. 24 is an explanatory diagram showing another example of a splice point and a re-encoding section in presentation video data.
- (a) shows presentation video data corresponding to stream ST A
- (b) shows presentation video data corresponding to stream ST B.
- FIG. 25 shows the arrangement of pictures before and after decoding in the example shown in FIG. 24.
- (a) shows the stream ST A near the re-encoding section
- (b) shows the presentation video data corresponding to the stream STA near the re-encoding section
- (c) shows the presentation video data corresponding to the stream ST B near the re-encoding section
- (d) shows the stream STB near the re-encoding section.
- FIG. 26 shows the arrangement of pictures after splicing in the example shown in FIG. 24.
- (a) shows image data obtained by concatenating the presentation video data shown in FIG. 25(b) and the presentation video data shown in FIG. 25(c), and (b) shows the spliced stream ST SP. The stream ST SP shown in (b) is obtained by re-encoding the image data shown in (a) over the re-encoding section to generate a new stream ST RE, and connecting the original stream OST A before the re-encoding section and the original stream OST B after it to this new stream.
- FIG. 26(a) shows the state after the reconstruction of the picture types, in which picture B(m-p0) has been changed from a P picture to an I picture.
- In these examples, picture A(n-p0) is a B picture, and this picture A(n-p0) is originally a picture on which bidirectional predictive encoding is performed. If the predictive encoding process used a picture belonging to the different stream ST B, image quality deterioration would occur. Therefore, in the present embodiment, even in the case of a B picture, the predictive encoding at the time of re-encoding is performed without using prediction from the picture side belonging to the different stream; for picture A(n-p0), only the predictive encoding process using the previous picture A(n-p0)+1 is performed.
- The setting of the picture type reconstruction as described above is performed by the splice controller 13, and the information of the picture type reconstruction setting is given to the encoder controller 43 of the MPEG encoder 16.
- the encoding controller 43 performs an encoding process according to the picture type reconstruction setting.
- the reuse of the encoding information generated in the past encoding processing such as the motion vector is also performed according to the picture type reconfiguration setting.
- Note that the B pictures (B(m-p0)+2 and B(m-p0)+1) that exist in the stream ST B on the past side of the splice point picture B(m-p0), as shown in FIG. 25(d), are discarded after decoding as shown in FIG. 25(c), and therefore do not exist in the picture sequence after re-encoding.
- a method of calculating a new target code amount for image data in a re-encoding section according to the present embodiment will be described with reference to FIGS. 27 to 30.
- When a plurality of streams are simply spliced, the VBV buffer of the spliced stream will either underflow or overflow after the splice point, or the trajectory of the data occupancy of the VBV buffer of the spliced stream will become discontinuous.
- the re-encoding process of the splicing apparatus of the present invention for solving these problems will be described with reference to FIGS. 27 to 30.
- FIG. 27 shows an example in which a simple splicing process corresponding to FIG. 23 described above is performed; FIG. 27(a) is a diagram showing the trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE', and FIG. 27(b) is a diagram showing the re-encoding target stream ST RE'.
- T RE indicates a re-encoding control period
- OS T A indicates an original stream A
- ST RE ' indicates a re-encoding target stream to be re-encoded.
- This re-encoding target stream ST RE' is different from the re-encoded stream ST RE that is actually generated by the re-encoding; it is the stream that is assumed to result when simple splicing is performed.
- OST B indicates the original stream B
- SP VBV indicates a splice point in the VBV buffer
- SP indicates a splice point in the stream.
- The trajectory of the VBV buffer of the re-encoding target stream ST RE' follows the trajectory of the data occupancy of the VBV buffer of stream A (ST A) before the splice point SP, and follows the trajectory of the data occupancy of the VBV buffer of stream B (ST B) after the splice point SP.
- As shown in FIG. 27(a), to realize seamless splicing, the start level of the data occupancy of the VBV buffer of stream B at the splice point must match the end level of the data occupancy of the VBV buffer of stream A at the splice point. In order to make these levels coincide, in the example shown in FIG. 27(a), the trajectory of the data occupancy of the VBV buffer of stream B during the re-encoding control period T RE must be at a level lower than the trajectory it originally had. Note that the trajectory this data occupancy would originally have is the trajectory of the data occupancy of the VBV buffer of stream B assuming that the supplied stream B was not spliced; it is indicated by the extension of the trajectory VBV OST B in FIG. 27(a).
- As a result, the VBV buffer underflows at the timing at which an I picture, which has the largest amount of bits to be extracted from the VBV buffer, is extracted.
- Therefore, in the present embodiment, a new target code amount is set for each picture in the re-encoding period so that the trajectory of the data occupancy of the VBV buffer is continuous at the splice point and no underflow occurs after the splice point.
- If the trajectory of the data occupancy of the VBV buffer of stream B is simply lowered so that the trajectory of the data occupancy of the VBV buffer of the spliced stream is continuous at the splice point, the trajectory of the data occupancy of the VBV buffer becomes discontinuous at the switching point between the re-encoding target stream ST RE' and the original stream OST B.
- Therefore, in the present embodiment, a new target code amount is set for each picture in the re-encoding period so that the trajectory of the data occupancy of the VBV buffer is also continuous at the switching point between the re-encoding target stream ST RE' and the original stream OST B, as shown in FIG. 28.
- The reason why the trajectory VBV OST B of the data occupancy of the VBV buffer corresponding to the original stream OST B is not controlled is that the trajectory VBV OST B is the trajectory that the data occupancy of the VBV buffer of stream B originally has, and this trajectory cannot be controlled. Since the trajectory VBV OST B is the optimal trajectory determined so that the original stream OST B neither overflows nor underflows, if the level of this optimal trajectory were changed, an overflow or underflow might occur.
- FIG. 29 shows an example in which the splicing process corresponding to FIG. 26 described above is performed
- FIG. 29(a) is a diagram showing the trajectory of the data occupancy of the VBV buffer of the spliced stream ST SP, and FIG. 29(b) is a diagram showing the spliced stream ST SP.
- T RE indicates the re-encoding control period under the splicing control
- OST A indicates an original stream A
- ST RE indicates a stream to be re-encoded
- OST B indicates an original stream B.
- SP VBV indicates a splice point in the VBV buffer
- SP indicates a splice point in the stream.
- The trajectory of the VBV buffer of the re-encoding target stream follows the trajectory of the data occupancy of the VBV buffer of stream A (ST A) before the splice point SP, and becomes the trajectory of the data occupancy of the VBV buffer of stream B (ST B) after the splice point SP.
- As shown in FIG. 29(a), to realize seamless splicing in which the trajectory of the VBV data occupancy of stream A and the trajectory of the VBV data occupancy of stream B are continuous at the splice point, the start level of the data occupancy of the VBV buffer of stream B at the splice point must match the end level of the data occupancy of the VBV buffer of stream A at the splice point. In order to make these levels coincide, in the example shown in FIG. 29(a), the trajectory of the data occupancy of the VBV buffer of stream B during the re-encoding control period T RE must be at a level higher than the trajectory it originally had. The trajectory this data occupancy would originally have is the trajectory of the data occupancy of the VBV buffer of stream B assuming that the supplied stream B was not spliced; it is indicated by the extension of the trajectory VBV OST B in FIG. 29(a).
- As a result, this VBV buffer overflows.
- Therefore, in the present embodiment, a new target code amount is set for each picture in the re-encoding period so that the trajectory of the data occupancy of the VBV buffer is continuous at the splice point and no overflow occurs after the splice point.
- If the trajectory of the data occupancy of the VBV buffer of stream B is simply raised so that the trajectory of the data occupancy of the VBV buffer of the spliced stream is continuous at the splice point, the trajectory of the data occupancy of the VBV buffer becomes discontinuous at the switching point between the re-encoding target stream ST RE' and the original stream OST B.
- Therefore, in the present embodiment, a new target code amount is set for each picture in the re-encoding period so that the trajectory of the data occupancy of the VBV buffer is also continuous at this switching point.
- The reason why the trajectory VBV OST B of the data occupancy of the VBV buffer corresponding to the original stream OST B is not controlled is that the trajectory VBV OST B is the trajectory that the data occupancy of the VBV buffer of stream B originally has, and this trajectory cannot be controlled. Since the trajectory VBV OST B is the optimal trajectory determined so that the original stream OST B neither overflows nor underflows, if the level of this optimal trajectory were changed, an overflow or underflow might occur.
- vbv_under indicates the underflow amount of the VBV buffer, vbv_over indicates the overflow amount of the VBV buffer, and vbv_gap indicates the gap value of the VBV buffer at the switching point between the re-encoding target stream ST RE' and the original stream OST B.
- Based on the bit count value of stream A and the bit count value of stream B supplied from the stream counter 11, the splice controller 13 calculates the trajectory of the data occupancy of the VBV buffer of the original stream OST A, the trajectory of the data occupancy of the VBV buffer of the original stream OST B, and the trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE' obtained when stream A and stream B are simply spliced.
- The trajectory of the data occupancy of each VBV buffer can easily be calculated by subtracting, for each presentation time, the amount of bits output from the VBV buffer according to the presentation time from the bit count value supplied from the stream counter 11.
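- The Python sketch below illustrates this kind of virtual VBV trajectory calculation from per-picture bit counts. A constant-rate delivery and one picture removed per picture period are simplifying assumptions; the function name and parameters are illustrative, not taken from the embodiment.

```python
# Sketch: virtual VBV occupancy trajectory from per-picture bit counts.
# Constant-rate delivery and one picture removed per picture period are
# simplifying assumptions; occupancy above the VBV buffer size would mean
# an overflow, a negative occupancy an underflow.
def vbv_trajectory(picture_bits, bit_rate, picture_rate, initial_occupancy):
    """Return (occupancy_before_removal, occupancy_after_removal) per picture."""
    bits_per_period = bit_rate / picture_rate
    occupancy = initial_occupancy
    trajectory = []
    for bits in picture_bits:
        occupancy += bits_per_period   # bits delivered during one picture period
        before = occupancy
        occupancy -= bits              # picture extracted at its decode time
        trajectory.append((before, occupancy))
    return trajectory

# Example (made-up numbers): 4 Mbit/s, 29.97 pictures/s, 1.5 Mbit initial occupancy
# points = vbv_trajectory([180_000, 90_000, 60_000], 4_000_000, 29.97, 1_500_000)
```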
- In this way, the splice controller 13 can virtually grasp the trajectory of the data occupancy of the VBV buffer of the original stream OST A, the trajectory of the data occupancy of the VBV buffer of the original stream OST B, and the trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE'.
- The splice controller 13 refers to the virtually obtained trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE' and thereby calculates the underflow amount (vbv_under) or the overflow amount (vbv_over) of the re-encoding target stream ST RE'.
- Further, the splice controller 13 compares the virtually obtained trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE' with the trajectory (VBV OST B) of the data occupancy of the VBV buffer of the original stream OST B, and calculates the gap value (vbv_gap) of the VBV buffer at the switching point between the re-encoding target stream ST RE' and the original stream OST B.
- the splice controller 13 obtains the offset amount vbv-off of the target code amount by the following equations (1) and (2).
- vbv_off = -(vbv_under - vbv_gap) ... (1)
- vbv_off = +(vbv_over - vbv_gap) ... (2)
- When the VBV buffer underflows as in the example shown in FIG. 27(a), the offset amount vbv_off is calculated using equation (1); when the VBV buffer overflows as in the example shown in FIG. 29(a), the offset amount vbv_off is calculated using equation (2).
- Next, the splice controller 13 calculates the target code amount (target bit amount) TB p0 by the following equation (3), using the offset amount vbv_off obtained by equation (1) or (2):
- TB p0 = Σ(i=0 to n0) GB_A(n-p0)+i + Σ(i=0 to m0) GB_B(m-p0)-i + vbv_off ... (3)
- This target bit amount TB p0 is a value indicating the target bit amount to be allocated to the pictures to be re-encoded, from picture A(n-p0)+n0 to picture B(m-p0)-m0. GB_A(n-p0)+i is a value indicating the generated bit amount of any one picture from picture A(n-p0) to picture A(n-p0)+n0 in stream A, and Σ GB_A(n-p0)+i is the total value of the generated bit amounts of the pictures from picture A(n-p0) to picture A(n-p0)+n0. Likewise, GB_B(m-p0)-i is a value indicating the generated bit amount of any one picture from picture B(m-p0) to picture B(m-p0)-m0 in stream B, and Σ GB_B(m-p0)-i is the total value of the generated bit amounts of the pictures from picture B(m-p0) to picture B(m-p0)-m0. That is, the target code amount TB p0 expressed by equation (3) is obtained by adding the offset amount vbv_off to the total generated bit amount of the pictures from picture A(n-p0)+n0 to picture B(m-p0)-m0.
- The splice controller 13 allocates the target bit amount TB p0 obtained based on equation (3) to the pictures from picture A(n-p0)+n0 to picture B(m-p0)-m0.
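- A minimal numerical sketch of equations (1) to (3) is given below; the function names are assumptions, and the generated-bit lists stand for the GB_A and GB_B values read from the source streams.

```python
# Sketch of equations (1) to (3): offset of the target code amount and the
# total target bit amount TB_p0 for the re-encoding section.
def vbv_offset(vbv_under, vbv_over, vbv_gap, underflow):
    """Equation (1) for an underflow case, equation (2) for an overflow case."""
    if underflow:
        return -(vbv_under - vbv_gap)   # equation (1)
    return +(vbv_over - vbv_gap)        # equation (2)

def target_bits(gb_a, gb_b, vbv_off):
    """Equation (3): sum of the original generated bits of the pictures in the
    re-encoding section (A(n-p0)+n0 ... A(n-p0) and B(m-p0) ... B(m-p0)-m0)
    plus the VBV offset."""
    return sum(gb_a) + sum(gb_b) + vbv_off

# Example with made-up numbers:
# off = vbv_offset(vbv_under=120_000, vbv_over=0, vbv_gap=20_000, underflow=True)
# tb_p0 = target_bits([300_000, 150_000, 80_000], [280_000, 90_000], off)
```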
- In general, the quantization characteristics of each picture are determined by distributing a target bit amount to the I, P and B pictures at a ratio of, for example, 4:2:1. The splicing apparatus of the present invention, however, does not simply use quantization characteristics that distribute the target bit amount TB p0 at such a fixed ratio of 4:2:1 among I, P and B pictures; instead, it determines new quantization characteristics by referring to the past quantization characteristics, such as the quantization step and the quantization matrix, of each of the pictures from A(n-p0)+n0 to B(m-p0)-m0.
- Specifically, the encoder controller 43 refers to the information on the quantization steps and quantization matrices included in stream A and stream B, and determines the quantization characteristics at the time of re-encoding so that they do not differ greatly from the quantization characteristics used in the past encoding processes in the encoders 1A and 1B. However, for a picture whose picture type has been changed by the picture reconstruction, new quantization characteristics are calculated at the time of re-encoding without referring to the quantization step and quantization matrix information.
- FIG. 28 is a diagram showing the data occupancy of the VBV buffer when re-encoding is performed with the target bit amount TB p0 calculated in the splice controller 13 in order to solve the VBV buffer underflow problem described with FIG. 27. Likewise, FIG. 30 is a diagram showing the data occupancy of the VBV buffer when re-encoding is performed with the target bit amount TB p0 calculated in the splice controller 13 in order to solve the VBV buffer overflow problem described with FIG. 29.
- As shown in FIGS. 28 and 30, in the re-encoded stream ST RE after re-encoding, the trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE' shown in FIG. 27(a) is corrected to the trajectory of the data occupancy of the VBV buffer of the re-encoded stream ST RE shown in FIG. 28(a), and the trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE' shown in FIG. 29(a) is corrected to the trajectory of the data occupancy of the VBV buffer of the re-encoded stream ST RE shown in FIG. 30(a).
- In this way, the present embodiment satisfies the rules of Annex C of ISO 13818-2 and ISO 11172-2 and the rules of Annex L of ISO 13818-1.
- The operation of the splicing device and the stream editing device will now be described with reference to the flowcharts. In step S10, the splice controller 13 receives the splice point parameter p0 for splicing the stream ST A and the stream ST B at an arbitrary picture position, and the re-encoding section parameters n0 and m0.
- the operator inputs these parameters from the outside.
- the re-encoding sections n 0 and m 0 may be automatically set according to the stream GOP configuration and the like.
- a case where the stream STA is switched to the stream STB at the splice point will be described as an example. However, the reverse is also possible.
- In step S11, the splice controller 13 controls the write operation of the buffer memory 10 so that the stream ST A and the stream ST B are temporarily stored in the buffer memory 10, and controls the read operation of the buffer memory 10 so that the phases of the splice points of the stream ST A and the stream ST B are synchronized with respect to the presentation time.
- In step S12, the splice controller 13 selects the pictures of the stream ST A so that no picture in the future of the splice point picture A(n-p0) set in the stream ST A is output, and selects the pictures of the stream ST B so that no picture in the past of the splice point picture B(m-p0) set in the stream ST B is output.
- For example, picture A(n-p0)-2 is a P picture; on the stream ST A it is earlier than the splice point picture A(n-p0), but in presentation order it is later than picture A(n-p0). Therefore, this picture A(n-p0)-2 is not output. The same applies in the example shown in FIG. 25(a) and FIG. 25(b).
- the splice controller 13 controls the decoders 14A and 14B, so that the pictures not selected in this step are not supplied to the encoder 16.
- In step S13, the splice controller 13 starts the processing for setting the encoding parameters required for the picture reconstruction processing performed at the time of re-encoding.
- This picture reconstruction processing means the following processing from step S14 to step S30, and the parameters set in this processing are a picture type, a prediction direction, a motion vector, and the like. .
- In step S14, the splice controller 13 determines whether the picture to be subjected to the picture reconstruction processing is the splice point picture A(n-p0). If the picture to be subjected to the picture reconstruction processing is the splice point picture A(n-p0), the process proceeds to the next step S15. Otherwise, that is, if the picture to be subjected to the picture reconstruction processing is one of picture A(n-p0)+1 to picture A(n-p0)+n0, the process proceeds to step S20.
- In step S15, the splice controller 13 determines whether the picture to be subjected to the picture reconstruction processing is a B picture, a P picture, or an I picture. If the picture to be subjected to the picture reconstruction processing is a B picture, the process proceeds to step S16; if it is a P picture or an I picture, the process proceeds to step S19.
- In step S16, the splice controller 13 determines whether two or more B pictures exist in front of the picture A(n-p0) in the spliced stream ST SP. For example, as shown in FIG. 26(b), if two B pictures (picture A(n-p0)+2 and picture A(n-p0)+3) exist in front of the picture A(n-p0), the process proceeds to step S18; if not, the process proceeds to step S17.
- In step S17, the splice controller 13 determines that there is no need to change the picture type of the picture A(n-p0), and sets, as the picture type used when re-encoding the picture A(n-p0), the same picture type (B picture) as set in the past encoding process in the encoder 1A. Therefore, in the re-encoding process described later, the picture A(n-p0) is encoded again as a B picture.
- In step S18, the splice controller 13 changes the picture type of the picture A(n-p0) from a B picture to a P picture.
- the reason for changing the picture type will be described.
- Reaching this step S18 means that two B pictures (picture A(n-p0)+2 and picture A(n-p0)+3) exist in front of the B picture A(n-p0). That is, in the re-encoding target stream ST RE', three B pictures would be lined up in a row.
- A normal MPEG decoder has only two frame memories for temporarily storing the pictures used for prediction, so if three B pictures are arranged consecutively on the stream, the last B picture cannot be decoded. Therefore, as explained with FIG. 26, by changing the picture type of the picture A(n-p0) from a B picture to a P picture, the picture A(n-p0) can be reliably decoded.
- In step S19, the splice controller 13 determines that there is no need to change the picture type of the picture A(n-p0), and sets, as the picture type used when re-encoding the picture A(n-p0), the same picture type (I picture or P picture) as set in the past encoding process in the encoder 1A.
- In step S20, the splice controller 13 determines that there is no need to change the picture types of the pictures from picture A(n-p0)+1 to picture A(n-p0)+n0, and sets, as the picture type used when re-encoding each of these pictures, the same picture type (I picture, P picture or B picture) as set in the past encoding process in the encoder 1A.
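- The decision made in steps S14 to S20 for the stream A side can be summarized by the following Python sketch; the function signature and the way the spliced picture sequence is represented are assumptions made for illustration.

```python
# Sketch of steps S14-S20: picture type reconstruction for the stream A side.
def reconstruct_type_a(is_splice_point, original_type, b_pictures_before_splice):
    """original_type: 'I', 'P' or 'B' from the past encoding in encoder 1A.
    b_pictures_before_splice: number of consecutive B pictures that would
    precede the splice point picture A(n-p0) in the spliced stream."""
    if not is_splice_point:
        return original_type                 # step S20: keep the old type
    if original_type in ("I", "P"):
        return original_type                 # step S19: keep the old type
    # original_type == 'B' (steps S16-S18)
    if b_pictures_before_splice >= 2:
        return "P"                           # step S18: avoid three B pictures in a row
    return "B"                               # step S17: keep the B picture
```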
- In step S21, the splice controller 13 sets the prediction direction and the parameters relating to the motion vectors for each picture.
- For example, the picture A(n-p0) that is the target of the reconstruction processing is a picture that was bidirectionally predicted from both the P picture of A(n-p0)+1 and the P picture of A(n-p0)-2. That is, in the past encoding process in the encoder 1A, the picture A(n-p0) was predicted from the P picture of A(n-p0)+1 and the P picture of A(n-p0)-2. However, as described in step S12, the P picture of A(n-p0)-2 is not output as part of the spliced stream, so it cannot be designated as the backward prediction picture of the picture A(n-p0) to be subjected to the picture reconstruction processing. Therefore, the splice controller 13 sets, for the picture A(n-p0), forward one-sided prediction so as to predict only the P picture of A(n-p0)+1.
- Also when the picture type of the picture A(n-p0) was changed from a B picture to a P picture in step S18, forward one-sided prediction that predicts only the P picture of A(n-p0)+1 is performed.
- For the picture A(n-p0) whose picture type (P picture) was not changed in step S19, the prediction direction is not changed. That is, the splice controller 13 sets, for the picture A(n-p0), forward one-sided prediction so as to predict only the same picture as the picture predicted during the past encoding process in the encoder 1A.
- For the pictures from picture A(n-p0)+n0 to picture A(n-p0)+1, whose picture types were not changed in step S20, the prediction direction is in principle not changed. That is, in this case, the splice controller 13 sets, for the pictures from picture A(n-p0)+n0 to picture A(n-p0)+1, the prediction direction so as to predict the same pictures as the pictures predicted during the past encoding process in the encoder 1A. However, when both picture A(n-p0)+1 and picture A(n-p0) are B pictures predicted bidirectionally from a forward P picture or I picture and a backward I picture or P picture, not only picture A(n-p0) but also picture A(n-p0)+1 must be changed to one-sided prediction in which prediction is performed only from the forward picture.
- Further, in step S21, the splice controller 13 determines, for each picture, whether or not the motion vectors set in the past encoding process in the encoder 1A are reused in the re-encoding process, based on the newly set prediction direction.
- For a picture whose prediction direction is not changed, the motion vectors used in the past encoding process in the encoder 1A are used as they are in the re-encoding process. For example, in the examples shown in FIGS. 23 and 26, for each of the pictures from picture A(n-p0)+n0 to picture A(n-p0)+1, the motion vectors used in the past encoding process in the encoder 1A are reused at the time of re-encoding.
- On the other hand, the picture A(n-p0)+1 and the picture A(n-p0) are B pictures that were predicted from both a forward P picture or I picture and a backward I picture or P picture. Since the prediction is changed to one-sided prediction performed only from the forward picture, only the motion vector corresponding to the forward picture needs to be used. That is, in step S21, if the picture A(n-p0)+1 and the picture A(n-p0) are B pictures, the splice controller 13 sets, for these pictures, that the motion vector for the forward picture is reused and the motion vector for the backward picture is not used.
- On the other hand, when the picture A(n-p0) is a picture that was one-sidedly predicted in the backward direction only from A(n-p0)-2, which is a future picture, none of the motion vectors generated in the past encoding process in the encoder 1A can be used; a new motion vector corresponding to A(n-p0)+1 is generated during the re-encoding process. That is, in step S21 the splice controller 13 makes a setting such that the past motion vectors are not used.
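- The motion-vector reuse rule of step S21 can be sketched as follows; the tuple-based representation of the old vectors is an assumption made for illustration.

```python
# Sketch of the step S21 decision: which of the old motion vectors of a
# stream A picture may be reused once the prediction has been forced to
# forward-only near the splice point.
def reuse_vectors_a(old_forward_mv, old_backward_mv, forced_forward_only):
    """old_forward_mv / old_backward_mv: vectors from the past encoding in
    encoder 1A, or None if the picture had no prediction in that direction.
    Returns (vectors_to_reuse, need_new_motion_detection)."""
    if not forced_forward_only:
        # prediction direction unchanged: reuse everything that existed
        return ([mv for mv in (old_forward_mv, old_backward_mv) if mv], False)
    if old_forward_mv is not None:
        # B picture turned into forward-only prediction: keep only the forward vector
        return ([old_forward_mv], False)
    # the picture was predicted only from a future picture (e.g. A(n-p0)-2):
    # no old vector is usable, a new one must come from the motion detector 42
    return ([], True)
```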
- In step S22, the splice controller 13 determines whether the parameters relating to the picture type, the prediction direction and the past motion vectors have been set for all the pictures from picture A(n-p0)+n0 to picture A(n-p0).
- In step S23, the splice controller 13 determines whether the picture to be subjected to the picture reconstruction processing is the splice point picture B(m-p0). If the picture to be subjected to the picture reconstruction processing is the splice point picture B(m-p0), the process proceeds to the next step S24. Otherwise, that is, if the picture to be subjected to the picture reconstruction processing is one of picture B(m-p0)-1 to picture B(m-p0)-m0, the process proceeds to step S28.
- In step S24, the splice controller 13 determines whether the picture to be subjected to the picture reconstruction processing is a B picture, a P picture, or an I picture. If the picture to be subjected to the picture reconstruction processing is a B picture, the process proceeds to step S25; if it is a P picture, the process proceeds to step S26; if it is an I picture, the process proceeds to step S27.
- In step S25, as in the examples shown in FIG. 22 and FIG. 23, the splice controller 13 determines that there is no need to change the picture type of the picture B(m-p0) at the time of re-encoding, and sets, as the picture type used when re-encoding the picture B(m-p0), the same picture type (B picture) as set in the past encoding process in the encoder 1B.
- In step S26, as in the examples shown in FIG. 25 and FIG. 26, the splice controller 13 changes the picture type of the picture B(m-p0) from a P picture to an I picture.
- A P picture is a one-sidedly predicted picture that is predicted from an I picture or P picture in the forward direction, and therefore always exists on the stream at a position later than the picture it is predicted from. If the first picture B(m-p0) at the splice point in the stream ST B remained a P picture, it would have to be predicted from a forward picture of the stream ST A that precedes this picture B(m-p0).
- Therefore, when the picture type of the first picture B(m-p0) at the splice point of the stream ST B is a P picture, the splice controller 13 changes the picture type of the picture B(m-p0) to an I picture.
- In step S27, the splice controller 13 determines that there is no need to change the picture type of the picture B(m-p0), and sets, as the picture type used when re-encoding the picture B(m-p0), the same picture type (I picture) as set in the past encoding process in the encoder 1B.
- In step S28, the splice controller 13 determines that there is no need to change the picture types of the pictures from picture B(m-p0)-1 to picture B(m-p0)-m0, and sets, for each of these pictures, the same picture type (I picture, P picture or B picture) as set in the past encoding process in the encoder 1B.
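- Similarly, the decision of steps S23 to S28 for the stream B side can be sketched as follows; again the function shape is an assumption made for illustration.

```python
# Sketch of steps S23-S28: picture type reconstruction for the stream B side.
def reconstruct_type_b(is_splice_point, original_type):
    """original_type: 'I', 'P' or 'B' from the past encoding in encoder 1B."""
    if not is_splice_point:
        return original_type      # step S28: pictures B(m-p0)-1 ... B(m-p0)-m0 keep their type
    if original_type == "P":
        return "I"                # step S26: a P picture at the splice point becomes an I picture
    return original_type          # steps S25 and S27: B and I pictures are kept
```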
- In step S29, the splice controller 13 sets the prediction direction and the motion vector for each picture.
- For example, the picture B(m-p0) that is the target of the picture reconstruction processing is a B picture in the original stream OST B; that is, the picture B(m-p0) is a picture that was bidirectionally predicted from both the P picture of B(m-p0)+1 and the I picture of B(m-p0)-2.
- However, since the P picture of B(m-p0)+1 is not output as part of the spliced stream, the P picture of B(m-p0)+1 cannot be designated as the forward prediction picture of the picture B(m-p0) to be subjected to the picture reconstruction processing.
- For the picture B(m-p0) whose picture type (B picture) was not changed in step S25, backward one-sided prediction that predicts only the I picture of B(m-p0)-2 must therefore be performed. In this case, the splice controller 13 sets, for the picture B(m-p0), the prediction direction so that backward one-sided prediction predicting only the I picture of B(m-p0)-2 is performed.
- For the pictures from picture B(m-p0)-m0 to picture B(m-p0)-1, whose picture types were not changed in step S28, the prediction direction is in principle not changed. That is, in this case, the splice controller 13 sets, for the pictures from picture B(m-p0)-m0 to picture B(m-p0)-1, the prediction direction so as to predict the same pictures as the pictures predicted during the past encoding process in the encoder 1B. However, when the picture B(m-p0)-1 is a B picture, as in the case of picture B(m-p0), the prediction direction is set for picture B(m-p0)-1 so that backward one-sided prediction is performed, predicting only the I picture of B(m-p0)-2.
- Further, in step S29, the splice controller 13 determines, for each picture, whether or not the motion vectors set in the past encoding process in the encoder 1B are reused in the re-encoding process, based on the newly set prediction direction.
- For a picture whose prediction direction is not changed, the motion vectors used in the past encoding process in the encoder 1B are used as they are in the re-encoding process. For example, for each picture from the I picture of B(m-p0)-2 to the picture B(m-p0)-m0, the motion vectors used during the past encoding process are reused at the time of re-encoding.
- In step S30, the splice controller 13 determines whether the parameters relating to the picture type, the prediction direction and the motion vectors have been set for all the pictures from picture B(m-p0) to picture B(m-p0)-m0.
- In step S31, the splice controller 13 calculates the target bit amount (TB p0) to be generated in the re-encoding period, based on equation (3) already described. This will be specifically described below. First, based on the bit count value of stream A and the bit count value of stream B supplied from the stream counter 11, the splice controller 13 calculates the trajectory of the data occupancy of the VBV buffer of the original stream OST A, the trajectory of the data occupancy of the VBV buffer of the original stream OST B, and the trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE' obtained when stream A and stream B are simply spliced.
- Next, the splice controller 13 analyzes the virtually obtained trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE' and calculates the underflow amount (vbv_under) or the overflow amount (vbv_over) of the re-encoding target stream ST RE'. Further, the splice controller 13 compares the trajectory of the data occupancy of the VBV buffer of the re-encoding target stream ST RE' with the virtually obtained trajectory of the data occupancy of the VBV buffer of the original stream OST B, and calculates the gap value (vbv_gap) of the VBV buffer at the switching point between them. Then, the splice controller 13 calculates the offset amount vbv_off of the target code amount from equations (1) and (2) described above, and obtains the target code amount (target bit amount) TB p0 by equation (3) described above, using the offset amount vbv_off obtained by equation (1) or (2).
- In step S32, the splice controller 13 determines the quantization characteristics to be set for each picture, based on the allocation of the target bit amount TB p0, obtained from equation (3), to the pictures from picture A(n-p0)+n0 to picture B(m-p0)-m0.
- The splicing device of the present invention determines the new quantization characteristics of each of the pictures from A(n-p0)+n0 to B(m-p0)-m0 with reference to the past quantization characteristics, such as the quantization steps and quantization matrices, used in the encoders 1A and 1B.
- Specifically, the splice controller 13 first receives from the stream analysis unit 12 the encoding parameter information, such as quantization steps and quantization matrices, that is included in stream A and stream B and was generated in the past encoding processes in the encoders 1A and 1B.
- Then, the splice controller 13 allocates the target bit amount TB p0 obtained based on equation (3) to the pictures from picture A(n-p0)+n0 to picture B(m-p0)-m0, and determines the quantization characteristics used at the time of re-encoding, based on the code amount allocated from the target bit amount TB p0 and the past encoding parameter information, so that they do not differ greatly from the quantization characteristics used at the time of encoding in the encoders 1A and 1B.
- However, for a picture whose picture type has been changed by the picture reconstruction processing in step S18 or step S26, new quantization characteristics are calculated at the time of re-encoding without referring to the quantization step and quantization matrix information.
- In step S33, the splice controller 13 decodes the pictures from picture A(n-p0)+n0 to picture B(m-p0)-m0 included in the re-encoding period.
- In step S34, the splice controller 13 re-encodes the pictures from picture A(n-p0)+n0 to picture B(m-p0)-m0 while controlling the amount of generated bits, using the quantization characteristics set for each picture in step S32.
- At this time, when the motion vectors used in the past encoding processes in the encoders 1A and 1B are reused, the splice controller 13 supplies a control signal to the encoder controller 43 so that those motion vectors are supplied to the motion compensation unit 41 via the switch 44; when they are not reused, the encoder controller 43 is controlled so that the new motion vectors generated in the motion detection circuit 42 are supplied to the motion compensation unit 41 via the switch 44.
- The encoder controller 43 also controls the frame memories 39 and 40 so that the pictures necessary for generating the predicted image data are held in the frame memories 39 and 40.
- The encoder controller 43 also sets the quantization characteristics set for each picture in the re-encoding section, which are supplied from the splice controller 13, in the quantization circuit 34 and the inverse quantization circuit 36.
- In step S35, the splice controller 13 controls the switch 17 so as to switch between the streams ST A and ST B output from the buffer memory 10 and the new stream ST RE of the re-encoding section output from the MPEG encoder 16. That is, the new stream ST RE of the re-encoding section, obtained by performing re-encoding in the MPEG encoder 16 while performing rate control according to the target bit amount TB p0, is inserted by the switch 17 in place of the pictures from A(n-p0)+n0 to B(m-p0)-m0 of the original streams, and the spliced stream ST SP is output. This realizes a seamless splice.
- In order that the stream ST B after the splice does not suffer a failure such as an overflow or underflow of the VBV buffer, the state of the VBV buffer of the stream ST B after the splice must be the same as the state it would have been in without the splice; this is the buffer control condition for guaranteeing continuous picture presentation. This condition is satisfied by setting a new target code amount (target bit amount) as described above.
- As described above, in the present embodiment, each stream in the re-encoding section including the splice point of a plurality of streams is decoded, the obtained image data in the re-encoding section is re-encoded according to a new target code amount to generate a new stream in the re-encoding section, and the original streams before and after the re-encoding section are concatenated with this new stream. At the time of re-encoding, the motion detection information such as the motion vectors detected during the previous encoding is reused, so the image quality is not deteriorated by the re-encoding process.
- Strictly speaking, even when the information used at the time of decoding is reused, reconstruction errors that cannot be suppressed by such reuse alone are introduced, owing to the calculation precision of the orthogonal transform and to non-linear operations such as the mismatch processing (processing for adding an error to the high-frequency DCT coefficient).
- For this reason, decoding and re-encoding should be limited to pictures in a certain section near the splice point, including the splice point. In the present embodiment, instead of decoding and re-encoding the entire stream, a re-encoding section is set and decoding and re-encoding are performed only in that section, which also prevents the image quality from deteriorating.
- The re-encoding section may be set automatically according to the degree of image quality deterioration, the GOP length, the GOP structure and the like, or may be set arbitrarily in consideration of these factors.
- the present invention is not limited to the above embodiment.
- For example, the method of calculating a new target code amount for the re-encoding section is not limited to the method shown in equations (1) to (3), and may be set as appropriate.
- As described above, according to the present invention, each encoded bit stream within a predetermined section before and after a connection point of a plurality of encoded bit streams, including the connection point, is decoded and the image data within the predetermined section is output; the image data within the predetermined section is encoded according to a new target code amount and a new encoded bit stream within the predetermined section is output; the original encoded bit stream within the predetermined section is replaced with the new encoded bit stream, and the original encoded bit streams before and after the predetermined section are connected to the new encoded bit stream and output. Since the encoding is performed using information that is included in the original encoded bit streams and is used at the time of decoding, there is an effect that image quality degradation near the connection point is reduced.
- Further, according to the present invention, since the encoding is performed by reconstructing the picture types so that a predictive encoding process using pictures belonging to different encoded bit streams is not performed, there is an effect that the image does not break down even with the bidirectional predictive encoding method.
- Further, according to the present invention, since a new target code amount is set so as to reduce the deviation, before and after the connection point, of the trajectory of the data occupancy of the virtual buffer corresponding to the input buffer on the decoding device side, there is an effect that a failure of the virtual buffer can be prevented more reliably.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Television Signal Processing For Recording (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
Description
Claims
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19980933940 EP0923243B1 (en) | 1997-07-25 | 1998-07-27 | Editing device, editing method, splicing device, splicing method, encoding device, and encoding method |
JP50967099A JP3736808B2 (ja) | 1997-07-25 | 1998-07-27 | 編集装置、編集方法、再符号化装置及び再符号化方法 |
DE69841897T DE69841897D1 (de) | 1997-07-25 | 1998-07-27 | Bearbeitungsanlage, bearbeitungsverfahren, spleissungsanlage, spleissungsverfahren, kodieranlage und kodierverfahren |
US09/275,999 US6567471B1 (en) | 1997-07-25 | 1999-03-25 | System method and apparatus for seamlessly splicing data |
US10/282,784 US7139316B2 (en) | 1997-07-25 | 2002-10-29 | System method and apparatus for seamlessly splicing data |
US11/586,245 US7711051B2 (en) | 1997-07-25 | 2006-10-25 | System method and apparatus for seamlessly splicing data |
US11/591,073 US8798143B2 (en) | 1997-07-25 | 2006-11-01 | System method and apparatus for seamlessly splicing data |
US11/591,063 US8923409B2 (en) | 1997-07-25 | 2006-11-01 | System method and apparatus for seamlessly splicing data |
US11/642,369 US8223847B2 (en) | 1997-07-25 | 2006-12-19 | Editing device, editing method, splicing device, splicing method, encoding device, and encoding method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP9/199923 | 1997-07-25 | ||
JP19992397 | 1997-07-25 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US09/275,999 Continuation US6567471B1 (en) | 1997-07-25 | 1999-03-25 | System method and apparatus for seamlessly splicing data |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1999005864A1 true WO1999005864A1 (fr) | 1999-02-04 |
Family
ID=16415853
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1998/003332 WO1999005864A1 (fr) | 1997-07-25 | 1998-07-27 | Dispositif d'edition, procede d'edition, dispositif d'epissage, procede d'epissage, dispositif de codage et procede de codage |
Country Status (7)
Country | Link |
---|---|
US (6) | US6567471B1 (ja) |
EP (3) | EP1467563A1 (ja) |
JP (5) | JP3736808B2 (ja) |
KR (2) | KR100555164B1 (ja) |
CN (1) | CN1161989C (ja) |
DE (1) | DE69841897D1 (ja) |
WO (1) | WO1999005864A1 (ja) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000062551A1 (en) * | 1999-04-14 | 2000-10-19 | Sarnoff Corporation | Frame-accurate seamless splicing of information streams |
EP1058262A2 (en) * | 1999-06-01 | 2000-12-06 | Sony Corporation | Encoding and multiplexing of video streams |
JP2000354249A (ja) * | 1999-04-16 | 2000-12-19 | Sony United Kingdom Ltd | ビデオ信号処理装置、ビデオ信号処理方法及びコンピュータプログラム製品 |
JP2001008210A (ja) * | 1999-04-16 | 2001-01-12 | Sony United Kingdom Ltd | ビデオ信号処理装置、コンピュータプログラム製品及びビデオ信号処理方法 |
JP2001119305A (ja) * | 1999-08-26 | 2001-04-27 | Sony United Kingdom Ltd | 信号処理装置 |
JP2001204035A (ja) * | 1999-11-30 | 2001-07-27 | Thomson Licensing Sa | デジタル・ビデオ復号システム、複数のビデオ・プログラムを順次表示する方法およびユーザーによって選択される次のチャネルを予測する方法 |
JP2002281433A (ja) * | 2001-03-15 | 2002-09-27 | Kddi Corp | 動画像検索閲覧編集装置および記録媒体 |
EP1045589A3 (en) * | 1999-04-16 | 2004-03-03 | Sony United Kingdom Limited | Apparatus and method for splicing of encoded video bitstreams |
WO2006022221A1 (ja) * | 2004-08-25 | 2006-03-02 | Sony Corporation | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP2007059996A (ja) * | 2005-08-22 | 2007-03-08 | Sony Corp | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
US7254175B2 (en) | 1999-07-02 | 2007-08-07 | Crystalmedia Technology, Inc. | Frame-accurate seamless splicing of information streams |
US8817887B2 (en) | 2006-09-05 | 2014-08-26 | Sony Corporation | Apparatus and method for splicing encoded streams |
Families Citing this family (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0725399U (ja) * | 1993-10-19 | 1995-05-12 | 廉正 赤澤 | ミッションオイルクリーナ |
GB2327548B (en) * | 1997-07-18 | 2002-05-01 | British Broadcasting Corp | Switching compressed video bitstreams |
KR100555164B1 (ko) * | 1997-07-25 | 2006-03-03 | 소니 가부시끼 가이샤 | 편집 장치, 편집 방법, 재부호화 장치, 재부호화 방법, 스플라이싱 장치 및 스플라이싱 방법 |
EP0982726A4 (en) * | 1998-01-19 | 2003-06-04 | Sony Corp | CUTTING SYSTEM, CUTTING CONTROL DEVICE AND CUTTING METHOD |
JP2000138896A (ja) * | 1998-10-30 | 2000-05-16 | Hitachi Ltd | 画像音声記録装置 |
JP2000165802A (ja) | 1998-11-25 | 2000-06-16 | Matsushita Electric Ind Co Ltd | ストリーム編集装置と編集方法 |
US8284845B1 (en) | 2000-01-24 | 2012-10-09 | Ati Technologies Ulc | Method and system for handling data |
US6778533B1 (en) | 2000-01-24 | 2004-08-17 | Ati Technologies, Inc. | Method and system for accessing packetized elementary stream data |
US6885680B1 (en) | 2000-01-24 | 2005-04-26 | Ati International Srl | Method for synchronizing to a data stream |
US6988238B1 (en) | 2000-01-24 | 2006-01-17 | Ati Technologies, Inc. | Method and system for handling errors and a system for receiving packet stream data |
US6785336B1 (en) | 2000-01-24 | 2004-08-31 | Ati Technologies, Inc. | Method and system for retrieving adaptation field data associated with a transport packet |
US6763390B1 (en) | 2000-01-24 | 2004-07-13 | Ati Technologies, Inc. | Method and system for receiving and framing packetized data |
JP2001218213A (ja) * | 2000-01-31 | 2001-08-10 | Mitsubishi Electric Corp | 画像信号変換符号化装置 |
JP4170555B2 (ja) * | 2000-02-28 | 2008-10-22 | 株式会社東芝 | 映像符号化装置及び映像符号化方法 |
GB0007868D0 (en) | 2000-03-31 | 2000-05-17 | Koninkl Philips Electronics Nv | Methods and apparatus for editing digital video recordings and recordings made by such methods |
JP2002010259A (ja) * | 2000-06-21 | 2002-01-11 | Mitsubishi Electric Corp | 画像符号化装置及び画像符号化方法及び画像符号化プログラムを記録した記録媒体 |
US7490344B2 (en) * | 2000-09-29 | 2009-02-10 | Visible World, Inc. | System and method for seamless switching |
US7095945B1 (en) * | 2000-11-06 | 2006-08-22 | Ati Technologies, Inc. | System for digital time shifting and method thereof |
US20020133486A1 (en) * | 2001-03-15 | 2002-09-19 | Kddi Corporation | Video retrieval and browsing apparatus, video retrieval, browsing and editing apparatus, and recording medium |
FI111590B (fi) * | 2001-04-20 | 2003-08-15 | Swelcom Oy | Menetelmä ja laite datan lokalisointia varten |
US7349691B2 (en) * | 2001-07-03 | 2008-03-25 | Microsoft Corporation | System and apparatus for performing broadcast and localcast communications |
US6965597B1 (en) * | 2001-10-05 | 2005-11-15 | Verizon Laboratories Inc. | Systems and methods for automatic evaluation of subjective quality of packetized telecommunication signals while varying implementation parameters |
KR100454501B1 (ko) * | 2001-12-26 | 2004-10-28 | 브이케이 주식회사 | 영상신호를 부호화 또는 복호화하기 위한 예측 장치 및 방법 |
KR100475412B1 (ko) * | 2002-03-11 | 2005-03-10 | 주식회사 럭스퍼트 | 상부 펌핑방식의 광소자 |
DE10212656A1 (de) * | 2002-03-21 | 2003-10-02 | Scm Microsystems Gmbh | Selektive Verschlüsselung von Multimediadaten |
US7151856B2 (en) * | 2002-04-25 | 2006-12-19 | Matsushita Electric Industrial Co., Ltd. | Picture coding apparatus and picture coding method |
US9948977B2 (en) | 2003-01-09 | 2018-04-17 | Avago Technologies General Ip (Singapore) Pte. Ltd. | System, method, and apparatus for determining presentation time for picture without presentation time stamp |
US7426306B1 (en) * | 2002-10-24 | 2008-09-16 | Altera Corporation | Efficient use of keyframes in video compression |
FR2848766B1 (fr) * | 2002-12-13 | 2005-03-11 | Thales Sa | Procede de commutation de signaux numeriques avant emission, commutateur et signal resultant |
US8175154B2 (en) * | 2003-06-03 | 2012-05-08 | General Instrument Corporation | Method for restructuring a group of pictures to provide for random access into the group of pictures |
US7924921B2 (en) | 2003-09-07 | 2011-04-12 | Microsoft Corporation | Signaling coding and display options in entry point headers |
US7852919B2 (en) * | 2003-09-07 | 2010-12-14 | Microsoft Corporation | Field start code for entry point frames with predicted first field |
US8213779B2 (en) | 2003-09-07 | 2012-07-03 | Microsoft Corporation | Trick mode elementary stream and receiver system |
US7839930B2 (en) * | 2003-11-13 | 2010-11-23 | Microsoft Corporation | Signaling valid entry points in a video stream |
US7609762B2 (en) * | 2003-09-07 | 2009-10-27 | Microsoft Corporation | Signaling for entry point frames with predicted first field |
US20050060420A1 (en) * | 2003-09-11 | 2005-03-17 | Kovacevic Branko D. | System for decoding multimedia data and method thereof |
JP3675464B2 (ja) * | 2003-10-29 | 2005-07-27 | ソニー株式会社 | 動画像符号化装置および動画像符号化制御方法 |
US9715898B2 (en) * | 2003-12-16 | 2017-07-25 | Core Wireless Licensing S.A.R.L. | Method and device for compressed-domain video editing |
US7391809B2 (en) * | 2003-12-30 | 2008-06-24 | Microsoft Corporation | Scalable video transcoding |
TW200845724A (en) | 2004-06-02 | 2008-11-16 | Matsushita Electric Ind Co Ltd | Multiplexing apparatus and demultiplexing apparatus |
CN1713727B (zh) * | 2004-06-14 | 2010-11-10 | 松下电器产业株式会社 | 编辑资料流的方法及装置 |
KR100608061B1 (ko) * | 2004-07-12 | 2006-08-02 | 삼성전자주식회사 | 전송 스트림 생성을 위한 다중화 방법 및 그 장치 |
CN1998242B (zh) * | 2004-08-11 | 2010-07-07 | 株式会社日立制作所 | 图像编码装置和图像解码装置 |
JP4438059B2 (ja) * | 2004-08-24 | 2010-03-24 | キヤノン株式会社 | 画像再生装置及びその制御方法 |
JP4221667B2 (ja) * | 2004-08-25 | 2009-02-12 | ソニー株式会社 | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP4174728B2 (ja) * | 2004-08-25 | 2008-11-05 | ソニー株式会社 | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP2009504036A (ja) * | 2005-07-28 | 2009-01-29 | トムソン ライセンシング | ビデオチャネルを通して多数のビデオストリームを送信する方法及び装置 |
JP4528694B2 (ja) * | 2005-08-12 | 2010-08-18 | 株式会社東芝 | 動画像符号化装置 |
JP4791129B2 (ja) * | 2005-10-03 | 2011-10-12 | ルネサスエレクトロニクス株式会社 | 画像符号化装置、画像符号化方法及び画像編集装置 |
US20070116117A1 (en) * | 2005-11-18 | 2007-05-24 | Apple Computer, Inc. | Controlling buffer states in video compression coding to enable editing and distributed encoding |
JP4828925B2 (ja) * | 2005-11-30 | 2011-11-30 | パナソニック株式会社 | 符号化装置 |
JP4932242B2 (ja) * | 2005-12-13 | 2012-05-16 | 三菱電機株式会社 | ストリーム切換装置及びストリーム切換方法 |
JP4207072B2 (ja) | 2006-04-07 | 2009-01-14 | ソニー株式会社 | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP4207981B2 (ja) * | 2006-06-13 | 2009-01-14 | ソニー株式会社 | 情報処理装置および情報処理方法、プログラム、並びに記録媒体 |
JP4229149B2 (ja) * | 2006-07-13 | 2009-02-25 | ソニー株式会社 | ビデオ信号処理装置およびビデオ信号処理方法、ビデオ信号符号化装置およびビデオ信号符号化方法、並びにプログラム |
JP2008066851A (ja) * | 2006-09-05 | 2008-03-21 | Sony Corp | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP4369948B2 (ja) * | 2006-09-20 | 2009-11-25 | シャープ株式会社 | 画像表示装置及び方法、画像処理装置及び方法 |
JP4303743B2 (ja) * | 2006-10-04 | 2009-07-29 | シャープ株式会社 | 画像表示装置及び方法、画像処理装置及び方法 |
JP4241839B2 (ja) | 2007-02-02 | 2009-03-18 | ソニー株式会社 | データ及びファイルシステム情報の記録装置及び記録方法 |
JP2009077105A (ja) * | 2007-09-20 | 2009-04-09 | Sony Corp | 編集装置および編集方法、プログラム、並びに記録媒体 |
US20090083811A1 (en) * | 2007-09-26 | 2009-03-26 | Verivue, Inc. | Unicast Delivery of Multimedia Content |
US8457958B2 (en) | 2007-11-09 | 2013-06-04 | Microsoft Corporation | Audio transcoder using encoder-generated side information to transcode to target bit-rate |
US8432804B2 (en) * | 2007-11-29 | 2013-04-30 | Hewlett-Packard Development Company, L.P. | Transmitting video streams |
US8543667B2 (en) | 2008-01-14 | 2013-09-24 | Akamai Technologies, Inc. | Policy-based content insertion |
US8335262B2 (en) * | 2008-01-16 | 2012-12-18 | Verivue, Inc. | Dynamic rate adjustment to splice compressed video streams |
CN102084355B (zh) * | 2008-04-17 | 2016-11-23 | 索尼公司 | 多媒体内容的双类型重放 |
DE102008002005A1 (de) * | 2008-05-27 | 2009-12-03 | Robert Bosch Gmbh | Exzenter-Planetenantrieb |
JP2010004142A (ja) * | 2008-06-18 | 2010-01-07 | Hitachi Kokusai Electric Inc | 動画像符号化装置、復号化装置、符号化方法及び復号化方法 |
US8904426B2 (en) * | 2008-06-30 | 2014-12-02 | Rgb Networks, Inc. | Preconditioning ad content for digital program insertion |
US8347408B2 (en) * | 2008-06-30 | 2013-01-01 | Cisco Technology, Inc. | Matching of unknown video content to protected video content |
US8259177B2 (en) * | 2008-06-30 | 2012-09-04 | Cisco Technology, Inc. | Video fingerprint systems and methods |
US20090327334A1 (en) * | 2008-06-30 | 2009-12-31 | Rodriguez Arturo A | Generating Measures of Video Sequences to Detect Unauthorized Use |
EP2334082A1 (en) * | 2008-09-17 | 2011-06-15 | Sharp Kabushiki Kaisha | Scalable video stream decoding apparatus and scalable video stream generating apparatus |
US20100104022A1 (en) * | 2008-10-24 | 2010-04-29 | Chanchal Chatterjee | Method and apparatus for video processing using macroblock mode refinement |
US20100128779A1 (en) * | 2008-11-14 | 2010-05-27 | Chanchal Chatterjee | Method and apparatus for splicing in a compressed video bitstream |
US8743906B2 (en) * | 2009-01-23 | 2014-06-03 | Akamai Technologies, Inc. | Scalable seamless digital video stream splicing |
US8396114B2 (en) * | 2009-01-29 | 2013-03-12 | Microsoft Corporation | Multiple bit rate video encoding using variable bit rate and dynamic resolution for adaptive video streaming |
US8311115B2 (en) * | 2009-01-29 | 2012-11-13 | Microsoft Corporation | Video encoding using previously calculated motion information |
WO2010093430A1 (en) * | 2009-02-11 | 2010-08-19 | Packetvideo Corp. | System and method for frame interpolation for a compressed video bitstream |
US9565397B2 (en) * | 2009-02-26 | 2017-02-07 | Akamai Technologies, Inc. | Deterministically skewing transmission of content streams |
US9906757B2 (en) * | 2009-02-26 | 2018-02-27 | Akamai Technologies, Inc. | Deterministically skewing synchronized events for content streams |
CN102318345B (zh) * | 2009-02-27 | 2014-07-30 | 富士通株式会社 | 动态图像编码装置、动态图像编码方法 |
US8650602B2 (en) * | 2009-02-27 | 2014-02-11 | Akamai Technologies, Inc. | Input queued content switching using a playlist |
US8270473B2 (en) * | 2009-06-12 | 2012-09-18 | Microsoft Corporation | Motion based dynamic resolution multiple bit rate video encoding |
US8724710B2 (en) * | 2010-02-24 | 2014-05-13 | Thomson Licensing | Method and apparatus for video encoding with hypothetical reference decoder compliant bit allocation |
US8718148B2 (en) * | 2010-03-11 | 2014-05-06 | Sony Corporation | Information processing apparatus, information processing method, and program |
JP2011211691A (ja) * | 2010-03-11 | 2011-10-20 | Sony Corp | 情報処理装置、情報処理方法、およびプログラム |
US8705616B2 (en) | 2010-06-11 | 2014-04-22 | Microsoft Corporation | Parallel multiple bitrate video encoding to reduce latency and dependences between groups of pictures |
AU2011268104B2 (en) | 2010-06-18 | 2016-12-15 | Akamai Technologies, Inc. | Extending a content delivery network (CDN) into a mobile or wireline network |
JP6056122B2 (ja) * | 2011-01-24 | 2017-01-11 | ソニー株式会社 | 画像符号化装置と画像復号装置およびその方法とプログラム |
EP2547062B1 (en) * | 2011-07-14 | 2016-03-16 | Nxp B.V. | Media streaming with adaptation |
US9591318B2 (en) * | 2011-09-16 | 2017-03-07 | Microsoft Technology Licensing, Llc | Multi-layer encoding and decoding |
US11089343B2 (en) | 2012-01-11 | 2021-08-10 | Microsoft Technology Licensing, Llc | Capability advertisement, configuration and control for video coding and decoding |
JP5891975B2 (ja) | 2012-07-02 | 2016-03-23 | 富士通株式会社 | 動画像符号化装置、動画像復号装置、動画像符号化方法および動画像復号方法 |
ITMI20131710A1 (it) * | 2013-10-15 | 2015-04-16 | Sky Italia S R L | "sistema di cloud encoding" |
EP3185564A1 (en) * | 2015-12-22 | 2017-06-28 | Harmonic Inc. | Video stream splicing of groups of pictures (gop) |
CN105657547B (zh) * | 2015-12-31 | 2019-05-10 | 北京奇艺世纪科技有限公司 | 一种相似视频和盗版视频的检测方法及装置 |
US11936712B1 (en) * | 2023-04-06 | 2024-03-19 | Synamedia Limited | Packet-accurate targeted content substitution |
CN118674618A (zh) * | 2024-08-21 | 2024-09-20 | 苏州东方克洛托光电技术有限公司 | 使用图传视频编码信息实现航拍图像快速拼接的方法 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06253331A (ja) * | 1993-03-01 | 1994-09-09 | Toshiba Corp | 可変長符号化信号に対応した編集装置 |
JPH0837640A (ja) * | 1994-07-22 | 1996-02-06 | Victor Co Of Japan Ltd | 画像データ編集装置 |
JPH08149408A (ja) * | 1994-11-17 | 1996-06-07 | Matsushita Electric Ind Co Ltd | ディジタル動画編集方法及び装置 |
JPH10112840A (ja) * | 1996-10-07 | 1998-04-28 | Sony Corp | 編集装置 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5614952A (en) * | 1994-10-11 | 1997-03-25 | Hitachi America, Ltd. | Digital video decoder for decoding digital high definition and/or digital standard definition television signals |
DE69422960T2 (de) * | 1993-12-01 | 2000-06-15 | Matsushita Electric Industrial Co., Ltd. | Verfahren und Vorrichtung zum Editieren oder zur Mischung von komprimierten Bildern |
JPH07212766A (ja) | 1994-01-18 | 1995-08-11 | Matsushita Electric Ind Co Ltd | 動画像圧縮データ切り換え装置 |
GB9424437D0 (en) * | 1994-12-02 | 1995-01-18 | Philips Electronics Uk Ltd | Encoder system level buffer management |
US5612900A (en) * | 1995-05-08 | 1997-03-18 | Kabushiki Kaisha Toshiba | Video encoding method and system which encodes using a rate-quantizer model |
GB2307613B (en) * | 1995-08-31 | 2000-03-22 | British Broadcasting Corp | Switching bit-rate reduced signals |
JP3493872B2 (ja) * | 1996-02-29 | 2004-02-03 | ソニー株式会社 | 画像データ処理方法およびその装置 |
US6137834A (en) * | 1996-05-29 | 2000-10-24 | Sarnoff Corporation | Method and apparatus for splicing compressed information streams |
CA2208950A1 (en) * | 1996-07-03 | 1998-01-03 | Xuemin Chen | Rate control for stereoscopic digital video encoding |
US5982436A (en) * | 1997-03-28 | 1999-11-09 | Philips Electronics North America Corp. | Method for seamless splicing in a video encoder |
US6151443A (en) * | 1997-05-16 | 2000-11-21 | Indigita Corporation | Digital video and data recorder |
US6298088B1 (en) * | 1997-05-28 | 2001-10-02 | Sarnoff Corporation | Method and apparatus for splicing compressed information signals |
US6101195A (en) * | 1997-05-28 | 2000-08-08 | Sarnoff Corporation | Timing correction method and apparatus |
GB2327548B (en) * | 1997-07-18 | 2002-05-01 | British Broadcasting Corp | Switching compressed video bitstreams |
KR100555164B1 (ko) * | 1997-07-25 | 2006-03-03 | 소니 가부시끼 가이샤 | 편집 장치, 편집 방법, 재부호화 장치, 재부호화 방법, 스플라이싱 장치 및 스플라이싱 방법 |
US6611624B1 (en) * | 1998-03-13 | 2003-08-26 | Cisco Systems, Inc. | System and method for frame accurate splicing of compressed bitstreams |
GB9908809D0 (en) * | 1999-04-16 | 1999-06-09 | Sony Uk Ltd | Signal processor |
GB2353655B (en) * | 1999-08-26 | 2003-07-23 | Sony Uk Ltd | Signal processor |
GB2353653B (en) * | 1999-08-26 | 2003-12-31 | Sony Uk Ltd | Signal processor |
- 1998
- 1998-07-27 KR KR1019997002513A patent/KR100555164B1/ko not_active IP Right Cessation
- 1998-07-27 KR KR1020057018094A patent/KR100604631B1/ko not_active IP Right Cessation
- 1998-07-27 EP EP20040076010 patent/EP1467563A1/en not_active Ceased
- 1998-07-27 WO PCT/JP1998/003332 patent/WO1999005864A1/ja active IP Right Grant
- 1998-07-27 EP EP20040076022 patent/EP1445773A1/en not_active Ceased
- 1998-07-27 JP JP50967099A patent/JP3736808B2/ja not_active Expired - Lifetime
- 1998-07-27 CN CNB98801159XA patent/CN1161989C/zh not_active Expired - Lifetime
- 1998-07-27 DE DE69841897T patent/DE69841897D1/de not_active Expired - Lifetime
- 1998-07-27 EP EP19980933940 patent/EP0923243B1/en not_active Expired - Lifetime
- 1999
- 1999-03-25 US US09/275,999 patent/US6567471B1/en not_active Expired - Lifetime
- 2002
- 2002-10-29 US US10/282,784 patent/US7139316B2/en not_active Expired - Lifetime
- 2005
- 2005-05-16 JP JP2005143130A patent/JP2005295587A/ja active Pending
- 2005-05-16 JP JP2005143132A patent/JP4088800B2/ja not_active Expired - Lifetime
- 2005-05-16 JP JP2005143131A patent/JP4088799B2/ja not_active Expired - Lifetime
- 2005-05-16 JP JP2005143129A patent/JP4045553B2/ja not_active Expired - Lifetime
- 2006
- 2006-10-25 US US11/586,245 patent/US7711051B2/en not_active Expired - Fee Related
- 2006-11-01 US US11/591,073 patent/US8798143B2/en not_active Expired - Fee Related
- 2006-11-01 US US11/591,063 patent/US8923409B2/en not_active Expired - Fee Related
- 2006-12-19 US US11/642,369 patent/US8223847B2/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06253331A (ja) * | 1993-03-01 | 1994-09-09 | Toshiba Corp | 可変長符号化信号に対応した編集装置 |
JPH0837640A (ja) * | 1994-07-22 | 1996-02-06 | Victor Co Of Japan Ltd | 画像データ編集装置 |
JPH08149408A (ja) * | 1994-11-17 | 1996-06-07 | Matsushita Electric Ind Co Ltd | ディジタル動画編集方法及び装置 |
JPH10112840A (ja) * | 1996-10-07 | 1998-04-28 | Sony Corp | 編集装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP0923243A4 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2000062551A1 (en) * | 1999-04-14 | 2000-10-19 | Sarnoff Corporation | Frame-accurate seamless splicing of information streams |
JP2000354249A (ja) * | 1999-04-16 | 2000-12-19 | Sony United Kingdom Ltd | ビデオ信号処理装置、ビデオ信号処理方法及びコンピュータプログラム製品 |
JP2001008210A (ja) * | 1999-04-16 | 2001-01-12 | Sony United Kingdom Ltd | ビデオ信号処理装置、コンピュータプログラム製品及びビデオ信号処理方法 |
JP4689001B2 (ja) * | 1999-04-16 | 2011-05-25 | ソニー ヨーロッパ リミテッド | ビデオ信号処理装置、コンピュータプログラム及びビデオ信号処理方法 |
EP1045589A3 (en) * | 1999-04-16 | 2004-03-03 | Sony United Kingdom Limited | Apparatus and method for splicing of encoded video bitstreams |
US6760377B1 (en) | 1999-04-16 | 2004-07-06 | Sony United Kingdom Limited | Signal processing |
EP1058262A2 (en) * | 1999-06-01 | 2000-12-06 | Sony Corporation | Encoding and multiplexing of video streams |
EP1058262A3 (en) * | 1999-06-01 | 2004-04-21 | Sony Corporation | Encoding and multiplexing of video streams |
US7254175B2 (en) | 1999-07-02 | 2007-08-07 | Crystalmedia Technology, Inc. | Frame-accurate seamless splicing of information streams |
JP2001119305A (ja) * | 1999-08-26 | 2001-04-27 | Sony United Kingdom Ltd | 信号処理装置 |
JP2001204035A (ja) * | 1999-11-30 | 2001-07-27 | Thomson Licensing Sa | デジタル・ビデオ復号システム、複数のビデオ・プログラムを順次表示する方法およびユーザーによって選択される次のチャネルを予測する方法 |
JP2012157046A (ja) * | 1999-11-30 | 2012-08-16 | Thomson Licensing | デジタル・ビデオ復号システム、複数のビデオ・プログラムを順次表示する方法およびユーザーによって選択される次のチャネルを予測する方法 |
JP2002281433A (ja) * | 2001-03-15 | 2002-09-27 | Kddi Corp | 動画像検索閲覧編集装置および記録媒体 |
WO2006022221A1 (ja) * | 2004-08-25 | 2006-03-02 | Sony Corporation | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JPWO2006022221A1 (ja) * | 2004-08-25 | 2008-05-08 | ソニー株式会社 | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP4743119B2 (ja) * | 2004-08-25 | 2011-08-10 | ソニー株式会社 | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
US8295347B2 (en) | 2004-08-25 | 2012-10-23 | Sony Corporation | Information processing apparatus and information processing method, recording medium, and program |
KR101194967B1 (ko) * | 2004-08-25 | 2012-10-25 | 소니 주식회사 | 정보 처리 장치 및 정보 처리 방법, 및 기록 매체 |
JP2007059996A (ja) * | 2005-08-22 | 2007-03-08 | Sony Corp | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP4492484B2 (ja) * | 2005-08-22 | 2010-06-30 | ソニー株式会社 | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
US8311104B2 (en) | 2005-08-22 | 2012-11-13 | Sony Corporation | Information processing apparatus and method, recording medium, and program |
KR101450864B1 (ko) * | 2005-08-22 | 2014-10-15 | 소니 주식회사 | 정보 처리 장치, 정보 처리 방법 및 기록 매체 |
US8817887B2 (en) | 2006-09-05 | 2014-08-26 | Sony Corporation | Apparatus and method for splicing encoded streams |
Also Published As
Publication number | Publication date |
---|---|
US20070047662A1 (en) | 2007-03-01 |
JP3736808B2 (ja) | 2006-01-18 |
JP2005323386A (ja) | 2005-11-17 |
JP2005295587A (ja) | 2005-10-20 |
JP4088799B2 (ja) | 2008-05-21 |
US8923409B2 (en) | 2014-12-30 |
KR20050103248A (ko) | 2005-10-27 |
KR20000068626A (ko) | 2000-11-25 |
EP0923243A1 (en) | 1999-06-16 |
CN1236522A (zh) | 1999-11-24 |
EP1467563A1 (en) | 2004-10-13 |
DE69841897D1 (de) | 2010-10-28 |
US20030067989A1 (en) | 2003-04-10 |
JP2005253116A (ja) | 2005-09-15 |
US7711051B2 (en) | 2010-05-04 |
CN1161989C (zh) | 2004-08-11 |
KR100604631B1 (ko) | 2006-07-28 |
US8798143B2 (en) | 2014-08-05 |
KR100555164B1 (ko) | 2006-03-03 |
EP1445773A1 (en) | 2004-08-11 |
US20070047661A1 (en) | 2007-03-01 |
JP2005328548A (ja) | 2005-11-24 |
US20070165715A1 (en) | 2007-07-19 |
EP0923243A4 (en) | 2002-12-04 |
US20070058729A1 (en) | 2007-03-15 |
JP4045553B2 (ja) | 2008-02-13 |
US6567471B1 (en) | 2003-05-20 |
JP4088800B2 (ja) | 2008-05-21 |
EP0923243B1 (en) | 2010-09-15 |
US8223847B2 (en) | 2012-07-17 |
US7139316B2 (en) | 2006-11-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP3736808B2 (ja) | 編集装置、編集方法、再符号化装置及び再符号化方法 | |
US6611624B1 (en) | System and method for frame accurate splicing of compressed bitstreams | |
US8260122B2 (en) | MPEG picture data recording apparatus, MPEG picture data recording method, MPEG picture data recording medium, MPEG picture data generating apparatus, MPEG picture data reproducing apparatus, and MPEG picture data reproducing method | |
KR20060045719A (ko) | 고충실도 트랜스코딩 | |
JPH11511605A (ja) | ビットレートを低減された信号間のスイッチング方法 | |
US20030002583A1 (en) | Transcoding of video data streams | |
JP2000059790A (ja) | 動画像符号列変換装置及びその方法 | |
US6483945B1 (en) | Moving picture encoding method and apparatus | |
WO2004102972A1 (ja) | 画像処理装置および画像処理方法、情報処理装置および情報処理方法、情報記録装置および情報記録方法、情報再生装置および情報再生方法、記録媒体、並びに、プログラム | |
WO1999036912A1 (fr) | Systeme d'edition, dispositif de commande d'edition et procede de commande d'edition | |
US6683911B1 (en) | Stream editing apparatus and stream editing method | |
JP4342139B2 (ja) | 信号処理装置 | |
JPH08251582A (ja) | 符号化データ編集装置 | |
WO2004112397A1 (ja) | 画像処理装置および画像処理方法、情報処理装置および情報処理方法、情報記録装置および情報記録方法、情報再生装置および情報再生方法、記録媒体、並びに、プログラム | |
JP4193224B2 (ja) | 動画像符号化装置及び方法並びに動画像復号装置及び方法 | |
JP2005278207A (ja) | 編集装置および方法、再符号化装置および方法 | |
JP2002218470A (ja) | 画像符号化データのレート変換方法、及び画像符号化レート変換装置 | |
JP2009049826A (ja) | 符号化装置、符号化方法、符号化方法のプログラム及び符号化方法のプログラムを記録した記録媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 98801159.X; Country of ref document: CN |
| AK | Designated states | Kind code of ref document: A1; Designated state(s): CN JP KR MX US |
| AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
| WWE | Wipo information: entry into national phase | Ref document number: 1019997002513; Country of ref document: KR |
| WWE | Wipo information: entry into national phase | Ref document number: 09275999; Country of ref document: US; Ref document number: PA/a/1999/002866; Country of ref document: MX |
| WWE | Wipo information: entry into national phase | Ref document number: 1998933940; Country of ref document: EP |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| WWP | Wipo information: published in national office | Ref document number: 1998933940; Country of ref document: EP |
| WWP | Wipo information: published in national office | Ref document number: 1019997002513; Country of ref document: KR |
| WWG | Wipo information: grant in national office | Ref document number: 1019997002513; Country of ref document: KR |