WO1999036912A1 - Editing system, editing control device, and editing control method - Google Patents
Editing system, editing control device, and editing control method
- Publication number
- WO1999036912A1 (PCT/JP1999/000151)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- editing
- picture
- encoded
- baseband
- decoding
- Prior art date
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/40—Combinations of multiple record carriers
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B2220/00—Record carriers by type
- G11B2220/40—Combinations of multiple record carriers
- G11B2220/41—Flat as opposed to hierarchical combination, e.g. library of tapes or discs, CD changer, or groups of record carriers that together store one title
Definitions
- The present invention relates to an editing system, an editing control device, and an editing control method applied when handling a bitstream.
- The main advantage of applying compression technology such as MPEG to video materials in broadcast stations and the like is that it saves storage capacity in the video material archiver/server.
- In the conventional transmission system shown in FIG. 15, the encoder 110 side has, as the configuration related to the video data, a video encoder 111 that encodes the input video data D_V and outputs a video elementary stream ES, and a packetizer 112 that packetizes the video elementary stream ES output from the video encoder 111, adds a header and the like, and outputs a video packetized elementary stream PES.
- Likewise, an audio encoder 113 outputs an audio elementary stream ES, and a packetizer 114 packetizes the audio elementary stream ES output from the audio encoder 113, adds a header and the like, and outputs an audio packetized elementary stream PES.
- The elementary streams from the packetizers 112 and 114 are multiplexed by a multiplexer 115 into transport stream packets of 188 bytes each, which are output as a transport stream TS.
- On the decoding side 120, a demultiplexer 121 separates the transport stream received via the transmission medium 116 into a video PES and an audio PES and outputs them. Depacketizers 122 and 124 decompose the packets of the video PES and the audio PES, respectively, and a video decoder 123 and an audio decoder 125 decode the video ES and the audio ES from the depacketizers 122 and 124.
- The video decoder 123 outputs the baseband video signal D_V, and the audio decoder 125 outputs the audio signal.
- Such a decoding side 120 is called an IRD (Integrated Receiver/Decoder).
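As a concrete picture of the 188-byte transport stream packets produced by the multiplexer 115, the sketch below builds one packet with adaptation-field stuffing. It is a minimal model of the MPEG-2 packet layout only (no PCR, scrambling, or PSI tables), and the function name is ours, not the patent's.

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def make_ts_packet(pid, payload, continuity_counter, payload_unit_start=False):
    """Build one 188-byte MPEG-2 transport stream packet.

    Simplified: when the payload is shorter than 184 bytes, the gap is
    filled with adaptation-field stuffing (one flags byte, then 0xFF)."""
    assert 0 <= pid < 0x2000 and len(payload) <= 184
    header = bytearray(4)
    header[0] = SYNC_BYTE
    header[1] = (0x40 if payload_unit_start else 0x00) | (pid >> 8)
    header[2] = pid & 0xFF
    stuffing = 184 - len(payload)
    if stuffing == 0:
        afc = 0x1                               # payload only
        body = bytes(payload)
    else:
        afc = 0x3                               # adaptation field + payload
        af_len = stuffing - 1                   # the length byte itself is not counted
        af = bytes([af_len])
        if af_len > 0:
            af += b"\x00" + b"\xff" * (af_len - 1)   # flags byte + stuffing bytes
        body = af + bytes(payload)
    header[3] = (afc << 4) | (continuity_counter & 0x0F)
    packet = bytes(header) + body
    assert len(packet) == TS_PACKET_SIZE
    return packet

# One PES fragment on an illustrative PID; a demultiplexer such as the
# one on the decoding side selects packets by exactly this PID value.
pkt = make_ts_packet(0x100, b"\x00\x00\x01\xe0" + b"\x00" * 100, 0, True)
```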
- Focusing on the video data: the pictures of the input video data D_V each carry the same bit amount, but when the input video data D_V is encoded by the video encoder 111, each picture is converted into a bit amount corresponding to its redundancy and is output as a video elementary stream.
- The transport stream multiplexer 115 multiplexes the video packetized elementary stream from the packetizer 112 and the audio packetized elementary stream from the packetizer 114 into transport stream packets, and sends the result as the transport stream TS to the decoding device 120 via the transmission medium 116.
- In the decoding device 120, the transport stream is separated into the video packetized elementary stream and the audio packetized elementary stream by the transport stream demultiplexer 121.
- The depacketizer 122 depacketizes the video packetized elementary stream and outputs it as a video elementary stream, and the video decoder 123 decodes the video elementary stream and outputs the video data D_V.
- The decoding device 120 extracts a variable bit amount for each reproduced picture from a stream arriving at a fixed rate; this control relies on, for example, a 1.75 Mbit VBV (Video Buffering Verifier) buffer. The encoding device 110 must therefore control the bit generation amount of each picture so that the VBV buffer neither overflows nor underflows. Such control is called VBV buffer processing.
- MPEG performs coding using inter-frame correlation in GOP (Group Of Pictures) units, and bit rate control (VBV buffer processing) is performed so as to satisfy the buffer conditions of the destination IRD, with the bit rate controlled according to the capacity of the transmission path.
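The VBV buffer processing just described can be pictured with a coarse simulation of the decoder-side buffer. The sketch below assumes a constant channel rate, instantaneous removal of each picture's bits at decode time, and a buffer that starts full; apart from the 1.75 Mbit size mentioned above, the parameter values are illustrative.

```python
def simulate_vbv(picture_bits, bit_rate, vbv_size=1_750_000, frame_rate=30):
    """Coarse VBV model: the decoder buffer fills at the channel rate and
    each picture's bits are removed instantaneously at its decode time.
    Raises on underflow; clips at the top, where a constant-rate encoder
    would have had to stop delivering bits (the overflow condition)."""
    fullness = vbv_size                     # assume the buffer starts full
    per_frame = bit_rate / frame_rate       # bits arriving per frame period
    trace = []
    for n, bits in enumerate(picture_bits):
        if bits > fullness:
            raise RuntimeError(f"VBV underflow at picture {n}")
        fullness -= bits                    # picture pulled out for decoding
        fullness = min(fullness + per_frame, vbv_size)
        trace.append(fullness)
    return trace

# A 4 Mbit/s stream whose pictures all hit the per-frame budget exactly:
occupancy = simulate_vbv([4_000_000 // 30] * 90, bit_rate=4_000_000)
```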
- FIG. 16 shows the interface between the master archiver/server holding MPEG streams and the editing studio in the case where the intra-station transmission is baseband.
- Reference numeral 101 denotes the master archiver/server in the station.
- The archiver/server 101 is non-linear: to reduce the required capacity, it accumulates the material as MPEG-compressed streams in its storage unit.
- The archiver and the server both store video material; the archiver is a device dedicated to storage, whereas the server is a device that outputs video material in response to external requests.
- The archiver/server 101 is provided with an MPEG decoder for decoding the MPEG streams from the storage unit.
- The baseband video data S1 and S2 formed by the MPEG decoder are input to the editing studio 102; here the transmission protocol of the intra-station transmission line is baseband.
- In the editing studio 102, editing (splice editing, AB roll, etc.) is performed, and the edited result is passed to the archiver/server 103.
- The archiver/server 103 is provided with an MPEG encoder, and the edited result is stored in its storage unit as an MPEG stream.
- FIG. 17 shows an example of the configuration of the editing studio 102. Since baseband video data has a large data capacity (a high bit rate), tape-shaped media are used as the recording media: the video data S1 is recorded on the linear storage 104a, and the video data S2 on the linear storage 104b.
- The linear storages 104a and 104b are used as players, and the video data Sa and Sb are supplied to the baseband editor and switcher 105.
- The edited video data Sc is recorded on the linear storage 104c, which is used as a recorder, and the edited video data S3 is output from the linear storage 104c.
- As shown in FIG. 18, the editing studio 102 can also be built from non-linear storages 106a, 106b and 106c that use non-linear recording media (hard disks, optical discs, etc.).
- However, handling baseband signals on non-linear recording media requires a large capacity and is expensive, so the configuration of FIG. 18, in which non-linear baseband storage is placed in each editing studio, is not practical.
- Furthermore, a decoding-encoding chain occurs each time editing is performed, so the image quality of the material deteriorates and the deterioration accumulates.
- FIG. 19 shows the interface between the master archiver/server and the editing studio when the transmission protocol of the intra-station transmission path is an MPEG stream.
- The archiver/servers 131 and 133 store the material as MPEG streams.
- The archiver/server 131 outputs an MPEG stream to the editing studio 132, and an MPEG stream from the editing studio 132 is input to the archiver/server 133; accordingly, these archiver/servers need no MPEG decoders or MPEG encoders.
- By transmitting the video material as MPEG streams, two or more video materials can be multiplexed into the streams TS1 and TS2.
- The streams TS1 and TS2 may be either elementary streams or transport streams.
- An example of the editing studio 132 in the system of FIG. 19 and another example are shown in FIGS. 20 and 21, respectively.
- In FIG. 20, the streams TS1a and TS1b are separated from the stream TS1 and are converted into baseband signals by the MPEG decoders 134a and 134b, respectively.
- These baseband signals are recorded on the linear storages 135a and 135b, respectively.
- The baseband video data Sa and Sb, obtained by using the linear storages 135a and 135b as players, are input to the baseband editor and switcher 136.
- The video data Sc, the edited result from the baseband editor and switcher 136, is recorded by the linear storage 135c used as a recorder.
- The video data from the linear storage 135c is encoded by the MPEG encoder 134c and output as the MPEG stream TS2.
- In the other example of the editing studio 132, shown in FIG. 21, the linear storages 135a, 135b and 135c are replaced by non-linear storages 137a, 137b and 137c.
- In the system of FIG. 21 as well, the intra-station transmission path can carry an MPEG stream, which can easily be multiplexed into multiple channels.
- However, a decoding-encoding chain still occurs each time editing is performed, inviting an accumulation of image quality deterioration of the material that cannot be ignored.
- Handling baseband signals on non-linear recording media requires a large capacity and is expensive, so the configuration of FIG. 21, in which baseband non-linear storage is arranged per editing studio, is not realistic. If one wishes to avoid the material degradation caused by decoding-encoding chains, one would naturally archive the material in baseband form; but then the data amount of the video material becomes large, and it becomes difficult to store the video material on non-linear recording media.
- Transcoding has therefore been proposed: the necessary information (referred to as codec information) is multiplexed into the baseband signal, and the codec information is reused during re-encoding to improve the accuracy of image reconstruction.
- The codec information includes information such as motion vectors, quantization steps, and picture types.
- The codec information is not a small amount of information, so the baseband signal does not have a sufficient auxiliary area to carry all of it; the codec information that cannot be multiplexed there has had to be multiplexed into the effective image area or transmitted over a separate line.
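To see why the auxiliary area falls short, the codec information can be pictured as one record per picture with an entry per macroblock, as in the sketch below (the field names are illustrative, not from the patent). A 720×480 picture contains 45×30 = 1350 macroblocks, so even a few bytes per macroblock far exceeds the ancillary space of a baseband frame.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class MacroblockInfo:
    motion_vector: Optional[Tuple[int, int]]   # None for intra macroblocks
    quantizer_scale: int

@dataclass
class PictureCodecInfo:
    picture_type: str                          # 'I', 'P' or 'B'
    temporal_reference: int
    macroblocks: List[MacroblockInfo]          # raster order, e.g. 45 x 30
```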
- FIG. 22 shows an editing studio that uses transcoding to address the problem that a decoding-encoding chain occurs at every edit and causes a non-negligible accumulation of image quality deterioration of the material.
- FIG. 22 is a configuration in which the codec information is transmitted on a path separate from the material signal line.
- The streams TS1a and TS1b are converted into baseband signals by the MPEG decoders 134a and 134b, respectively, and the baseband video data Sa and Sb are input to the baseband editor and switcher 136. The video data Sc edited by the baseband editor and switcher 136 is re-encoded by the MPEG encoder 134c and output as the MPEG stream TS2.
- Information detectors detect the codec information from the streams or from the MPEG decoders 134a and 134b, signal lines transmit the codec information to the encoder, and an information estimator applies the codec information in the encoder.
- In addition, a codec information adapter 144 that organically combines the codec information with the editing information of the baseband editor and switcher 136 is provided. When the codec information is transmitted on a separate line in this way, the editor and switcher 136 must not only edit but also handle the codec information sent from a separate system, and a special configuration such as the codec information adapter 144 has to be added. In other words, a problem arises in that an existing editing studio that handles baseband signals cannot be used.
- FIG. 23 shows the configuration of an editing studio in which the codec information is multiplexed into the effective signal area of the baseband signal in order to solve this problem.
- Information detectors 144a and 144b are provided to detect the codec information from the input streams TS1a and TS1b or from the decoders 134a and 134b, respectively.
- The detected codec information is multiplexed with the baseband video data Sa and Sb in the respective importers.
- The baseband signals in which the codec information has been multiplexed are input to the baseband editor and switcher 136.
- As the multiplexing method, a method can be adopted in which the codec information is randomized and superimposed as the least significant bit of each sample of the video data.
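A minimal sketch of such randomized LSB superimposition, assuming 8-bit samples held in a NumPy array and a whitening seed shared by the embedding and extracting sides (all names are ours):

```python
import numpy as np

def embed_lsb(frame, info_bits, seed=1234):
    """Superimpose codec-information bits on the LSB of each 8-bit video
    sample, XOR-whitened with a seeded pseudo-random sequence so the
    embedded data looks like noise."""
    rng = np.random.default_rng(seed)
    flat = frame.reshape(-1).copy()
    bits = np.asarray(info_bits, dtype=np.uint8)
    whitener = rng.integers(0, 2, size=bits.size, dtype=np.uint8)
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | (bits ^ whitener)
    return flat.reshape(frame.shape)

def extract_lsb(frame, n_bits, seed=1234):
    """Recover the embedded bits with the same seed.  Any process that
    rewrites the LSBs (e.g. a VTR using non-MPEG compression, as noted
    below) destroys the recovered data."""
    rng = np.random.default_rng(seed)
    whitener = rng.integers(0, 2, size=n_bits, dtype=np.uint8)
    return (frame.reshape(-1)[:n_bits] & 1) ^ whitener
```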
- The codec information remains multiplexed in the video data output from the baseband editor and switcher 136. This video data is supplied to the separator 146, where the codec information is separated; the video data Sc from the separator 146 is re-encoded in the MPEG encoder 134c, and the codec information from the separator 146 is used at the time of the re-encoding.
- FIG. 24 shows a configuration in which, relative to the configuration of FIG. 23, the stream TS1 is first recorded and the reproduced stream is output to the MPEG decoders 134a and 134b: a non-linear storage 147 for this purpose and a non-linear storage 148 for recording the re-encoded stream from the MPEG encoder 134c are added.
- In the configuration in which the codec information is multiplexed into the baseband signal and transmitted, the baseband editor and switcher 136 need not have special equipment such as a codec information adapter. However, the technique of inserting the codec information into the effective video signal area distorts the video and impairs the S/N, even if the information is converted into random data before being multiplexed.
- Moreover, as shown in FIG. 23, when the codec information is multiplexed into the baseband signal, a configuration for the multiplexing must be provided in the editing studio.
- FIG. 25 shows an example of a configuration in which the codec information is instead multiplexed and demultiplexed within the archiver/servers.
- In the archiver/server on the supply side, MPEG decoders 155a and 155b for decoding the MPEG streams from the storage unit 154, information detectors 156a and 156b for detecting the codec information from the streams, and importers 157a and 157b for multiplexing the codec information with the baseband video data are installed.
- The baseband video data S11 and S12, in which the codec information is multiplexed, are input to the editing studio 152.
- The editing studio 152 handles baseband signals and, like the configuration of FIG. 24 described above, includes linear storage and a baseband editor and switcher.
- The archiver/server 153, which stores the video data of the edited result, receives the baseband video data S13 in which the codec information from the editing studio 152 is multiplexed.
- The codec information is separated by the separator 158, and the MPEG encoder 159 performs the re-encoding using that codec information; the stream from the MPEG encoder 159 is stored in the storage unit 160.
- In reality, however, the configuration of FIG. 25 does not function properly.
- In the editing studio 152, the video data is recorded on recording media that are already widespread, such as baseband VTRs (Video Tape Recorders).
- Existing VTRs do not support extracting the codec information, storing it, and passing it on to the next stage.
- Furthermore, most digital VTRs currently in widespread use employ a compression method different from MPEG, so information multiplexed into the effective signal area undergoes compression and decompression in the same way as the video data.
- The codec information is therefore subjected to the same processing, is distorted, and can no longer be used as codec information. For example, even if the codec information is superimposed on the least significant bit of the video data, the least significant bit is changed by the VTR's compression/expansion processing.
- If, instead, the configuration of FIG. 23 or FIG. 24 is adopted and a stream is transmitted, additional components such as an MPEG decoder and an encoder for re-encoding must be installed in the editing studio.
- This eliminates the possibility of interfacing an existing VTR with a baseband signal in which the codec information is multiplexed.
- And as long as the codec information is multiplexed into the effective image area, the problem that the video is distorted and the S/N impaired cannot be solved.
- Accordingly, one object of the present invention is to provide an editing system, an editing control device, and an editing control method that make effective use of storage media and transmission media, suppress the deterioration of image quality, and allow an existing baseband editor to be used.
- Another object of the present invention is to provide an editing system, an editing control device, and an editing control method that can detect the editing position without receiving editing position information from the editor.
- Still another object of the present invention is to provide an editing system, an editing control device, and an editing control method that can use codec information for re-encoding in units finer than a picture, and thereby prevent deterioration of image quality due to the re-encoding.
- The invention of claim 1 is an editing system comprising an editor for editing baseband signals and an editing controller connected to the editor, the editing controller including: first decoding means for decoding a first encoded bitstream, in which material has been encoded, and outputting a first baseband signal to the editor; second decoding means for decoding a second encoded bitstream, in which material has been encoded, and outputting a second baseband signal to the editor; encoding means for re-encoding a third baseband signal, resulting from editing the first and second baseband signals, by using the codec information used in the first and second decoding means, and outputting a third encoded bitstream; and control means for selecting the codec information to be used in the encoding means based on editing position information received from another device.
- The invention of claim 8 is an editing control device comprising: first decoding means for decoding a first encoded bitstream, in which material is encoded, and outputting a first baseband signal to the editor; second decoding means for decoding a second encoded bitstream, in which material is encoded, and outputting a second baseband signal to the editor; encoding means for re-encoding the third baseband signal, resulting from the editor's editing of the first and second baseband signals, by using the codec information used in the first and second decoding means, and outputting a third encoded bitstream; and control means for selecting the codec information used in the encoding means based on the editing position information.
- In the editing control method of the invention, a first encoded bitstream in which a first material is encoded and a second encoded bitstream in which a second material is encoded are input, and first and second baseband signals obtained by decoding the first and second encoded bitstreams, respectively, are sent to the editor; the third baseband signal resulting from the editing is re-encoded using the codec information selected on the basis of the editing position information, and a third encoded bitstream is output.
- The invention of claim 16 is an editing system comprising an editor for editing baseband signals and an editing controller connected to the editor, the editing controller including: first decoding means for decoding a first encoded bitstream, in which material is encoded, and outputting a first baseband signal to the editor; second decoding means for decoding a second encoded bitstream, in which material is encoded, and outputting a second baseband signal to the editor; comparison means for detecting the editing position by comparing the first and second baseband signals with the third baseband signal with their phases matched; control means for selecting, from the information on the editing position, the codec information to be used in the re-encoding; and encoding means for re-encoding the third baseband signal, resulting from the editor's editing of the first and second baseband signals, using the selected codec information, and outputting a third encoded bitstream.
- The invention of claim 19 is an editing control device comprising: first decoding means for decoding a first encoded bitstream, in which material is encoded, and outputting a first baseband signal; second decoding means for decoding a second encoded bitstream, in which material is encoded, and outputting a second baseband signal to the editor; comparison means for detecting the editing position by comparing the first and second baseband signals with the third baseband signal with their phases matched; control means for selecting, based on the information on the editing position, the codec information to be used in the re-encoding; and encoding means for re-encoding the third baseband signal, resulting from the editor's editing of the first and second baseband signals, using the selected codec information, and outputting a third encoded bitstream.
- The invention of claim 22 is an editing control method in which a first encoded bitstream, in which a first material is encoded, and a second encoded bitstream, in which a second material is encoded, are input, and first and second baseband signals obtained by decoding the first and second encoded bitstreams, respectively, are sent to the editor.
- The editing position is detected by comparing the first and third baseband signals with their phases matched and by comparing the second and third baseband signals with their phases matched.
- The third baseband signal is re-encoded using the codec information selected on the basis of the detected editing position, and a third encoded bitstream is output.
- Since the input/output signal form of the editing control device is an encoded bitstream, it is easy to multiplex the encoded data of a plurality of video materials, and the transmission medium can be used effectively.
- The editing control device interfaces with the editor using baseband signals, and no codec information is multiplexed into those baseband signals. Nor does the codec information used for transcoding have to be transmitted on a separate signal line, so an increase in the number of signal lines is prevented and the editor is not made to handle the codec information.
- An existing baseband editing device can therefore be used as the editor without modification.
- Also, the editing position can be detected by comparing the first and second baseband signals output to the editor with the third baseband signal returned from the editor, with their phases matched. The line for transmitting editing position information to the editing control device can therefore be omitted, and there is no need to translate the editing position information onto the stream time axis.
- Furthermore, whether the codec information can be used for re-encoding can be determined not only per picture but also per block. Even if the picture at the edit point is a mixture of the two original materials, the deterioration of image quality due to the re-encoding can be suppressed.
- FIG. 1 is a block diagram showing the entire intra-station system according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing an example of the editing studio in one embodiment of the present invention.
- FIG. 3 is a block diagram showing another example of the editing studio in one embodiment of the present invention.
- FIG. 4 is a block diagram showing an example of a broadcast network to which the present invention can be applied.
- FIG. 5 is a block diagram showing an example of the splicer/transcoder that is the main part of one embodiment of the present invention.
- FIG. 6 is a block diagram showing an example of the configuration of the management information generation unit in the splicer/transcoder.
- FIG. 7 is a block diagram showing another example of the splicer/transcoder that is the main part of one embodiment of the present invention.
- FIG. 8 is a block diagram showing an example of the splicer/transcoder that is the main part of another embodiment of the present invention.
- FIG. 9 is a schematic diagram for explaining the time relationship of the baseband signals and the process of reusing codec information.
- FIG. 10 is a schematic diagram showing the relationship between pictures and macroblocks.
- FIG. 11 is a flowchart showing the determination process for reusing codec information.
- FIG. 12 is a flowchart showing the picture subroutine in FIG. 11.
- FIG. 13 is a flowchart showing the macroblock subroutine in FIG. 11.
- FIG. 14 is a schematic diagram for explaining the reuse of codec information in macroblock units.
- FIG. 15 is a block diagram of a conventional MPEG encoding/decoding system.
- FIG. 16 is a block diagram showing an example of the configuration of an intra-station system for reference in the description of the present invention.
- FIG. 17 is a block diagram showing an example of the editing studio in FIG. 16.
- FIG. 18 is a block diagram showing another example of the editing studio in FIG. 16.
- FIG. 19 is a block diagram showing another example of the configuration of an intra-station system for reference in the description of the present invention.
- FIG. 20 is a block diagram showing an example of the editing studio in FIG. 19.
- FIG. 21 is a block diagram showing another example of the editing studio in FIG. 19.
- FIG. 22 is a block diagram showing an example of the configuration of an editing studio for reference in the description of the present invention.
- FIG. 23 is a block diagram showing another example of the configuration of an editing studio for reference in the description of the present invention.
- FIG. 24 is a block diagram showing a configuration obtained by adding non-linear storage to the configuration of FIG. 23.
- FIG. 25 is a block diagram showing the configuration of an intra-station system referred to in the description of the present invention.
- FIG. 1 shows the configuration of an editing system according to the present invention.
- The archiver/servers 1 and 3 include storage units 4 and 5, respectively, for storing video material as encoded bitstreams, for example MPEG streams. Since the data is MPEG-compressed, non-linear recording media can be used as the storage units 4 and 5.
- The archiver and the server both store video material; the archiver is a device dedicated to storage, whereas the server is a device that outputs video material in response to external requests. The present invention uses them in their common role as video storage and can be applied to either; in that sense the term archiver/server is used here.
- The transmission paths between the archiver/server 1, the editing studio 2, and the archiver/server 3 carry encoded bitstreams, for example MPEG streams.
- The stream TS2 is the stream of the edited result; if necessary, two or more original video/audio materials can be multiplexed together with the edited result.
- The streams TS1 and TS2 are transport streams here, but may be elementary streams or packetized elementary streams.
- The editing studio 2 is configured as shown in FIG. 2 or FIG. 3.
- In both configurations there are provided a splicer/transcoder 21, to which the stream TS1 is input and from which the stream TS2 is output, and a baseband editor and switcher 22 having a baseband interface, to which the baseband video data Sa and Sb are input and which outputs the edited video data Sc.
- The splicer/transcoder 21 is an editing control device, and the baseband editor and switcher 22 is an editor.
- The splicer/transcoder 21 basically performs a decoding process that converts the input stream into the baseband signals output to the editor and switcher 22, and a transcoding (re-encoding) process that converts the baseband signal returned from the editor and switcher 22 into the output stream.
- It is also possible to perform the transcoding only for a predetermined period including the edit point and to switch between the input stream and the transcoded output; in other words, the device may have a splicer function. It is therefore called a splicer/transcoder.
- Since the input/output signal form of the splicer/transcoder 21 is an MPEG stream, it is easy to increase the number of channels, and transmission resources are used effectively.
- The baseband editor and switcher 22 interfaces with the splicer/transcoder 21 using baseband signals.
- The splicer/transcoder 21 performs the transcoding and therefore need not output the codec information required for re-encoding to the editor and switcher 22. Consequently, an editing system can be constructed using an existing baseband editing device, as-is, as the baseband editor and switcher 22.
- The splicer/transcoder 21 associates each MPEG picture (including its codec information) contained in the input stream TS1 with a frame (or field) of the baseband input/output.
- Time codes based on the correspondence established in the splicer/transcoder 21 between MPEG pictures and time codes are exchanged over a bidirectional signal line (not shown) provided between the splicer/transcoder 21 and the baseband editor and switcher 22.
- The time management information of the MPEG pictures used in the splicer/transcoder 21 and the time management information (time codes) used in the editing process of the editor and switcher 22 therefore correspond one-to-one.
- The time at which the edited baseband signal Sc returns from the baseband editor and switcher 22 is the baseband output time plus the system delay of the editor and switcher 22.
- The association between the codec information for re-encoding and the frames of the returned baseband signal Sc can therefore easily be obtained by recording the output time of the baseband signal from the splicer/transcoder 21.
- The splicer/transcoder 21 receives editing position information such as cue information from the baseband editor and switcher 22, or from a host CPU or control machine controlling the baseband editor and switcher 22, and associates it with the MPEG pictures. That is, the splicer/transcoder 21 detects the edited frames based on the cue information and selects the codec information to be used at re-encoding. The codec information is information such as motion vectors, picture types, quantization step sizes, and quantization scales.
- For this association, a comparison table showing the relationship between the PTS (Presentation Time Stamp) and the time code can be used; such a table can be placed in a user area, such as an extension, on the stream syntax.
- Alternatively, the time code itself, associated with each MPEG picture, may be inserted into the stream and transmitted; in this case no comparison table is required.
- Nor is the time code itself indispensable: a picture index corresponding one-to-one to the time code may be transmitted instead, as long as the time can be specified with sufficient precision.
- Furthermore, the time code can be derived from information such as the PTS, the picture type, the GOP, and the repeat_first_field flag associated with pull-down and field flip, which form part of the syntax (the rules of the encoded data stream) of the MPEG stream.
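A sketch of the PTS-to-time-code mapping implied here, assuming a 90 kHz PTS clock, an integer frame rate, and a known PTS value for time-code zero; drop-frame time code and pull-down (repeat_first_field) are deliberately ignored:

```python
PTS_CLOCK_HZ = 90_000          # MPEG system clock used for PTS values

def pts_to_timecode(pts, frame_rate=30, pts_origin=0):
    """Map a 90 kHz PTS onto an hh:mm:ss:ff time code."""
    frames_total = (pts - pts_origin) * frame_rate // PTS_CLOCK_HZ
    ff = frames_total % frame_rate
    ss = frames_total // frame_rate
    return f"{ss // 3600:02d}:{ss % 3600 // 60:02d}:{ss % 60:02d}:{ff:02d}"

def timecode_to_pts(tc, frame_rate=30, pts_origin=0):
    """Inverse mapping, used when cue information arrives as a time code."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    frames_total = ((hh * 60 + mm) * 60 + ss) * frame_rate + ff
    return pts_origin + frames_total * PTS_CLOCK_HZ // frame_rate
```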
- FIG. 4 shows a schematic configuration when the present invention is applied to a broadcasting system. A main station 31 and a plurality of local stations 32a, 32b, 32c, 32d, ... are connected via a transmission network 33.
- An MPEG bitstream is transmitted over the network; multiple channels can be multiplexed into the MPEG bitstream.
- The main station 31 is provided with an antenna 35 for receiving radio waves from a communication or broadcasting satellite 34.
- The program material received by the antenna 35, live material sent from the field via a microwave line 36, program material from the archiver/server 41 in the station, and CM material are input to the splicer/transcoder 42 in the form of MPEG streams.
- The splicer/transcoder 42 has a baseband interface with the baseband editor and switcher 43.
- The splicer/transcoder 42 switches between the input program materials to create a broadcast program (an MPEG bitstream). This broadcast program is distributed from the main station 31 to the local stations 32a, 32b, ... via the network 33.
- In the local station 32a, the MPEG stream received from the main station 31 and the CM material from the CM (commercial) server 46a are input to the splicer/transcoder 44a. A baseband interface connects the splicer/transcoder 44a and the CM insertion scheduler 45a.
- The CM server 46a stores the CM materials of the local station 32a.
- The CM insertion scheduler 45a replaces a CM in the program bitstream sent from the main station 31 with a local CM unique to the local station 32a. Thanks to the transcoding, CMs can be replaced with little deterioration; CMs can likewise be replaced in the other local stations 32b, 32c, ....
- Work such as inserting the broadcasting station's logo into the program bitstream can be performed in the same way.
- The present invention can be applied not only to terrestrial broadcasting but equally to the relationship between a cable operator and the head-end station in CATV.
- An example of the splicer/transcoder 21 and another example of it are shown in FIGS. 5 and 7, respectively.
- The example shown in FIG. 5 transcodes the entire input MPEG bitstream, whereas in the example of FIG. 7 stream switching (splicing) is performed after only part of the input MPEG bitstream has been transcoded.
- In FIG. 5, the MPEG bitstream TS1 is input; it may be, for example, the output of an archiver/server, a signal received from a satellite, or a signal arriving via a microwave line.
- This stream TS1 is a stream in which a plurality of programs (program materials) are multiplexed. It is not limited to a transport stream TS in which at least two programs are multiplexed; it may also be a time-multiplexed elementary stream ES. In the case of an ES, however, an identification tag or input information identifying which stream is currently being input is required.
- In the case of a transport stream, the target programs can be extracted by PID (packet ID); in the case of an ES, information such as an identification tag is necessary, as described above.
- The two streams A and B extracted by the filter 51 are decoded by the MPEG decoders 52a and 52b, respectively.
- The baseband video/audio data Sa of program A is obtained by the MPEG decoder 52a, and the baseband video/audio data Sb of program B is obtained by the MPEG decoder 52b.
- These baseband data Sa and Sb are output to the external editor and switcher 22, and the returned baseband video/audio data Sc of the edited result is input.
- The baseband data Sc is supplied to the MPEG re-encoder 53.
- The re-encoder 53 receives, from the information buffer 55 via the path 54, the codec information for MPEG re-encoding corresponding to each video frame of the baseband data Sc. Based on this codec information for re-encoding, the data Sc is re-encoded to the requested target bit amount as the MPEG stream TS2. The re-encoder 53 thus outputs a stream TS2 that is the result of AB-roll editing of the input streams A and B.
- The codec information for re-encoding includes the motion vectors, picture types, quantization step sizes, and quantization scales. Transcoding with this information suppresses the deterioration of image quality caused by the decoding-encoding chain.
- In FIG. 5, the transmission line over which time codes are passed to the splicer/transcoder 21 in response to requests to and from the editor and switcher 22 is omitted.
- The codec information used for decoding in each of the MPEG decoders 52a and 52b is input to the information buffer 55.
- The information buffer 55 is supplied with the write address WAD and the write enable WE from the write controller 56, and with the read address RAD from the read controller 57.
- The codec information for re-encoding must be supplied from the information buffer 55 to the re-encoder 53 in synchronization with the edit points in the stream Sc. For example, when the video data Sc, in which the video data Sb is joined to the video data Sa at an edit point (in-point), returns, the codec information supplied is switched at that point from the codec information for re-encoding the video data Sa to the codec information for re-encoding the video data Sb.
- The capacity of the information buffer 55 need only correspond to the system delay (a few frame times) of the editor and switcher 22.
- The management table 62 manages the phase relationship between the video data Sa and Sb output from the splicer/transcoder 21 and the video data Sc returned from the baseband editor and switcher 22. For this purpose the write controller 56 and the read controller 57 are connected to the management table 62, and the management table 62 controls the writing/reading of the information buffer 55 using the picture count value of the input stream and the frame count value of the returned video data Sc.
- The frame counter 58 counts the number of frames of the video data Sc and issues a read request REQ to the management table 62 with the count value as an address.
- The management table 62 is configured as a ring buffer: input information is written sequentially at an incrementing address, and the read pointer is incremented in response to each read request REQ.
- The re-encoding information at the address indicated by the read pointer is read from the information buffer 55 and sent to the MPEG re-encoder 53 via the path 54.
- A management information generation unit 61 is provided in association with the management table 62. Cue information is input to the management information generator 61, which will be described later.
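The interplay of the management table, the write/read controllers, and the frame counter can be modelled as a small ring buffer keyed by frame count, as below. The class and the fixed editor-delay parameter are our own simplification of the mechanism described above.

```python
class ManagementTable:
    """Minimal model of the management table 62: a ring buffer that pairs
    each output picture's codec-information slot with the frame count at
    which the edited frame is expected back from the editor."""

    def __init__(self, depth=16, editor_delay_frames=4):
        self.slots = [None] * depth       # (picture_index, codec_info) pairs
        self.depth = depth
        self.write_ptr = 0
        self.delay = editor_delay_frames  # editor/switcher system delay

    def write(self, picture_index, codec_info):
        # Called once per decoded picture (write-controller side).
        self.slots[self.write_ptr % self.depth] = (picture_index, codec_info)
        self.write_ptr += 1

    def read(self, frame_count):
        # Called once per returned frame Sc (read-controller side): the
        # frame seen now left the transcoder `delay` frame times ago.
        slot = self.slots[(frame_count - self.delay) % self.depth]
        if slot is None:
            return None                   # no reusable codec information yet
        picture_index, codec_info = slot
        return codec_info
```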
- The cue information for editing the program is supplied to the management information generator 61 of the splicer/transcoder 21.
- The cue information is usually editing position information specified by time codes; more specifically, it includes information on the in-point and the out-point.
- The codec information is selected in such a way that the edit frames are detected on the basis of the cue information and the codec information is used in synchronization with the baseband data Sc.
- When usable codec information exists, an enable signal indicating that the codec information can be used is supplied from the read controller 57 to the re-encoder 53.
- The re-encoder 53 is connected to the bit amount estimator 59 and performs the VBV buffer processing; that is, encoding is carried out so that the buffer of the decoder that will decode the re-encoded MPEG bitstream TS2 neither overflows nor underflows.
- For this, the target bit amounts near the edit point (the allocation and weighting of the bit generation amount), which have been written into the corresponding index slots of the management table 62, are supplied to the bit amount estimator 59.
- Re-encoding basically satisfies the target number of generated bits, just as in normal coding control. When the generated bit amount of the re-encoder 53 falls short of the set target bit amount, dummy data is added. When the generated bit amount threatens to exceed the target bit amount, that is, when an underflow is likely to occur on the decoder side, macroblocks are turned into skipped macroblocks or the prediction residual (the difference from the predicted image macroblock MB) is set to 0. If this processing cannot cope and an underflow still occurs, the reproduced image is affected according to the processing method on the decoder side; normally a wait occurs until data accumulates in the buffer, and as a result the reproduced image freezes.
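The bit-amount reconciliation described above can be sketched as a per-picture decision between stuffing dummy data and forcing skipped macroblocks. A real rate controller makes this choice inside the encoding loop, so this is only a scheduling illustration with names of our own choosing.

```python
def reconcile_bits(picture_bits, target_bits):
    """For each re-encoded picture near the edit point, decide how to meet
    its target bit amount: 'stuff' dummy bits when under target, 'skip'
    (make skipped macroblocks / zero the prediction residual) when an
    overrun threatens a decoder-side underflow."""
    actions = []
    for n, (got, want) in enumerate(zip(picture_bits, target_bits)):
        if got < want:
            actions.append((n, "stuff", want - got))
        elif got > want:
            actions.append((n, "skip", got - want))
        else:
            actions.append((n, "ok", 0))
    return actions
```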
- FIG. 6 shows the configuration of the management information generation unit 61 in more detail.
- The cue information, as editing position information, is supplied to the interpreter 71 and is translated appropriately.
- The information extracted by the interpreter 71 is supplied to the mapping device 72.
- The mapping device 72 maps the cue information, expressed as time codes, onto the scale of the time stamps PTS (the time management information of the reproduction output) of the input stream 73 extracted by the filter 51.
- The picture counter 74 detects the picture headers in the input stream 73 and counts the number of pictures.
- The number of pictures counted by the picture counter 74 is fed to the picture/frame index generator 75.
- The picture/frame index generator 75 generates an index corresponding to each picture in order to organize the pictures and the management table 62.
- The management table 62 arranges its entries by this index and outputs the management information using the count value of the number of frames of the video data Sc from the frame counter 58 as an address.
- The time stamp reader 76 reads the time stamps PTS from the input stream 73.
- The time stamps PTS and the output of the mapping device 72 are supplied to the re-encoding strategy planner 77.
- The output of the mapping device 72 is the time code indicating the edit point of a video frame, mapped onto the time stamp scale. The re-encoding strategy planner 77 therefore associates the edit points with the pictures in the input stream 73.
- The output of the re-encoding strategy planner 77 is written at the address of the management table 62 indicated by the index.
- Reference numeral 78 denotes a counter that counts the number of bits generated in the input stream 73 and supplies the count result to the VBV buffer simulator 79, which simulates the VBV buffer.
- The VBV buffer corresponds to the capacity of the decoder-side buffer assumed by the encoder at encoding time, and simulating it prevents underflow or overflow of the decoder-side buffer.
- The result of the VBV buffer simulator 79 is sent to the re-encoding strategy planner 77, which allocates and weights the bit generation amounts near the edit point for the re-encoding; these too are written into the corresponding index slots of the management table 62.
- FIG. 7 shows another example of the splicer/transcoder 21.
- This other example transcodes only the minimum necessary period, including the part actually affected by the editing, and switches between the transcoded stream and the input stream.
- In this other example, the image quality deterioration that cannot be avoided even by transcoding can be kept to a minimum.
- The difference from the example of the splicer/transcoder 21 shown in FIG. 5 described above is that the input stream 73 from the filter 51 is stored in the picture buffer 63, and a switching circuit 66 switches between the stream from the re-encoder 53 and the stream from the picture buffer 63.
- A write controller 64 for controlling the writing of the picture buffer 63 and a read controller 65 for controlling its reading are provided; the write controller 64 and the read controller 65 are controlled through the management table 62.
- The same control as described above for writing the codec information into the information buffer 55 and reading the codec information used for re-encoding out of it applies to the picture buffer 63 as well.
- The switching circuit 66 selects the stream from the picture buffer 63 before the transcoding period around the edit point, selects the stream from the re-encoder 53 during that period, and selects the stream corresponding to the data Sb from the picture buffer 63 after that period.
- The selecting operation of the switching circuit 66 is controlled by a control signal 67 from the read controller 65.
- The capacity of the picture buffer 63 need only correspond to the system delay of the editor and switcher 22 (several frame times) plus the encoding delay (several pictures), so the picture buffer 63 does not burden the circuit configuration.
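The behaviour of the switching circuit 66 can be sketched as a simple selection around the edit point. The window parameter is an assumption of this sketch; in practice it must cover the GOP prediction dependencies and the delays mentioned above.

```python
def select_output(frame_index, edit_point, window, reencoded, buffered):
    """Model of switching circuit 66: inside the transcoding window around
    the edit point, output the re-encoded stream; outside it, pass the
    original stream straight from the picture buffer, so no decoding-
    encoding chain (and hence no added loss) is incurred there."""
    if abs(frame_index - edit_point) <= window:
        return reencoded[frame_index]
    return buffered[frame_index]
```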
- The outline of the editing system in this other example is the same as in the embodiment described above (see FIGS. 1, 2 and 3), and it can likewise be applied to a broadcasting system as in that embodiment (see FIG. 4).
- As described above, this embodiment of the present invention decodes the streams of the original materials, stores the codec information obtained at that time, sends only the decoded baseband signals to the editor, has the editor edit the baseband signals, matches the time (phase) relationship between the edited baseband signal and the stored codec information on the basis of the cue information, re-encodes the edited baseband signal, and outputs it as a stream.
- Accordingly, the capacity of the storage medium of the storage means can be saved, the transmission medium can be used effectively, and the deterioration of image quality can be minimized by the transcoding.
- Furthermore, an existing baseband editor can be used as the editor with little modification.
- The editing position information includes the editing position in units of frames or fields and, when a switcher function such as a wipe is used, its duration.
- With such editing position information alone, however, the codec information usable for re-encoding could not be exploited properly within a frame.
- In the embodiment described next, the editing position can be detected, without receiving any editing position information, by comparing the first and second baseband signals output to the editor with the third baseband signal returned from the editor, with their phases matched. The line for transmitting the editing position information between the editor and the editing control device can therefore be omitted, and there is no need to translate the editing position information onto the time axis of the stream.
- In addition, whether the codec information can be used for re-encoding can be determined not only per picture but also per block; even if the picture at an edit point is a mixture of the two original materials, the deterioration of image quality due to the re-encoding can be suppressed.
- In this other embodiment, the splicer/transcoder 21 compares the output baseband signals Sa and Sb with the returned baseband signal Sc. Based on the result of this comparison, the editing position is detected and the codec information to be used at the time of re-encoding is selected.
- As the editing position information, both the editing position in frame (picture) units and the editing state in finer units within a picture are obtained.
- The codec information is information such as motion vectors, picture types, quantization step sizes, and quantization scales.
- A picture buffer that stores the original materials is required for detecting the editing position, and an information buffer that stores the codec information is required for the transcoding.
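The phase-matched comparison and the per-block reuse decision can be sketched as follows, assuming the frames are NumPy arrays that are bit-exact copies of the originals wherever the editor did not touch them (the function names and the 16-pixel macroblock grid are ours):

```python
import numpy as np

def classify_frame(sc, sa, sb, mb=16):
    """Compare a returned frame Sc against the phase-matched originals Sa
    and Sb.  Returns a per-macroblock map: 'A' or 'B' where Sc matches one
    original exactly (that side's codec information can be reused), 'X'
    where it matches neither (e.g. a wipe or mix), in which case that
    macroblock must be encoded afresh."""
    h, w = sc.shape[:2]
    labels = []
    for y in range(0, h, mb):
        row = []
        for x in range(0, w, mb):
            blk = (slice(y, y + mb), slice(x, x + mb))
            if np.array_equal(sc[blk], sa[blk]):
                row.append("A")
            elif np.array_equal(sc[blk], sb[blk]):
                row.append("B")
            else:
                row.append("X")
        labels.append(row)
    return labels

def frame_verdict(labels):
    """Picture-level decision built on the macroblock map."""
    flat = [c for row in labels for c in row]
    if all(c == "A" for c in flat):
        return "reuse codec info of material A for the whole picture"
    if all(c == "B" for c in flat):
        return "reuse codec info of material B for the whole picture"
    return "mixed picture: decide reuse macroblock by macroblock"
```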
- An example of the splicer/transcoder 21 according to this other embodiment of the present invention is shown in FIG. 8.
- The example shown in FIG. 8 transcodes the entire input MPEG bitstream. However, stream switching (splicing) may instead be performed after only part of the input MPEG bitstream has been transcoded: switching means are then provided that select the stream obtained by transcoding the baseband signal Sc only in a predetermined period including the region before and after the edit point, and select the input stream outside that predetermined period.
- Since the decoding-encoding chain is then confined to a part of the stream, the image quality degradation that cannot be avoided even in transcoding can be kept to a minimum.
- This stream TS1 is a stream in which a plurality of programs (program materials) are multiplexed; it is not limited to a transport stream TS in which at least two programs are multiplexed, and may also be a time-multiplexed elementary stream ES. In the case of an ES, however, an identification tag or input information identifying which stream is currently being input is required.
- Reference numeral 251 denotes a filter that extracts the packets of the two programs (original materials) A and B to be edited.
- In the case of a transport stream, the target programs can be extracted by PID (packet ID); in the case of an ES, information such as an identification tag is required, as described above.
- From the filter 251, a stream 268 in which the two selected programs A and B are multiplexed is obtained.
- The two programs A and B extracted by the filter 251 are decoded by the MPEG decoders 252a and 252b, respectively: the baseband video/audio data Sa of program A is obtained by the MPEG decoder 252a, and the baseband video/audio data Sb of program B is obtained by the MPEG decoder 252b. These baseband data Sa and Sb are output to the external editor and switcher 22 and, at the same time, are stored in the picture buffer 263.
- A write controller 264 for controlling the writing of the picture buffer 263 and a read controller 265 for controlling its reading are provided.
- The write address WAD and the write enable WE from the write controller 264 are supplied to the picture buffer 263, as is the read address RAD from the read controller 265.
- The codec information used for decoding in each of the MPEG decoders 252a and 252b is input to the information buffer 255.
- The information buffer 255 is supplied with the write address WAD and the write enable WE from the write controller 256, and with the read address RAD from the read controller 257.
- The codec information for re-encoding must be supplied from the information buffer 255 to the re-encoder 253 in synchronization with the edit points in the stream Sc. For example, when the video data Sc, in which the video data Sb is joined to the video data Sa at an edit point (in-point), returns, the codec information supplied is switched at that point from the codec information for re-encoding the video data Sa to the codec information for re-encoding the video data Sb.
- The capacities of the information buffer 255 and the picture buffer 263 need only correspond to the system delay of the editor and switcher 22 (a few frame times), so the information buffer 255 and the picture buffer 263 do not burden the circuit configuration.
- the baseband data Sc is supplied to the M PEG re-encoder 25 3.
- the re-encoder 2553 receives the codec information for MPEG re-encoding corresponding to the video frame of the baseband data Sc from the information buffer 255 via the path 254. Based on the codec information for re-encoding, the data Sc is re-encoded into the M PEG stream T S2 to the requested target bit amount. Then, from the re-encoder 25 3, a stream TS 2 resulting from the AB roll editing of the input stream A and the stream B is output.
- The codec information for re-encoding includes motion vectors, picture types, quantization step sizes, quantization scales, and the like. Such transcoding suppresses the deterioration of image quality caused by the decoding-encoding chain.
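- One possible in-memory representation of this per-picture and per-macroblock codec information is sketched below. The field names and types are illustrative assumptions; the patent only enumerates the kinds of information carried.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class MacroblockCodecInfo:
    mb_type: str                                   # 'intra', 'forward', 'backward', 'interpolative'
    forward_mv: Optional[Tuple[int, int]] = None   # motion vectors, if any
    backward_mv: Optional[Tuple[int, int]] = None
    quantizer_scale: int = 1

@dataclass
class PictureCodecInfo:
    picture_type: str                              # 'I', 'P', or 'B'
    quantization_step: int = 1
    macroblocks: List[MacroblockCodecInfo] = field(default_factory=list)
```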
- The codec information is selected so that it is used in synchronization with the baseband data Sc.
- An enable signal indicating that the codec information can be used is supplied from the read controller 257 to the re-encoder 253.
- the incoming stream 268 and the baseband signals Sa and Sb that are the decoding results are temporally associated one-to-one.
- When the codec information used for decoding by each of the MPEG decoders 252a and 252b is stored in the information buffer 255, an index tag is attached so as to maintain this one-to-one temporal correspondence, and each picture is stored in association with its codec information.
- A management table 261 is provided in order to manage the saved codec information.
- The write controllers 256 and 264 and the read controllers 257 and 265, which control the writing and reading of the information buffer 255 and the picture buffer 263 respectively, are connected to the management table 261.
- The management table 261 controls the writing and reading of the information buffer 255 and the picture buffer 263 using the picture count value of the incoming stream and the frame count value of the returned video data Sc.
- the frame counter 258 counts the number of frames of the video data Sc, and gives a read request REQ to the management table 261, using the count value as an address.
- The number of pictures counted by the picture counter 271 is supplied to the picture/frame index generator 272.
- The picture/frame index generator 272 generates an index corresponding to each picture in order to organize the management table 261 of pictures and codec information.
- The management table 261 arranges its entries by this index and outputs the management information, using the frame count value of the video data Sc from the frame counter 258 as an address.
- The management table 261 is configured as a ring buffer: input information is written sequentially at an incrementing write address, and the read pointer is incremented in response to a read request REQ.
- The re-encoding information at the address indicated by the read pointer is read from the information buffer 255 and sent to the MPEG re-encoder 253 via the path 254. The picture buffer 263 is controlled in the same manner as the information buffer 255.
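- A minimal sketch of such a ring-buffer management table is shown below, assuming a fixed slot count; the class and method names are hypothetical.

```python
class ManagementTable:
    """Ring buffer mapping picture/frame indices to stored codec information."""

    def __init__(self, capacity: int):
        self.slots = [None] * capacity
        self.write_ptr = 0   # incremented on every write
        self.read_ptr = 0    # incremented on every read request REQ

    def write(self, index, info):
        """Write input information sequentially at the incrementing address."""
        self.slots[self.write_ptr % len(self.slots)] = (index, info)
        self.write_ptr += 1

    def read(self):
        """Serve a read request REQ and advance the read pointer."""
        entry = self.slots[self.read_ptr % len(self.slots)]
        self.read_ptr += 1
        return entry
```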
- The re-encoder 253 is connected to the bit amount estimator 259 and carries out VBV buffer processing. That is, encoding is controlled appropriately so that the decoder buffer receiving the re-encoded MPEG bitstream TS2 neither overflows nor underflows. Re-encoding basically aims to satisfy the target number of generated bits; as in normal encoding control, dummy data is added when the amount of bits generated by the re-encoder 253 falls short of the set target bit amount.
- Conversely, when the generated bit amount threatens to exceed the target, macroblocks are encoded as skipped macroblocks, that is, the prediction residual (the difference between the macroblock and its predicted macroblock) is set to 0. If this processing cannot be performed and an underflow occurs, the reproduced image is affected in a manner that depends on the decoder-side processing: normally the decoder waits until data accumulates in the buffer, and as a result the playback image freezes.
- The bit counter 273 counts the number of bits generated in the input stream 268 and supplies the count result to the VBV buffer simulator 274, which simulates the VBV buffer.
- The result from the VBV buffer simulator 274 is sent to the re-encoding strategy planner 275, which allocates and weights the amount of bits to be generated near the edit point for re-encoding.
- The target bit amount near the edit point (the bit allocation and weighting information) is written into the corresponding index slot of the management table 261, and the bit amount estimator 259 controls the amount of bits generated by the re-encoder 253 accordingly.
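- To make the VBV constraint concrete, here is a minimal occupancy-check sketch: the buffer fills at the channel bit rate and drains by the size of each decoded picture. The parameter names and the per-frame fill model are simplifying assumptions.

```python
def vbv_ok(picture_bits, bit_rate, frame_rate, buffer_size, initial_fullness):
    """Return True if no underflow occurs for the given picture bit counts."""
    fullness = initial_fullness
    fill_per_frame = bit_rate / frame_rate
    for bits in picture_bits:
        fullness = min(fullness + fill_per_frame, buffer_size)  # stuffing caps overflow
        fullness -= bits                                        # picture removed at decode time
        if fullness < 0:
            return False   # underflow: too many bits were generated here
    return True
```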
- The splicer/transcoder shown in FIG. 8 can detect edit points and determine the editing status of the baseband signal Sc without receiving editing position information from the editor.
- a comparison unit 270 is provided for this purpose.
- the comparison unit 270 is supplied with the original two baseband signals Sa and Sb from the picture buffer 263 and the returned baseband signal Sc. Further, additional information such as a GOP header, a picture header, a macroblock type, a motion vector, and the like is supplied from the information buffer 255 to the comparison unit 270.
- The comparison unit 270 detects edit points by checking for coincidence between the signals Sa and Sb output to the editor and switcher 22 and the returned signal Sc. At the same time, it determines, both picture by picture and macroblock by macroblock, whether the codec information can be used for re-encoding.
- FIG. 9 shows an example, used to describe another embodiment of the present invention, of a baseband signal Sa (referred to as picture Pic A), a baseband signal Sb (referred to as picture Pic B), and the baseband signal Sc (referred to as picture Pic C) that results from editing them. Codec information is stored for each picture and each macroblock of the baseband signals Sa and Sb.
- Near the edit point, processing such as a wipe or a cross-fade may be applied.
- The comparison unit 270 performs coincidence detection between the temporally aligned pictures Pic A and Pic C, and between pictures Pic B and Pic C. If at least one of the two comparisons detects a mismatch, that frame is detected as an edit point. Two pictures are judged to match when the difference between the pixels at the same position in each picture is 0, and to mismatch otherwise. For example, the pixels of the two temporally aligned pictures are input sequentially to a subtraction circuit, and if any pixel produces a non-zero difference, the pictures are judged not to match. Alternatively, a mismatch may be declared only when the number of pixels with a non-zero difference reaches a predetermined count.
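- A small sketch of this coincidence detection follows. Classifying each returned frame as matching A, matching B, or neither (for example, during a wipe or cross-fade) is our framing; treating a change in that classification as the edit point is an assumption consistent with the description above.

```python
import numpy as np

def pictures_match(pic_x, pic_y, max_diff_pixels=0):
    """Temporally aligned pictures match unless more than max_diff_pixels
    pixel positions have a non-zero difference."""
    diff_count = int((np.asarray(pic_x) != np.asarray(pic_y)).sum())
    return diff_count <= max_diff_pixels

def classify_frame(pic_a, pic_b, pic_c):
    """Return which original the returned frame C coincides with."""
    if pictures_match(pic_a, pic_c):
        return 'A'
    if pictures_match(pic_b, pic_c):
        return 'B'
    return 'mixed'   # neither matches: fall back to macroblock-level checks
```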
- Based on the detected edit points, the codec information to be used for re-encoding in the re-encoder 253 is selected.
- Since the returned signal switches between the original materials in picture units, the codec information to be reused is likewise selected in picture units in accordance with that switching.
- However, selecting codec information only in units of pictures is not sufficient to prevent the deterioration of image quality caused by re-encoding.
- Therefore, each macroblock included in picture Pic A (or Pic B) (expressed as MB A or MB B) is spatially compared with the macroblock at the same position in picture Pic C (expressed as MB C).
- A macroblock has a size of (16 × 16) pixels.
- Macroblock match/mismatch detection is performed in the same way as for pictures.
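- The sketch below extends the frame-level check to the macroblock grid, recording for each (16 × 16) block of C whether it coincides with the co-located block of A, of B, or of neither. It builds on the numpy-based helpers above; the grid encoding is our own.

```python
import numpy as np

MB = 16  # macroblock size in pixels

def macroblock(pic, row, col):
    """Return the (16 x 16) block of pic at macroblock grid position (row, col)."""
    return np.asarray(pic)[row * MB:(row + 1) * MB, col * MB:(col + 1) * MB]

def classify_macroblocks(pic_a, pic_b, pic_c):
    """Label each macroblock of C as 'A', 'B', or None (no coincidence)."""
    pic_c = np.asarray(pic_c)
    rows, cols = pic_c.shape[0] // MB, pic_c.shape[1] // MB
    grid = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            mb_c = macroblock(pic_c, r, c)
            if np.array_equal(macroblock(pic_a, r, c), mb_c):
                grid[r][c] = 'A'
            elif np.array_equal(macroblock(pic_b, r, c), mb_c):
                grid[r][c] = 'B'
    return grid
```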
- FIG. 11 is a flowchart showing a process for determining whether or not codec information can be reused.
- When the edited video signal Sc (picture Pic C) arrives from the editor and switcher, the processing is started.
- the original pictures A and B are read from the picture buffer 263.
- In comparison step S2, pictures A and C are compared. If pictures A and C do not match, pictures B and C are compared in comparison step S3. If pictures B and C do not match, the macroblocks MB A and MB C are compared in comparison step S4. If the macroblocks MB A and MB C do not match, the macroblocks MB B and MB C are compared in comparison step S5. In each macroblock comparison, two macroblocks at the same spatial position in their respective pictures are compared.
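- The cascade of steps S2-S5 can be summarized as follows, reusing the hypothetical helpers sketched earlier; the return-value convention is ours, not the patent's.

```python
def reuse_decision(pic_a, pic_b, pic_c):
    """Picture-level comparison first (S2, S3), macroblock fallback (S4, S5)."""
    if pictures_match(pic_a, pic_c):      # step S2: C coincides with A
        return ('picture', 'A')           # -> picture subroutine S6, then S7
    if pictures_match(pic_b, pic_c):      # step S3: C coincides with B
        return ('picture', 'B')           # -> picture subroutine S6, then S9
    # steps S4/S5: compare co-located macroblocks of C against A and B
    return ('macroblock', classify_macroblocks(pic_a, pic_b, pic_c))
```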
- There are three picture types: the I (Intra) picture, which is an intra-frame coded picture; the P (Predictive) picture, which is a forward predictive picture; and the B (Bidirectionally predictive) picture, which is a bidirectional predictive picture. Since the conditions for reuse differ depending on these picture types, the processing of the picture subroutine S6 is performed. The subroutine S6 will be described later. After the subroutine S6, the process of step S7 is performed.
- In step S7, it is determined whether or not the picture to be predicted for picture C is an element of picture A (this is expressed as PIC C (FW/BW), PICFg ≠ 0?).
- If this condition is satisfied, the codec information of the corresponding picture is prepared so that the codec information can be reused in picture units (step S8).
- When codec information is reused in units of pictures, this includes using the codec information of only one reference picture when re-encoding bidirectionally predicted pictures.
- If the condition in step S7 is not satisfied, the process proceeds to step S3, as in the case of Pic A ≠ Pic C. That is, when the picture to be predicted for picture C is not an element of picture A, the following conditions are examined.
- If pictures B and C match in step S3, picture C is re-encoded by reusing the codec information of picture B.
- Again, the conditions for reuse differ depending on the picture type, so the processing of the picture subroutine S6 is performed.
- In step S9, as in step S7 for picture A, it is determined whether or not the picture to be predicted for picture C is an element of picture B. If this condition is satisfied, the process moves to step S8, where the codec information of the corresponding picture is prepared for reuse.
- Specifically, the codec information of the corresponding picture is read out from the information buffer 255 and given to the re-encoder 253. When the condition of step S9 is not satisfied, as in the case of Pic B ≠ Pic C, the process moves to step S4. That is, in the case of (Pic A ≠ Pic C) and (Pic B ≠ Pic C), the comparison (MB A vs. MB C) is performed in units of macroblocks. As in the example shown in FIG. 9, when pictures A and B are mixed within the picture at the edit point of picture C, both (Pic A ≠ Pic C) and (Pic B ≠ Pic C) hold. In this case, codec information is reused on a macroblock basis.
- As with picture types, there are four macroblock types: the intra-frame coded (Intra) macroblock; the forward (Forward) inter-frame prediction macroblock, which is predicted from a past picture; the backward (Backward) inter-frame prediction macroblock, which is predicted from a future picture; and the interpolative macroblock, which is predicted from both the forward and backward directions.
- All macroblocks in an I-picture are intraframe coded macroblocks.
- A P picture includes intra-frame coded macroblocks and forward inter-frame prediction macroblocks.
- A B picture can include all four types of macroblocks described above. Since the conditions for reuse differ depending on the macroblock type, the processing of the macroblock subroutine S10 is performed. The subroutine S10 will be described later.
- In step S11, it is determined whether or not the macroblock to be predicted for picture C is an element of picture A (this is expressed as MB (FW/BW), MBFg ≠ 0?).
- If this condition is satisfied, the codec information of the corresponding macroblock is prepared so that the codec information can be reused in macroblock units (step S12).
- If the condition in step S11 is not satisfied, the process moves to step S13. That is, if the macroblock to be predicted for picture C is not an element of picture A, the codec information cannot be reused (step S13). In this case, no transcoding is performed; simple encoding is performed instead.
- In step S12, the codec information of the corresponding macroblock is prepared for reuse. Specifically, the codec information of the corresponding macroblock is read out from the information buffer 255 and given to the re-encoder 253. When the condition of step S14 is not satisfied, the codec information is not reused (step S13).
- The picture subroutine S6 will now be described in more detail with reference to FIG. 12. First, in order to determine the picture type, it is determined in step S21 whether the picture is an I picture. The picture type can be obtained from the picture header information stored in the information buffer 255.
- If it is an I picture, the picture flag PICFg is set to 1 in step S22.
- The picture flag PICFg, which indicates in picture units the presence or absence of the prediction target picture, is defined as follows: 0 for a picture whose prediction target does not exist; 1 for an I picture; 2 for a P picture whose prediction target picture exists; and 3, 4, or 5 for a B picture whose prediction target picture exists in the forward direction, the backward direction, or both directions, respectively.
- In step S22, when codec information is to be reused, the picture flag PICFg thus indicates the picture in which the prediction target picture exists.
- This picture flag is used to determine whether or not to reuse the codec information and to specify the codec information to be given from the information buffer 255 to the re-encoder 253.
- If it is determined in step S21 that the picture is not an I picture, it is determined in step S23 whether the picture is a P picture. In the case of a P picture, a search for and detection of the picture to be predicted are performed in step S24. Since a P picture is encoded so as to be predicted from a past picture, the prediction target picture on which the encoding was based is detected from among the past pictures, its position following the GOP sequence.
- It is then determined whether the detected prediction target picture of picture C exists in picture A (for the subroutine following step S2 in FIG. 11) or in picture B (for the subroutine following step S3 in FIG. 11) (step S25). This decision is made by comparing the prediction target picture with the picture of picture A or B that has the same temporal position. If the prediction target picture exists in picture A or B, the picture flag is set to 2 in step S22, as described above. If not, the picture flag PICFg is set to 0 (step S26).
- When it is determined in step S23 that the picture is not a P picture, that is, in the case of a B picture, the pictures to be predicted are searched for and detected in step S27. It is then determined whether the detected prediction target pictures of picture C are present in picture A or picture B (step S28). If not, the picture flag PICFg is set to 0 (step S26). If present, the picture flag PICFg of the B picture is set to a value of 3, 4, or 5, depending on whether the prediction target picture exists in the forward direction (the past), in the backward direction (the future), or in both directions, as described above.
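- The picture subroutine can be condensed into the following sketch, which derives PICFg from the picture type and from whether the prediction target picture(s) were located in the original material. The boolean arguments abstract the search steps S24 and S27; the flag values follow the definition given above.

```python
def picture_flag(picture_type, fwd_found=False, bwd_found=False):
    """Sketch of picture subroutine S6: compute the picture flag PICFg."""
    if picture_type == 'I':                 # step S21 -> S22
        return 1
    if picture_type == 'P':                 # steps S23-S25
        return 2 if fwd_found else 0        # step S26 when absent
    # B picture: steps S27-S28
    if fwd_found and bwd_found:
        return 5
    if fwd_found:
        return 3
    if bwd_found:
        return 4
    return 0                                # step S26
```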
- FIG. 9 shows a case where the picture at the edit point is a B picture.
- The prediction target picture in the forward direction and the prediction target picture (a P picture) in the backward direction, as viewed from the edit point, are searched for.
- A comparison is made between the forward prediction target picture and the corresponding picture of picture A; when they match, it is determined that the prediction target picture exists in the forward direction.
- Similarly, the backward prediction target picture is compared with the corresponding picture of picture A; if they match, it is determined that the prediction target picture exists in the backward direction. Matching pictures may exist in both directions.
- When the picture of interest contains portions of two source pictures, none of the picture-unit conditions of steps S2 and S3 are satisfied, and processing shifts to discrimination in macroblock units.
- FIG. 13 is a flowchart showing the reuse determination process (macroblock subroutine S10) of codec information in macroblock units.
- First, the macroblock type is determined.
- The macroblock type is contained in the macroblock mode of the macroblock layer of the MPEG-2 syntax, and the macroblock type is determined on the basis of this information.
- In step S31, it is determined whether or not the macroblock of interest is an intra-frame coded macroblock (I MB). If it is not an I MB, it is determined in step S32 whether it is an interpolative (bidirectional) macroblock (Bidi MB). If it is not a Bidi MB, it is determined in step S33 whether it is a backward inter-frame prediction macroblock (a backward macroblock). If it is neither an I MB, a Bidi MB, nor a backward macroblock, it is a forward inter-frame prediction macroblock (a forward macroblock).
- When the macroblock of interest is determined in step S31 to be an I MB, the macroblock flag MBFg is set to a value of 1 in step S34.
- Otherwise, a motion vector is selected, and it is determined whether an image block corresponding to the prediction target macroblock exists in picture A or B at the position shifted by the selected motion vector. When these conditions are satisfied, the codec information can be reused.
- The macroblock flag MBFg, which indicates in macroblock units the presence or absence of the prediction target macroblock in picture A or B, is defined as follows: 0 when no prediction target macroblock exists; 1 for an intra-frame coded macroblock; 2 for a forward macroblock whose prediction target exists; 3 for a backward macroblock whose prediction target exists; 4 for a bidirectional macroblock whose prediction targets exist in both directions; 5 for a bidirectional macroblock whose prediction target exists only in the forward direction; and 6 for a bidirectional macroblock whose prediction target exists only in the backward direction.
- This macroblock flag is used to determine whether or not codec information can be reused on a macroblock basis, and to specify the codec information to be given from the information buffer 255 to the re-encoder 253.
- If the macroblock of interest is determined in step S32 to be a bidirectional macroblock, a forward motion vector and a backward motion vector are prepared (step S35). Using these motion vectors, the macroblocks to be predicted are searched for and detected (step S36). Since the position of the prediction target macroblock follows the GOP sequence, it is detected based on the GOP sequence information included in the GOP header.
- It is then determined whether the prediction target macroblock is present in picture A (for the subroutine following step S4 in FIG. 11) or in picture B (for the subroutine following step S5 in FIG. 11). This determination is made by comparing the prediction target macroblock with the macroblock-equivalent image block of picture A or B at the position shifted by the motion vector.
- FIG. 14 shows how the judgment is made in units of macroblocks when pictures A and B are mixed.
- The example in FIG. 14 shows the vicinity of the picture at the edit point in FIG. 9; the picture type of the picture of interest is B, and two bidirectional macroblocks are shown.
- The macroblock MB included in the picture A part of the edit point picture is compared with the macroblock-equivalent image block of the past picture at the position shifted by the forward motion vector. In this example, the two blocks match (shown as GOOD in the figure). It is also compared with the macroblock-equivalent image block of the future picture at the position shifted by the backward motion vector; these blocks do not match (shown as NG in the figure). Therefore, in this case, the value of the macroblock flag MBFg is set to 5.
- The macroblock MB included in the picture B part of the edit point picture is likewise compared with the macroblock-equivalent image blocks of the past and future pictures shifted by the forward and backward motion vectors. As shown in FIG. 14, it matches the macroblock of the future picture (a P picture in this example) shifted by the backward motion vector. Therefore, in this case, the value of the macroblock flag MBFg is set to 6. If pictures A and B are not mixed, macroblock-equivalent image blocks corresponding to the prediction target macroblock exist in both the forward and backward directions, and the macroblock flag MBFg is set to a value of 4.
- If it is determined in step S43 that a macroblock-equivalent image block corresponding to the prediction target macroblock exists, the macroblock flag MBFg is set to a value of 3 in step S44. When no such image block exists, the macroblock flag MBFg is set to 0 in step S39.
- If the macroblock of interest in picture C is determined in step S33 not to be a backward macroblock, that is, if it is a forward macroblock, a forward motion vector is prepared in step S45. Then, the macroblock-equivalent image block of the past picture A or B at the position shifted by the motion vector is searched for and detected (step S46). The detected image block is compared with the prediction target macroblock of the macroblock of interest to determine whether a macroblock-equivalent image block corresponding to the prediction target macroblock exists (step S47).
- If it exists, the macroblock flag MBFg is set to a value of 2 in step S48. When no corresponding macroblock-equivalent image block is present, the macroblock flag MBFg is set to a value of 0 in step S39.
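- The macroblock subroutine then reduces to the following sketch; the boolean arguments abstract the motion-compensated searches (steps S35-S36, S42-S43, and S45-S47), and the flag values follow the definition given earlier.

```python
def macroblock_flag(mb_type, fwd_found=False, bwd_found=False):
    """Sketch of macroblock subroutine S10 (FIG. 13): compute MBFg."""
    if mb_type == 'intra':                   # step S31 -> S34
        return 1
    if mb_type == 'interpolative':           # steps S32, S35 onward
        if fwd_found and bwd_found:
            return 4
        if fwd_found:
            return 5
        if bwd_found:
            return 6
        return 0                             # step S39
    if mb_type == 'backward':                # steps S33, S42-S44
        return 3 if bwd_found else 0
    return 2 if fwd_found else 0             # forward macroblock: steps S45-S48
```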
- In this way, the reuse of codec information is determined in macroblock units. Therefore, as shown in FIG. 9, even when the original picture A and the original picture B are mixed at the edit point, picture C can be re-encoded while reusing codec information in macroblock units. Compared with reuse on a picture-by-picture basis alone, the codec information can be reused at a finer granularity, and the deterioration of image quality can be reduced.
- As described above, the input/output interface with the archiver/server and the like is an encoded stream, while the interface with the editor is a baseband signal.
- Transcoding minimizes the image quality degradation caused by the decoding-encoding chain. Since it is not necessary to attach re-encoding information for transcoding to the baseband signal sent to an external editing device or storage device, editing can be performed without affecting external devices. Materials such as video materials can therefore be stored as streams, and there is no need to change the configuration of existing editors already installed in broadcast stations and studios. From the user's point of view, the editing system including the editor and the editing controller performs editing on a stream, but inside the editing system the editing is actually performed on the baseband signal.
- Since the editing system handles the MPEG stream as it is, the number of channels on the network lines already installed in a station can be increased, and the material transmission resources within the station can be used effectively.
- In settings such as the relationship between a key station and local stations in terrestrial broadcasting in Japan, or in CATV, work such as replacing the commercials (CMs) in the broadcast material transmitted from the main station with local commercials, or inserting a station logo, can be performed on the bitstream with little degradation.
- The baseband signal obtained by decoding the encoded bitstream is edited, and the baseband signal resulting from the editing is re-encoded to generate the output stream.
- Accordingly, a transmission line for transmitting editing position information can be omitted, and the process of translating editing information expressed as time codes onto the time axis of the stream can also be omitted.
- Furthermore, the present invention allows the codec information reused for transcoding to be selected not only in picture units but also in finer units such as macroblocks. Therefore, even when two or more original materials are mixed in the image at the edit point, deterioration in image quality due to re-encoding can be suppressed.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Business, Economics & Management (AREA)
- Marketing (AREA)
- Television Signal Processing For Recording (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
- Studio Circuits (AREA)
- Signal Processing For Digital Recording And Reproducing (AREA)
Description
Claims
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP53702799A JP3716328B2 (ja) | 1998-01-19 | 1999-01-19 | 編集装置および方法、再符号化装置および方法 |
US09/381,190 US7305040B1 (en) | 1998-01-19 | 1999-01-19 | Edit system, edit control device, and edit control method |
KR1019997008583A KR100548665B1 (ko) | 1998-01-19 | 1999-01-19 | 편집 시스템, 편집 제어 장치 및 편집 제어 방법 |
EP99900350A EP0982726A4 (en) | 1998-01-19 | 1999-01-19 | CUTTING SYSTEM, CUTTING CONTROL DEVICE AND CUTTING METHOD |
CA002284216A CA2284216A1 (en) | 1998-01-19 | 1999-01-19 | Edit system, edit control device, and edit control method |
US11/334,168 US7920634B2 (en) | 1998-01-19 | 2006-01-18 | Editing system, editing controlling apparatus, and editing controlling method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP10/7689 | 1998-01-19 | ||
JP769098 | 1998-01-19 | ||
JP768998 | 1998-01-19 | ||
JP10/7690 | 1998-01-19 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/334,168 Continuation US7920634B2 (en) | 1998-01-19 | 2006-01-18 | Editing system, editing controlling apparatus, and editing controlling method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1999036912A1 true WO1999036912A1 (fr) | 1999-07-22 |
Family
ID=26342024
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP1999/000151 WO1999036912A1 (fr) | 1998-01-19 | 1999-01-19 | Systeme d'edition, dispositif de commande d'edition et procede de commande d'edition |
Country Status (7)
Country | Link |
---|---|
US (2) | US7305040B1 (ja) |
EP (1) | EP0982726A4 (ja) |
JP (1) | JP3716328B2 (ja) |
KR (1) | KR100548665B1 (ja) |
CN (1) | CN1157727C (ja) |
CA (1) | CA2284216A1 (ja) |
WO (1) | WO1999036912A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1130595A1 (en) * | 1998-11-25 | 2001-09-05 | Matsushita Electric Industrial Co., Ltd. | Stream editing apparatus and stream editing method |
DE10119214A1 (de) * | 2001-04-19 | 2002-10-24 | Highlight Comm Ag Pfaeffikon | Verfahren zum Komprimieren von Videodaten |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7266150B2 (en) | 2001-07-11 | 2007-09-04 | Dolby Laboratories, Inc. | Interpolation of video compression frames |
JP4120934B2 (ja) | 2003-06-16 | 2008-07-16 | ソニー株式会社 | 画像処理装置および画像処理方法、記録媒体、並びに、プログラム |
JP4470431B2 (ja) * | 2003-10-01 | 2010-06-02 | ソニー株式会社 | データ処理装置およびその方法 |
US7502368B2 (en) * | 2004-04-08 | 2009-03-10 | John Sanders | Method and apparatus for switching a source of an audiovisual program configured for distribution among user terminals |
JP4207072B2 (ja) * | 2006-04-07 | 2009-01-14 | ソニー株式会社 | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
US8279928B2 (en) * | 2006-05-09 | 2012-10-02 | Canon Kabushiki Kaisha | Image encoding apparatus and encoding method, image decoding apparatus and decoding method |
JP2008066851A (ja) * | 2006-09-05 | 2008-03-21 | Sony Corp | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP4221676B2 (ja) * | 2006-09-05 | 2009-02-12 | ソニー株式会社 | 情報処理装置および情報処理方法、記録媒体、並びに、プログラム |
JP4325657B2 (ja) * | 2006-10-02 | 2009-09-02 | ソニー株式会社 | 光ディスク再生装置、信号処理方法、およびプログラム |
US9872066B2 (en) * | 2007-12-18 | 2018-01-16 | Ibiquity Digital Corporation | Method for streaming through a data service over a radio link subsystem |
US8290036B2 (en) * | 2008-06-11 | 2012-10-16 | Optibase Technologies Ltd. | Method, apparatus and system for concurrent processing of multiple video streams |
US20100046623A1 (en) * | 2008-08-19 | 2010-02-25 | Chen Xuemin Sherman | Method and system for motion-compensated frame-rate up-conversion for both compressed and decompressed video bitstreams |
US9185426B2 (en) * | 2008-08-19 | 2015-11-10 | Broadcom Corporation | Method and system for motion-compensated frame-rate up-conversion for both compressed and decompressed video bitstreams |
US8510107B2 (en) * | 2009-11-18 | 2013-08-13 | General Instrument Corporation | Audio data bit rate detector |
US8755444B2 (en) | 2010-08-06 | 2014-06-17 | Qualcomm Incorporated | Two-stage entropy decoding |
CN102098500A (zh) * | 2011-03-02 | 2011-06-15 | 天津大学 | 一种提高八视点自由立体视频网络传输性能的纠错方法 |
CN102164286B (zh) * | 2011-05-27 | 2012-09-26 | 天津大学 | 一种八视点自由立体视频传输错误隐藏方法 |
US9223621B1 (en) * | 2013-01-25 | 2015-12-29 | Amazon Technologies, Inc. | Organizing content using pipelines |
US8813245B1 (en) | 2013-01-25 | 2014-08-19 | Amazon Technologies, Inc. | Securing content using pipelines |
US9183049B1 (en) | 2013-01-25 | 2015-11-10 | Amazon Technologies, Inc. | Processing content using pipelines |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06253331A (ja) * | 1993-03-01 | 1994-09-09 | Toshiba Corp | 可変長符号化信号に対応した編集装置 |
JPH089366A (ja) * | 1994-06-15 | 1996-01-12 | Nippon Telegr & Teleph Corp <Ntt> | 符号化復号化画像の処理方法 |
JPH08130712A (ja) * | 1994-10-31 | 1996-05-21 | Sanyo Electric Co Ltd | データ編集方法及び編集装置 |
JPH1098713A (ja) * | 1996-09-20 | 1998-04-14 | Sony Corp | 映像信号切換装置 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0492528B1 (en) | 1990-12-27 | 1996-10-09 | Kabushiki Kaisha Toshiba | Recording/reproducing apparatus |
US5265122A (en) * | 1992-03-19 | 1993-11-23 | Motorola, Inc. | Method and apparatus for estimating signal weighting parameters in a diversity receiver |
JP3196906B2 (ja) | 1992-08-21 | 2001-08-06 | 富士ゼロックス株式会社 | 画像信号の符号化装置 |
JP3163830B2 (ja) | 1993-03-29 | 2001-05-08 | ソニー株式会社 | 画像信号伝送方法及び装置 |
NL9301358A (nl) | 1993-08-04 | 1995-03-01 | Nederland Ptt | Transcodeerinrichting. |
US5715009A (en) | 1994-03-29 | 1998-02-03 | Sony Corporation | Picture signal transmitting method and apparatus |
US5940130A (en) | 1994-04-21 | 1999-08-17 | British Telecommunications Public Limited Company | Video transcoder with by-pass transfer of extracted motion compensation data |
DE69535553T2 (de) | 1994-06-17 | 2007-12-06 | Snell & Wilcox Ltd., Havant | Videokompression |
US5828421A (en) * | 1994-10-11 | 1998-10-27 | Hitachi America, Ltd. | Implementation efficient digital picture-in-picture decoding methods and apparatus |
GB9501736D0 (en) | 1995-01-30 | 1995-03-22 | Snell & Wilcox Ltd | Video signal processing |
GB2307613B (en) * | 1995-08-31 | 2000-03-22 | British Broadcasting Corp | Switching bit-rate reduced signals |
JP3277787B2 (ja) * | 1995-12-21 | 2002-04-22 | ソニー株式会社 | 音声・映像データ記録・再生装置 |
JPH09322078A (ja) * | 1996-05-24 | 1997-12-12 | Toko Inc | 画像伝送装置 |
US6137834A (en) * | 1996-05-29 | 2000-10-24 | Sarnoff Corporation | Method and apparatus for splicing compressed information streams |
US6262777B1 (en) * | 1996-11-15 | 2001-07-17 | Futuretel, Inc. | Method and apparatus for synchronizing edited audiovisual files |
JPH10150647A (ja) * | 1996-11-19 | 1998-06-02 | Fujitsu Ltd | ビデオ会議システム |
US6038256A (en) * | 1996-12-31 | 2000-03-14 | C-Cube Microsystems Inc. | Statistical multiplexed video encoding using pre-encoding a priori statistics and a priori and a posteriori statistics |
JP2933132B2 (ja) * | 1997-01-09 | 1999-08-09 | 日本電気株式会社 | 多地点テレビ会議制御装置及び画面合成符号化方法 |
CN1161989C (zh) * | 1997-07-25 | 2004-08-11 | 索尼公司 | 编辑装置、编辑方法、接续装置、接续方法、编码装置和编码方法 |
-
1999
- 1999-01-19 WO PCT/JP1999/000151 patent/WO1999036912A1/ja active IP Right Grant
- 1999-01-19 CA CA002284216A patent/CA2284216A1/en not_active Abandoned
- 1999-01-19 US US09/381,190 patent/US7305040B1/en not_active Expired - Fee Related
- 1999-01-19 KR KR1019997008583A patent/KR100548665B1/ko not_active IP Right Cessation
- 1999-01-19 CN CNB998002410A patent/CN1157727C/zh not_active Expired - Fee Related
- 1999-01-19 EP EP99900350A patent/EP0982726A4/en not_active Withdrawn
- 1999-01-19 JP JP53702799A patent/JP3716328B2/ja not_active Expired - Fee Related
-
2006
- 2006-01-18 US US11/334,168 patent/US7920634B2/en not_active Expired - Fee Related
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06253331A (ja) * | 1993-03-01 | 1994-09-09 | Toshiba Corp | 可変長符号化信号に対応した編集装置 |
JPH089366A (ja) * | 1994-06-15 | 1996-01-12 | Nippon Telegr & Teleph Corp <Ntt> | 符号化復号化画像の処理方法 |
JPH08130712A (ja) * | 1994-10-31 | 1996-05-21 | Sanyo Electric Co Ltd | データ編集方法及び編集装置 |
JPH1098713A (ja) * | 1996-09-20 | 1998-04-14 | Sony Corp | 映像信号切換装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP0982726A4 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1130595A1 (en) * | 1998-11-25 | 2001-09-05 | Matsushita Electric Industrial Co., Ltd. | Stream editing apparatus and stream editing method |
US6683911B1 (en) | 1998-11-25 | 2004-01-27 | Matsushita Electric Industrial Co., Ltd. | Stream editing apparatus and stream editing method |
DE10119214A1 (de) * | 2001-04-19 | 2002-10-24 | Highlight Comm Ag Pfaeffikon | Verfahren zum Komprimieren von Videodaten |
Also Published As
Publication number | Publication date |
---|---|
EP0982726A4 (en) | 2003-06-04 |
CN1256782A (zh) | 2000-06-14 |
JP3716328B2 (ja) | 2005-11-16 |
US7305040B1 (en) | 2007-12-04 |
US7920634B2 (en) | 2011-04-05 |
CN1157727C (zh) | 2004-07-14 |
EP0982726A1 (en) | 2000-03-01 |
CA2284216A1 (en) | 1999-07-22 |
KR20010005525A (ko) | 2001-01-15 |
US20060120466A1 (en) | 2006-06-08 |
KR100548665B1 (ko) | 2006-02-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7920634B2 (en) | Editing system, editing controlling apparatus, and editing controlling method | |
JP3736808B2 (ja) | 編集装置、編集方法、再符号化装置及び再符号化方法 | |
US5917988A (en) | Editing apparatus, editing method and decoding apparatus for compressed video signal | |
KR100750520B1 (ko) | 부호화 스트림 생성 장치 및 방법, 데이터 전송 시스템 및 방법, 편집 시스템 및 방법 | |
KR100574186B1 (ko) | 부호화 스트림 스플라이싱 장치 및 방법과 부호화 스트림 생성 장치 및 방법과 편집 장치 및 방법 및 편집 시스템 | |
US6483945B1 (en) | Moving picture encoding method and apparatus | |
US7024100B1 (en) | Video storage and retrieval apparatus | |
JPH11187310A (ja) | ディジタルデータ伝送方法およびディジタルデータ伝送装置 | |
JPH11185317A (ja) | ディジタルデータ記録再生方法および装置、ディジタルデータ記録方法および装置、ディジタルデータ再生方法および装置 | |
JP2005278207A (ja) | 編集装置および方法、再符号化装置および方法 | |
US7650061B2 (en) | Information recording apparatus, information reproducing apparatus, and related computer programs | |
JP2002171529A (ja) | 映像符号化装置及び方法、記録媒体、並びに復号化装置 | |
US9219930B1 (en) | Method and system for timing media stream modifications | |
JPH11177921A (ja) | ディジタルデータ編集方法、ディジタルデータ編集装置 | |
JP2005245006A (ja) | アフレコ信号再生装置 | |
JP2005253093A (ja) | アフレコ信号伝送装置 | |
JP2005260979A (ja) | アフレコ信号伝送方法 | |
JP2005260978A (ja) | アフレコ信号生成用プログラム | |
JP2005198351A (ja) | アフレコ信号再生装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 99800241.0 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): CA CN JP KR US |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE |
|
ENP | Entry into the national phase |
Ref document number: 2284216 Country of ref document: CA Ref document number: 2284216 Country of ref document: CA Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1019997008583 Country of ref document: KR |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1999900350 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 09381190 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 1999900350 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 1019997008583 Country of ref document: KR |
|
WWG | Wipo information: grant in national office |
Ref document number: 1019997008583 Country of ref document: KR |