EP1013097A1 - Seamless splicing of compressed video programs - Google Patents

Seamless splicing of compressed video programs

Info

Publication number
EP1013097A1
Authority
EP
European Patent Office
Prior art keywords
program
splice
frame
compressed
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP98943427A
Other languages
English (en)
French (fr)
Inventor
Edward A. Krause
Peter A. Monta
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Arris Technology Inc
Original Assignee
Imedia Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Imedia Corp filed Critical Imedia Corp
Publication of EP1013097A1 publication Critical patent/EP1013097A1/de
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/23608Remultiplexing multiplex streams, e.g. involving modifying time stamps or remapping the packet identifiers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/24Systems for the transmission of television signals using pulse code modulation

Definitions

  • The present invention relates to the distribution of video programming. More particularly, the present invention relates to the seamless splicing of compressed digital video programming.
  • Splicing systems are used in cable television head ends, satellite uplink facilities, television stations, and editing laboratories. They are used for switching between two different programs, inserting advertisements into a program, or simply editing program content. Today, almost all program splicing is applied to uncompressed analog or digital sources. In the future, there will be a need to splice different programs that exist in compressed form.
  • A number of problems are presented by efforts to splice compressed programs.
  • Programs that are digitally compressed according to standards such as MPEG contain many codeword sequences that are only meaningful to a decoder when interpreted in combination with previous and following information in the compressed bit stream.
  • For example, a compression standard may specify that only the changes that have occurred since a preceding time instant be encoded and transmitted. For this reason, the current program cannot simply be interrupted at arbitrary locations, and similarly, the new program cannot be simply inserted at the point of interruption.
  • A second problem relates to buffer management: the encoder is responsible for ensuring that the rate buffer at a receiver remains within acceptable occupancy limits to avoid both overflow and underflow.
  • Compression standards are designed such that all receivers receiving the same compressed bit stream exhibit identical buffer occupancy levels. This introduces a third problem, since the programs to be spliced are generally encoded independently. It therefore cannot be guaranteed that neither overflow nor underflow will be triggered by the transition between the two programs. This is because the encoder utilized for the current program assumes a particular buffer occupancy level at the precise time of the splice, while the encoder utilized for the next program may well assume another. Unless the two assumed occupancy levels are identical, overflow or underflow becomes possible, which may result in visible and/or audible artifacts.
  • A fourth complexity is introduced in connection with splicing compressed programming if a remultiplexer or stream reformatting system is coupled to the output of the splicing system.
  • A remultiplexer is used to recombine several programs into a common signal, while a reformatting system may be used to interface a program to a particular distribution system.
  • Either of these systems may cease to function correctly when simultaneously or sequentially provided with two or more compressed programs that have been synchronized to different timing references. For example, if one or more programs were encoded based on a display rate of 29.97001 frames per second while one or more different programs were encoded based on a display rate of 29.96999 frames per second, then a problem may arise even though this amount of variation may be well within the capture range of the intended receivers. If the remultiplexer retransmits the sequences at any rate other than the original rate assumed by each of the respective encoders, then the data rate mismatch occurring at the remultiplexer will eventually overflow or underflow any buffering capacity that exists.
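  • To illustrate the scale of this drift, consider the following back-of-the-envelope calculation; the display rates are taken from the example above, while the buffering margin is an assumed figure used only for illustration:

        # Rough illustration of timing drift between independently encoded programs.
        # The two rates come from the example above; the buffer margin is assumed.
        rate_a = 29.97001            # frames per second assumed by one encoder
        rate_b = 29.96999            # frames per second assumed by another encoder

        drift = abs(rate_a - rate_b)                 # surplus frames per second (~2e-5)
        seconds_per_extra_frame = 1.0 / drift        # ~50,000 s per accumulated frame

        buffer_margin_frames = 4                     # assumed spare buffering capacity
        hours_until_overflow = buffer_margin_frames * seconds_per_extra_frame / 3600.0

        print(f"One surplus frame accumulates roughly every {seconds_per_extra_frame:,.0f} s")
        print(f"A {buffer_margin_frames}-frame margin is exhausted after about {hours_until_overflow:.0f} hours")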
  • In one known approach, the duration of the splice is extended by inserting a brief silent black interval between the current program and the next program. During this interval, the receiver rate buffer is allowed to empty so that the new program can begin with a known buffer occupancy level. This interval can also simplify the task of creating independence among the two encoded bit streams.
  • However, this non-seamless splicing technique does not solve the problems of splice time inaccuracy or mismatched timing references.
  • The present invention is thus directed toward a compressed video program splicing system in which a second or "next" compressed program is spliced in to follow a first or "current" compressed program without the introduction of video or audio artifacts caused by the several phenomena discussed in the previous section.
  • The scheduled splice time in general will not coincide with the permissible splice times in the current and next programs. Two of these times must be shifted to achieve triple coincidence, which is required for seamless splicing.
  • One way to do this is to search for a pair of splice times in the two programs that are both relatively close together and relatively close to the scheduled splice time. Then the actual splice time can be shifted from the scheduled splice time to coincide with the permissible splice time in the current program, and the next program can be delayed as needed to achieve the desired triple coincidence.
  • Nearly as good results can be achieved with a simpler method and apparatus, so that in a preferred embodiment of the present invention, only the relative position of the scheduled splice time and the permissible splice points of the current program are used.
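  • A minimal sketch of this simpler policy, assuming the permissible splice points of the current program are available as a sorted list of time stamps; the function and its inputs are illustrative, not a definitive implementation:

        import bisect

        def choose_actual_splice_time(scheduled_time, permissible_times):
            """Shift the splice to the first permissible splice point of the
            current program at or after the scheduled time (illustrative policy)."""
            permissible_times = sorted(permissible_times)
            i = bisect.bisect_left(permissible_times, scheduled_time)
            if i == len(permissible_times):
                raise ValueError("no permissible splice point after the scheduled time")
            return permissible_times[i]

        # Example: splice scheduled at t=100.0 s, permissible points roughly every 0.5 s
        print(choose_actual_splice_time(100.0, [98.9, 99.4, 99.9, 100.4, 100.9]))  # 100.4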
  • One implementation of the seamless-splicing system of the present invention utilizes two program processors that are responsive to a scheduler for determining when the current program is to be cut out and the next program is to be cut in based on a scheduled splice time.
  • In operation, one of the program processors is in an active mode, processing the current program, while the other program processor is in a standby mode, waiting for the splice command from the active processor.
  • The active processor ensures that all of the data required to reconstruct all of the frames in the spliced program is included in the output data stream.
  • The active program processor is also responsible for issuing a splice command signal to the standby program processor at the appropriate time.
  • The standby program processor ensures that the next data stream will contain no references to prior data that are not included after the splice point.
  • The standby program processor also determines where in the next program the appropriate point is for appending the next program to the end of the current program.
  • The standby program processor then carries out various functions such as shifting the next program stream in time and adjusting the next-program content, while ensuring that all embedded parameters in the compressed program stream are properly adjusted in order to reflect the changes that have occurred.
  • Figure 1 is a general purpose multi-program splicing system which may incorporate the present invention.
  • FIG. 2 is a more detailed block diagram of a splicing system which may incorporate the present invention.
  • Figure 3 is a block diagram of components used for normalizing multiple time stamps to a single common reference in accordance with one aspect of the present invention.
  • Figure 4 is a flow diagram for the extraction of program clock references from a received data stream.
  • Figure 5 is a flow diagram for the process of extracting time stamps from a received data stream.
  • Figure 6 is a flow diagram for the process of time stamp insertion in accordance with one aspect of the present invention.
  • FIG. 7 is a more detailed block diagram of an output module for use in the splicing system of the present invention.
  • Figure 8 shows several exemplary frame sequences to demonstrate some of the details of splicing compressed program data.
  • FIG. 9 is a block diagram of a program splicer for use in accordance with the present invention.
  • FIG. 10 is a block diagram of the components of a program processor for use in accordance with the present invention.
  • Figure 11 is a timing diagram of the signals corresponding to the operation of the circuit of Figure 10 in accordance with the present invention.
  • Figure 12 is a flow diagram for adjusting a temporal reference parameter after a program splice has occurred in accordance with the splicing system of the present invention.
  • Figure 13 is a flow diagram for time stamp adjustment following a program splice in accordance with the splicing system of the present invention.
  • Figure 14 is a flow diagram for a procedure to statistically multiplex data packets corresponding to different programs.
  • PSP Permissible splice point.
  • Field, Frame A frame is a single complete TV image. When interlace is used, each field comprises either the even lines or the odd lines of a frame.
  • Field/frame mode An adaptive process used by some video compression systems when encoding interlaced television signals. Either field-based or frame-based processing techniques are selected, usually on a block-by-block or frame-by-frame basis.
  • MPEG Motion Picture Experts Group. The compression system adopted by the International Standardization Organization for digital television signals.
  • Data stream A sequence of data symbols, either in compressed or uncompressed form, representing a single channel of video, audio, or other type of data information.
  • Program The collection of data streams containing all video, audio, and other information associated with the presentation of a television production.
  • Compressed program The data streams representing a program in which the data rate has been reduced without serious loss of image and sound quality.
  • Rate buffer A buffer used to mediate between different transmission rates.
  • Buffer overflow/underflow The situation that arises when a buffer is too full to accept additional data that arrives, or is empty at a time when additional data is needed.
  • Remultiplexer A multiplexer used to combine several concurrent programs into one signal.
  • Reformatter Apparatus that reconfigures a signal to accommodate the requirements of a particular distribution system.
  • Multiplexing In general, combining several data streams into one.
  • Mux, demux Multiplexer and demultiplexer.
  • Input Output Multiplex.
  • Packet A block of data into which a portion of the information to be transmitted is grouped. Each packet contains a header and a payload. The header identifies the packet completely so that original data streams may be correctly reassembled from the payloads of packets. Several different data streams can be transmitted in one stream of packets.
  • Scheduling system The means by which the intended splicing time is generated. In many cases, the intended splicing time is produced by a human editor. In other cases, the intended splicing time is derived from information embedded in one or more data streams included in a program transmission. The list of splicing times for a spliced program is sometimes called a playlist.
  • The scheduling system is not a part of the invention.
  • Presentation time The time at which decoded images are displayed at the receiver.
  • Decoding time The time at which compressed images are decoded at the receiver. Images are always decoded in the same order that they are received. Decoding time is different from presentation time when the images are not received in presentation order.
  • Time stamp Ancillary data embedded in the data streams of a program to indicate the intended presentation time or decoding time of a particular element of the program.
  • DTS, PTS Decoding and presentation time stamps. May be included with program data.
  • Clock reference Data representing the absolute clock time at the instant when particular program data is received.
  • PCR Program clock reference. The clock reference applying to a particular program.
  • A receiver which decodes or presents this program must adjust its own internal timing reference to match the received PCRs.
  • Intra Frame A frame that is coded using only data of a single original frame and that therefore can be decoded independently of any other data in the program.
  • Predicted Frame A frame that is coded using data from the corresponding original frame and one or more preceding frames.
  • B-Frame Bi-directionally predicted frame. A frame that is coded using data from the corresponding original frame and both a preceding and a following anchor frame.
  • Anchor Frame An I- or P-frame that is used to predict another frame. B-frames have two anchor frames. These are the closest preceding I- or P-frame and the closest following I- or P-frame.
  • PAT Program association table. A table defined by the MPEG standard to identify the programs included in a received multiplex.
  • PMT Program map table. A table defined by the MPEG standard to identify the data stream components of a particular program included in a received multiplex.
  • PID Packet identification data. A unique MPEG data code included in the header of each packet of a particular data stream. Most PIDs associated with the data streams of a particular program can be inferred by decoding the PAT and PMT tables of the same multiplex. All packets corresponding to the PAT data stream are assigned a PID value of 0. The PIDs assigned to the PMT data streams of each program can be inferred from the PAT.
  • Splice command A pulse that triggers the actual splice.
  • Sequence end code A data code embedded in a compressed MPEG data stream to indicate the end of a video segment.
  • FIFO Literally first in first out.
  • GOP Group of pictures. In MPEG, a sequence of coded fields beginning with an I-frame. A closed GOP can be reconstructed without referencing any data not contained within the GOP. An open GOP may include one or more B-frames to be displayed before the first I-frame received at the start of the GOP. Such B-frames cannot be reconstructed without referencing the last anchor frame of the preceding GOP.
  • VBR Variable bit rate. A characteristic of a compression system in which the coded data rate varies with image complexity.

MODES FOR CARRYING OUT THE INVENTION
  • The components implemented by the present invention are described at an architectural, functional level. Many of the elements may be configured using well-known structures, particularly those designated as relating to various compressed video signal processing techniques. Additionally, for the logic to be included within the system of the present invention, functionality and flow diagrams are described in such a manner that those of ordinary skill in the art will be able to implement the particular methods without undue experimentation. It should also be understood that the techniques of the present invention may be implemented using a variety of technologies. For example, the various components of the splicing system described further herein may be implemented in software running on a computer system, or implemented in hardware utilizing microprocessors, application-specific integrated circuits, programmable logic devices, or various combinations thereof. It will be understood by those skilled in the art that the present invention is not limited to any one particular implementation technique, and that those of ordinary skill in the art, once the functionality to be carried out by such components is described, will be able to implement the invention with various technologies without undue experimentation.
  • Referring to FIG. 1, there is illustrated a general purpose, multi-program splicing system 100.
  • One or more programs are received on each of N different multiplexes 101a to 101N.
  • A scheduling system 120 specifies which programs from each of the N input multiplexes will be selected for combination into M output multiplexes 110a to 110M. Changes to the formulation of each output multiplex 110 and the precise time of such changes are also controlled by the scheduling system 120.
  • Each output multiplex 110 may optionally be provided to one of several remultiplexers or reformatters 115a to 115M as shown in Figure 1.
  • A remultiplexer 115 receives a program multiplex from the splicing system 100 and rearranges the packets constituting the several programs in order to more efficiently utilize the available channel bandwidth.
  • An example of a suitable remultiplexer is described in copending U.S. Patent Application entitled "Method and Apparatus for Multiplexing Video Programs for Improved Channel Utilization," Serial No. 08/560,219, assigned to the assignee of the present invention and incorporated herein by reference.
  • The functions performed by a remultiplexer or reformatter 115 may be incorporated into the splicing system 100.
  • Each incoming program multiplex 101 is received by one of N different input modules 210a - 210N.
  • The input modules 210 convert the signals received according to a particular interface into the format required for distribution over a common bus 215.
  • The Schedule Translation unit 230 translates the externally generated schedule or playlist from the scheduling system 120 into instructions for each of M output modules 220a - 220M. In this example, these instructions are also conveyed over the same common bus 215.
  • Each output module 220 has independent access to all programs received from each of the N input modules 210.
  • The output modules 220 not only extract a particular set of programs from the bus 215 but also cut out old programs and cut in new programs into the output multiplex at various times as instructed by the Schedule Translation unit 230.
  • The compressed data streams of a program include time stamps specifying the precise time that video, audio, or other data frames are to be decoded and displayed at the receiver.
  • In some cases, frames may be decoded at one time instant and displayed or presented at a later time instant. An example is MPEG B-frames, which depend on both preceding and following frame data to be properly decoded.
  • For this reason, both Decoding Time Stamps (DTSs) and Presentation Time Stamps (PTSs) may be included with the program data.
  • Time stamps alone are not sufficient to recover the timing of a program. Since the time stamps are related to a particular clock reference, and since the clock references used by different encoded programs may be independent, the MPEG standard requires that the current value of the clock reference be included in the bit stream at a certain minimum frequency. These clock reference values are referred to as PCRs.
  • One of the key tasks performed by an input module 210 is to normalize all time stamps (both DTS and PTS) so that they become relative to a single common reference. By ensuring that the common reference has a known relationship with the current time of day (or some other reference shared with the scheduling system), it becomes possible to accurately identify the location within a data stream where a splice should be performed.
  • It also becomes possible for a remultiplexer or reformatting system 115 to ensure that the various frames of each program component are delivered to the receiving systems at a suitable rate. This results because the corrected time stamps now convey the information needed to ensure that the corresponding data units arrive at the receiving station in time for decoding but not early enough to cause a buffer overflow.
  • The time-stamp correction process removes any further dependence on the PCRs embedded in the bit stream. This is necessary, since any buffering delays introduced during the course of processing the data streams of a program would seriously compromise the usefulness of the embedded PCRs. That is, the usefulness of a clock reference is decreased once the stream has been subjected to an unknown amount of delay. Processing elements having access to the same common reference need only consider the corrected time stamps when controlling playback timing. At some point, the PCRs must be regenerated based on the common reference. In this embodiment, the PCRs are regenerated by a remultiplexer unit 115.
  • a suitable remultiplexer 115 that can be adapted to use the corrected time stamps to control the rate of transmission of the various constituent streams is described in the above-referenced copending patent application. This adaptation is achieved by using the flowchart described further herein with respect to Figure 14 for sequencing the stream of packets for transmission.
  • FIG. 3 shows a more detailed version of an input module 210.
  • A program multiplex is received through an input interface 310.
  • A PCR counter 315 is maintained for each program in the received multiplex.
  • (Different programs having a common PCR reference may share the same PCR counter 315.) Since all time stamps encoded in the form specified by MPEG today are based on a 90 kHz clock, each counter in this example is incremented at this same rate. Each time a PCR is detected in the bit stream, it is immediately used to load the corresponding counter 315. Alternatively, a more sophisticated phase-locked loop counter solution may be employed.
  • a shared system PCR counter 325 locked to the common system reference is also maintained.
  • Each time a time stamp is received (DTS or PTS), it is summed with a correction value computed as the current difference between the counter value stored in the system PCR reference counter 325 and the count corresponding to the PCR reference for the particular program.
  • The time stamp embedded in the bit stream is then replaced with this result.
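  • As a sketch, the correction arithmetic reduces to the following; the 33-bit wrap-around and 90 kHz units follow MPEG conventions, while the function name and example values are assumptions made only for illustration:

        PCR_MODULUS = 2 ** 33      # MPEG DTS/PTS values wrap at 33 bits (90 kHz units)

        def normalize_time_stamp(embedded_ts, program_pcr_count, system_pcr_count):
            """Re-reference a DTS or PTS from the program's own PCR time base to the
            common system reference maintained by the input module (illustrative)."""
            correction = (system_pcr_count - program_pcr_count) % PCR_MODULUS
            return (embedded_ts + correction) % PCR_MODULUS

        # Example: a PTS of 1,000,000 in a program whose PCR counter lags the shared
        # system counter by 4,500 ticks (50 ms at 90 kHz) is rewritten as 1,004,500.
        print(normalize_time_stamp(1_000_000, program_pcr_count=90_000,
                                   system_pcr_count=94_500))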
  • The processes of PCR Extraction, Time Stamp Extraction, and Time Stamp Insertion are shown in more detail by the flow charts in Figures 4, 5, and 6, respectively. These flow charts are specific to the MPEG standard.
  • FIG. 4 is a flow diagram for the process of PCR Extraction.
  • In MPEG, this is accomplished by extracting from the bitstream and decoding Program Association Tables (PATs), which identify the programs comprising the multiplex and provide the information needed to locate the Program Map Tables (PMTs), which are also embedded in the same multiplex.
  • Each PMT is associated with one of the multiplexed programs and provides the information needed to identify the video, audio, and any other components of the particular program.
  • This information is in the form of a Packet ID (PID).
  • PIDs are embedded in the header of all packets and only one PID value is used to identify all packets of a single data stream component of the multiplex.
  • The PMT also specifies the data stream containing the PCRs to be used for reconstructing the timing of the program.
  • The PID corresponding to the PAT is always fixed, and therefore known, while the PIDs of the PMTs and the PIDs of the stream components of a program are usually inferred by reading the PAT and PMT tables, respectively.
  • The procedure starts at Start Box 410 with a first step of getting the next packet from the incoming stream at step 420. If the packet is a PAT packet at step 430, then at step 440, the PMT PIDs for each program are extracted. If the packet was not a PAT packet, then at step 450 it is determined whether it is a known PMT packet. If it is, then at step 460, the PCR PID for the particular program is extracted, and the procedure returns to get the next packet. If the packet is not a known PMT packet at step 450, then at decision box 470 it is determined whether the packet is one with a PCR. If not, the procedure returns to get the next packet. If it is a packet with a PCR included, then at step 480 the PCR is extracted and at step 490 the PCR is copied to the output port assigned for the particular program. The procedure then returns to get the next packet again at step 420.
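  • A compact software rendering of the Figure 4 loop might look like the following sketch; each packet is represented here as a plain dictionary with illustrative keys, and the transport-packet parsing itself is assumed to happen elsewhere:

        def pcr_extraction_loop(packets, outputs):
            """Sketch of the Figure 4 flow: learn PMT PIDs from the PAT, learn each
            program's PCR PID from its PMT, then copy extracted PCRs to the output
            port assigned to that program. Keys ('pid', 'pat', 'pmt', 'pcr') are
            illustrative, not MPEG syntax."""
            pmt_pids = set()     # PIDs known to carry PMTs, learned from the PAT
            pcr_pids = {}        # PID carrying a program's PCR -> program number
            for pkt in packets:
                if pkt["pid"] == 0:                        # PAT packets always use PID 0
                    pmt_pids.update(pkt["pat"].values())   # program number -> PMT PID
                elif pkt["pid"] in pmt_pids:               # a known PMT packet
                    program, pcr_pid = pkt["pmt"]          # (program number, PCR PID)
                    pcr_pids[pcr_pid] = program
                elif pkt.get("pcr") is not None and pkt["pid"] in pcr_pids:
                    program = pcr_pids[pkt["pid"]]
                    outputs[program].append(pkt["pcr"])    # copy PCR to the program's port

  • The Time Stamp Extraction and Insertion flows of Figures 5 and 6 follow the same packet-classification skeleton, differing mainly in whether the embedded time stamp is copied to an output port or replaced with the corrected value from an input port.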
  • Figure 5 illustrates the procedure for Time Stamp Extraction, which begins at start box 510 with a first step of getting a next incoming packet at step 520. If the packet is found to be a PAT packet at decision box 530, then at step 535, the PMT PIDs for each program are extracted. If the packet is not a PAT packet at decision box 530, then at decision box 540 it is determined whether the packet is a known PMT packet. If it is, then at step 545 the PIDs for each stream component of the particular program are extracted. Otherwise, if the packet belongs to a known component of one of the programs and contains a Decoding Time Stamp (DTS) or Presentation Time Stamp (PTS), the time stamp is extracted and copied to the output port assigned to the particular program, mirroring the PCR extraction flow of Figure 4.
  • Figure 6 illustrates the procedure for Time Stamp Insertion, which begins at start box 610 with a first step of retrieving the next packet at step 615. If the packet is a PAT packet at decision box 620, then the PMT PIDs for each program are extracted at step 625. If the packet is a known PMT packet at decision box 640, then the PIDs for each stream component of the particular program are extracted at step 645. If the packet is a known component of one of the programs at step 650, then it is determined at decision box 660 whether a DTS is present in the packet. If so, then at step 670, the DTS is replaced with a DTS received on an input port assigned to the particular program.
  • Each output module 220 combines a plurality of programs selected from the common bus 215 into a single multiplexed output signal 110 propagated through an output interface 715.
  • A single Program Splicer 720 processes the audio and video corresponding to each output program 725.
  • The Program Splicer 720 receives the audio and video components of a current program and the audio and video components of a next program from a pair of demultiplexers.
  • The schedule control information originates from the scheduler (Figure 1) and includes the splice time for executing the next transition from the current program to the next program. After the transition has been completed, the next program becomes the current program and the scheduler sets a new next program and specifies the new splice time.
  • FIG. 8 illustrates an example from MPEG.
  • The frames of both the current stream and the next stream are shown sequenced in the order in which they are received (decoding order).
  • Also shown is the resulting stream that has been spliced according to the methods of the present invention. This result is shown in both decoding and presentation order.
  • The presentation order is specified by the index assigned to each frame. This index is also known as the temporal reference. Since the B-frames cannot be reconstructed without knowledge of the anchor frames (the closest I- or P-frames both preceding and following a B-frame), both anchor frames must be sent first.
  • As the PTS and DTS values in Figure 8 indicate, an anchor frame is not presented before the next anchor frame is decoded.
  • This is because all B-frames that are received in between must be decoded and presented first.
  • The first step in performing the splice is to compare the time stamps associated with each frame with the scheduled splice time.
  • For this comparison, the presentation time stamp (PTS) is used.
  • However, the comparison test should only be performed when an I- or P-frame is received. This is because the stream should not be interrupted at a B-frame, since a display artifact may occur if the receiver is sent an anchor frame but not the in-between B-frame(s) that are to be displayed before it.
  • MPEG does not require that all frames be accompanied by time stamps. This is because the missing time stamps can always be extrapolated from a previous time stamp based on the known frame rate and frame-display intervals.
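  • The comparison test, including the extrapolation of missing time stamps, can be sketched as follows; the Frame record and its fields are illustrative assumptions, expressed in 90 kHz units:

        from collections import namedtuple

        # Illustrative frame record: coding type, optional PTS, and display interval.
        Frame = namedtuple("Frame", "type pts display_interval")

        def splice_point_reached(frame, splice_time, state):
            """Extrapolate a PTS when the frame carries none, and only report a
            permissible splice point when an I- or P-frame is received (sketch)."""
            if frame.pts is not None:
                state["last_pts"] = frame.pts
            else:                                   # extrapolate from the known frame rate
                state["last_pts"] += frame.display_interval
            return frame.type in ("I", "P") and state["last_pts"] >= splice_time

        # Example: ~29.97 fps gives a display interval of 3003 ticks at 90 kHz.
        state = {"last_pts": 0}
        print(splice_point_reached(Frame("B", None, 3003), 9009, state))   # False
        print(splice_point_reached(Frame("P", 9009, 3003), 9009, state))   # True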
  • The next step is to ensure that the next stream begins with an I-frame, since only an I-frame can be decoded independently of other frames in the sequence.
  • I-frames are accompanied by various headers containing information needed to decode the bit stream. All such headers are herein considered to be part of the I-frame.
  • In some cases, the first I-frame of the next stream may fail to include the headers necessary to convey the change in parameter settings. In that case, the splicing system should either insert a copy of the most recently received relevant header, or the splice can be deferred until such header information is received.
  • In addition, certain early-model MPEG receivers may be incapable of performing a seamless transition between two streams characterized by different encoding parameters such as those pertaining to image resolution.
  • Other MPEG receivers may require the insertion of a 'Sequence End Code' into the compressed bitstream in order to execute the transition seamlessly. Therefore, it may be advantageous for the splicing system to insert the 'Sequence End Code' at the point of transition, either unconditionally or whenever certain changes in parameter settings are detected.
  • In addition to aligning the most recent I-frame in the next video stream with the splice transition instant, the splicer must delete all B-frames encountered after the I-frame and before the next I- or P-frame. Since B-frames are dependent on two anchor frames, of which only one is retained after the splice, these first B-frames cannot be successfully reconstructed.
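  • In software, this alignment and deletion step could be sketched as below; each frame is a dictionary with an illustrative 'type' key, and the function is a sketch rather than the patent's implementation:

        def start_next_stream(frames):
            """Sketch: begin the next stream at an I-frame and delete the B-frames
            that follow it up to the next I- or P-frame, since one of their two
            anchor frames is no longer present after the splice."""
            out, started, dropping = [], False, False
            for f in frames:
                if not started:
                    if f["type"] == "I":          # wait for an independently decodable frame
                        started, dropping = True, True
                        out.append(f)
                    continue
                if dropping and f["type"] == "B":
                    continue                      # cannot be reconstructed; delete
                dropping = False                  # reached the next I- or P-frame
                out.append(f)
            return out

        # Example: P B B I B B P B B -> the I-frame starts the stream and the two
        # B-frames that immediately follow it are removed.
        frames = [{"type": t} for t in "PBBIBBPBB"]
        print("".join(f["type"] for f in start_next_stream(frames)))   # "IPBB"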
  • FIG. 9 is a block diagram of an exemplary Program Splicer 720 in accordance with the present invention.
  • The Program Splicer includes two program processors 910a and 910b.
  • At any one time, only one Program Processor is active and supplying video and audio data to OR gates 915 and 920, respectively.
  • The other Program Processor remains in standby mode.
  • The video and audio outputs of the processor that is in standby mode are set to logic '0' while the video and audio inputs are queued in preparation for the next splice.
  • The time of the next splice is supplied by Schedule Control input signal 925. This signal is always monitored by the active Program Processor.
  • Video FIFO 1010 and audio FIFO 1015 are used to align the output of a Program Processor with the output of the other Program Processor.
  • Latch 1035 specifies whether the processor is in standby or active mode. When in standby mode, Latch 1035 causes AND gates 1038 and 1039 to set the audio and video outputs, respectively, to logic '0'. At the same time, Latch 1035 will enable I-frame Detector 1040. Each time the beginning of an I-frame is detected on the video input signal, I-frame Detector 1040 will cause video FIFO 1010 to reset.
  • When this occurs, Latch 1025 becomes set, thereby allowing the I-frame data to be written into video FIFO 1010.
  • I-Frame Detector 1040 also sets Latch 1020, thereby causing B-frame Detector 1045 to begin detecting B-frames. If the beginning of a B-frame is detected, then Latch 1025 will be cleared by B-Frame Detector 1045, thereby preventing the B-frame data from being written into video FIFO 1010.
  • Latch 1025 now causes I/P-Frame Detector 1050 to begin detecting both I-frames and P-frames.
  • Upon detection of the next I- or P-frame, I/P-Frame Detector 1050 will again cause Latch 1025 to be set, allowing video FIFO 1010 to resume receiving data. I/P-Frame Detector 1050 also clears Latch 1020, thereby disabling the detection of B-frames by B-Frame Detector 1045. All video data is then written into video FIFO 1010, continuing until the next I-frame is detected by I-frame Detector 1040. At this time, the video FIFO 1010 is again reset, and the initialization sequence repeats.
  • Similarly, audio FIFO 1015 is reset by Latch 1030.
  • The resetting of a FIFO essentially erases its contents.
  • The FIFO will remain empty as long as it is in reset mode.
  • The audio FIFO 1015 will remain in reset mode until the beginning of a new audio frame is detected by Audio Frame Detector 1055.
  • The Audio Frame Detector then clears Latch 1030, thereby allowing the detected audio frame to be written into audio FIFO 1015.
  • The Next Splice input signal must be continuously compared to the time stamps embedded in the audio and video streams in order to permit the changeover back to standby.
  • Audio time stamps are extracted by Time Stamp Extractor 1060. The extracted time stamp is then summed with the audio frame period to obtain the time of completion of the current audio frame. If Comparator 1065 determines that this frame completion time exceeds the scheduled time of the next splice, then it will cause AND gate 1038 to output a logic level of '0', effectively deleting the current audio frame. This step is taken to ensure that the splice occurs at audio frame boundaries without overlapping the audio segments of the current and next programs. In this preferred case, a short silent audio interval is accepted in return for a seamless video transition.
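  • The audio-side gating therefore amounts to a per-frame check of the following form (90 kHz units; the function and example values are illustrative):

        def keep_audio_frame(frame_pts, frame_period, next_splice_time):
            """Sketch of the Comparator 1065 test: keep an audio frame only if it
            completes before the scheduled splice, accepting a short silence rather
            than overlapping the audio of the current and next programs."""
            return frame_pts + frame_period <= next_splice_time

        # Example: 32 ms audio frames are 2880 ticks at 90 kHz.
        print(keep_audio_frame(frame_pts=90_000, frame_period=2_880,
                               next_splice_time=92_000))   # False -> frame deleted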
  • The video time stamps are extracted by Time Stamp Extractor 1070 and compared to the next splice time by Comparator 1075. If an extracted time stamp is greater than or equal to the next splice time, then Comparator 1075 enables I/P-Frame Detector 1080. Upon detection of the beginning of the next I- or P-frame, I/P-Frame Detector 1080 issues a pulse on the Splice Command output signal. This pulse clears Latch 1035 and causes the Program Processor to return to standby mode. The Splice Command output pulse is also coupled to the Splice Command input signal of the other Program Processor, thereby allowing the other Program Processor to complete the splice.
  • One method for implementing Time Stamp Extractors 1060 and 1070 was described with reference to Figure 5. Methods for implementing Frame Detectors 1040, 1045, 1050, 1055, and 1080 are well known to those familiar with the MPEG standard. The process of generating the time-shifted and edited sequence of audio and video frames for output by the Program Processor 910 is illustrated by the timing diagrams shown in Figure 11. The signals correspond to the operation of the circuit described with respect to Figure 10.
  • The Program Processor shown in Figure 9 is an example of a hardware implementation. Redundant active and standby Program Processors have been described in order to simplify the presentation of a complete solution capable of multiple and repetitive splices. However, it should be noted that the two-processor solution could be replaced by a single integrated processor with similar logic for processing the current program in active mode and the next program in standby mode. Then, upon receiving a splice command, the next stream would be switched to the current program logic and a new next program would be switched to the next program logic. This single-processor alternative is particularly well suited for software solutions, which are typically implemented as sequential processes rather than multiple concurrent ones.
  • One parameter that must be adjusted after the splice is the temporal reference. This is because MPEG specifies that the first frame to be displayed in each Group of Pictures (GOP) be assigned a temporal reference of zero. Assume, for example, that the next program following the splice begins with a new GOP. Then, using the example of Figure 8, the temporal references must be decremented by two, since the two deleted B-frames received after the I-frame were intended to be the first two displayed frames of the GOP. This adjustment of the temporal references ceases once the next GOP header is received. The process is illustrated by the flow chart in Figure 12.
  • The process begins at start box 1210 once a Splice Command signal is detected. Then the first step 1220 is to extract the temporal reference for the next frame (the I-frame). At step 1230, a variable, TRc, is set equal to the extracted temporal reference of the I-frame. Then at step 1240, the temporal reference in the bit stream is changed to zero. If, at decision box 1250, it is determined that the GOP header is detected, then a closed-GOP bit is set at step 1260. Then, or otherwise, at step 1270, the procedure waits for a next frame.
  • If the next frame does not include a GOP header at decision box 1280, then at step 1290 the value of TRc is subtracted from the temporal reference in the bitstream, and the procedure returns to step 1270 to wait for a next frame. If a GOP header is detected at decision box 1280, the procedure ends.
  • A closed GOP is a group of pictures wherein each of the frames comprising the group can be regenerated without referencing any frames outside of the group. If the program-splicing process is implemented according to the method and hardware depicted in Figure 9 and according to the example of Figure 8, then the first GOP after the splice point will always become closed even if it was initially open. As shown in the flowchart of Figure 12, the closed-GOP bit is set accordingly.
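  • A software sketch of the Figure 12 adjustment, operating on frames represented as dictionaries with illustrative keys:

        def adjust_temporal_references(frames):
            """Sketch of Figure 12: zero the temporal reference of the splice-in
            I-frame, subtract the same offset (TRc) from the frames that follow,
            mark the first GOP after the splice as closed, and stop once the next
            GOP header is seen. Dictionary keys are illustrative."""
            it = iter(frames)
            first = next(it)                          # the I-frame starting the next stream
            tr_c = first["temporal_reference"]
            first["temporal_reference"] = 0
            if first.get("gop_header"):
                first["closed_gop"] = True            # first GOP after the splice is closed
            for f in it:
                if f.get("gop_header"):               # adjustment ends at the next GOP header
                    break
                f["temporal_reference"] -= tr_c
            return frames

        # Example: two leading B-frames were deleted (as in the Figure 8 case), so the
        # I-frame's original temporal reference of 2 becomes 0 and following frames are
        # reduced by 2 until the next GOP header; the values below are illustrative.
        gop = [{"temporal_reference": 2, "gop_header": True},
               {"temporal_reference": 5}, {"temporal_reference": 3},
               {"temporal_reference": 4}, {"temporal_reference": 0, "gop_header": True}]
        print([f["temporal_reference"] for f in adjust_temporal_references(gop)])
        # [0, 3, 1, 2, 0]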
  • In many cases the temporal reference can simply be ignored, since each decoder infers the temporal reference based on standardized rules for the decoding and display of MPEG sequences.
  • The parameters which cannot be ignored, however, are the decoding and presentation time stamps, and these parameters must also be adjusted whenever different programs are spliced.
  • A method for performing the second Time Stamp Correction 1300 is described with respect to the flow chart in Figure 13.
  • A correction value corresponding to the time duration of the delay incurred at the video FIFO 1010 (Figure 10) is computed at steps 1335, 1340 and 1345 upon receipt at step 1330 of the first frame after the splice.
  • DTS1, determined at step 1335, is the decoding time stamp which would have been assigned to the corresponding frame of the previous stream if the splice had not occurred.
  • DTS2, computed at step 1340, is the decoding time stamp that was previously assigned to the first frame of the next stream prior to the delay introduced by video FIFO 1010. The difference between these two time stamps is the correction value TSc.
  • This correction value is then added to all of the PTS (step 1385) and DTS (step 1375) time stamps which follow.
  • The exception at step 1360 is the PTS of the I-frame used to initiate the next stream immediately after the splice.
  • In the example of Figure 8, the I- and P-anchor frames are presented exactly three frame intervals after they are decoded. The delay is to account for the in-between B-frames and the subsequent frame reordering. However, since the first two frames received after the splice are both anchor frames, and there are no B-frames to be shown between them, the reordering delay is reduced.
  • In this case, the I-frame (the first of the two anchor frames) would be displayed immediately after displaying the last frame (X8P) of the previous stream. Since the presentation time of this last frame is equal to the decoding time of the I-frame, the I-frame PTS can be derived by adding the display interval time (calculated at step 1325) of the preceding last frame to the I-frame DTS at step 1360.
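  • Numerically, the correction reduces to the sketch below (90 kHz units); note that approximating DTS1 as one frame interval past the last DTS of the old stream is an assumption of this illustration, not a statement of the patent's exact bookkeeping:

        def splice_time_stamp_correction(last_old_dts, old_frame_interval,
                                         first_new_dts, last_old_display_interval):
            """Sketch of Figure 13. TSc = DTS1 - DTS2 is added to every DTS and PTS
            that follows the splice; the splice-in I-frame's PTS is instead set one
            display interval after the last old frame is presented (illustrative)."""
            dts1 = last_old_dts + old_frame_interval     # DTS the old stream would have used
            dts2 = first_new_dts                         # original DTS of the new I-frame
            ts_c = dts1 - dts2
            corrected_i_frame_dts = dts2 + ts_c          # equals dts1
            corrected_i_frame_pts = corrected_i_frame_dts + last_old_display_interval
            return ts_c, corrected_i_frame_dts, corrected_i_frame_pts

        # Example at ~29.97 fps (3003-tick frame interval):
        print(splice_time_stamp_correction(last_old_dts=300_300, old_frame_interval=3_003,
                                           first_new_dts=90_000,
                                           last_old_display_interval=3_003))
        # (213303, 303303, 306306)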
  • The display interval should be determined by examining the codewords in the bit stream which contain this relevant information (specifically the MPEG frame rate value, the MPEG field/frame encoding mode, and the MPEG repeat-first-field flag).
  • The MPEG-2 definition is less strict: the field which occurs earlier in time may be either the field with the odd lines or the field with the even lines.
  • A specific bitstream flag (known as top-field-first) is provided to make this distinction.
  • In addition, an MPEG-2 frame is permitted to span not only two field intervals, but occasionally three field intervals.
  • In that case, a flag (known as repeat-first-field) is set to indicate to the decoder that a third field is to be displayed and that it should be an exact copy of the first field. Note that the second field could not have been repeated instead, since consecutive fields must alternate between odd lines and even lines to avoid artifacts.
  • A problem can therefore arise at a splice if, for example, the current program ends with a field consisting of even lines and the next program begins with a field also consisting of even lines.
  • A spliced bitstream that results in two successive fields both with even (or odd) lines is not permitted by MPEG, as such streams cannot be displayed correctly by existing display devices. Different steps can be taken to prevent this situation.
  • The preferred solution is to edit the repeat-first-field flag corresponding to the last frame of the current stream prior to the splicing point. If the last displayed field of the current stream and the first displayed field of the next stream have the same odd/even line parity, then the repeat-first-field flag can be simply toggled.
  • That is, a value of logic '0' could be changed to a logic '1', while a logic '1' could be changed to a logic '0'.
  • The effect is to lengthen or shorten, respectively, the last frame of the current stream by exactly one field. In some cases, special steps may need to be taken to ensure that this variation is properly accounted for when determining the time stamp correction factor (TSc).
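  • The parity fix-up can be sketched as a single flag edit on the last frame of the current stream; the frame-header representation and field names below are assumptions made for illustration:

        def fix_field_parity(last_old_frame, first_new_field_parity):
            """Sketch: if the last displayed field of the current stream and the first
            displayed field of the next stream share the same odd/even parity, toggle
            repeat_first_field on the last old frame, lengthening or shortening it by
            exactly one field. The dictionary keys are illustrative."""
            if last_old_frame["last_displayed_parity"] == first_new_field_parity:
                last_old_frame["repeat_first_field"] ^= 1      # '0' -> '1' or '1' -> '0'
                # Any change in frame duration must also be reflected in the time stamp
                # correction factor TSc computed for the splice.
            return last_old_frame

        # Example: both streams would otherwise meet with even-line fields.
        print(fix_field_parity({"last_displayed_parity": "even", "repeat_first_field": 0},
                               first_new_field_parity="even"))
        # {'last_displayed_parity': 'even', 'repeat_first_field': 1}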
  • One of the tasks of the remultiplexer may be to limit or regulate the data rate of the resultant multiplex.
  • This data rate regulation capability greatly improves the versatility of the switching system and allows the use of variable bit rate (VBR) streams.
  • A good statistical remultiplexer can efficiently combine multiple VBR streams into a single multiplex, but depending on the particular schedule in effect at any given time, the combined data rate may exceed the available capacity of the channel, in which case some data rate regulation capability will be needed. In such cases, it may be easier to implement some of the post-splice adjustments after the data rate has been limited to a more manageable rate.
  • For example, a software-based remultiplexer may have sufficient reserve processing capacity to implement most if not all of the stream splicing functions once the data rate has been limited.
  • A remultiplexer 115 with data rate regulation capability is described in copending U.S. Patent Application Serial No. 08/631,084, entitled "Compressed-Video Reencoder System for Modifying the Compression Ratio of Digitally Encoded Video Programs," assigned to the assignee of the present invention.
  • Such a remultiplexer may be advantageously coupled to the output of an output module for a second reason. This is because the remultiplexer can ensure that the receiver buffer occupancy level remains within limits after switching between two different streams. As discussed previously, the buffer could otherwise overflow or underflow even when both streams have been encoded at the same data rate.
  • The procedure 1400 for determining which packet to send next first determines, at decision box 1410, whether all receiver buffers are already full. If they are, then a null data packet is sent at step 1420. Otherwise, a data packet is selected for the stream which has the smallest DTS for its next frame at step 1430. If sending this data packet would overflow the receiving buffer (decision box 1440), then at step 1450, the packet corresponding to the stream with the next smallest DTS for its next frame is selected. This process continues until a packet is selected that does not overflow the receiver's buffer. At that point, the next packet for the selected stream is transmitted at step 1460.
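  • A direct software reading of the Figure 14 procedure; the stream and buffer-model objects, and their method names, are illustrative assumptions:

        def pick_next_packet(streams, null_packet):
            """Sketch of Figure 14: if every modeled receiver buffer is full, send a
            null packet; otherwise try streams in order of the smallest DTS of their
            next frame, skipping any stream whose next packet would overflow its
            receiver buffer, and transmit the first packet that fits."""
            if all(s.buffer_full() for s in streams):
                return null_packet
            for s in sorted(streams, key=lambda s: s.next_frame_dts()):
                if not s.would_overflow():
                    return s.pop_next_packet()
            return null_packet        # nothing fits in this slot; send stuffing instead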
  • A second implementation variation would be to merge the remultiplexer 115 and output module 220 into a single unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
EP98943427A 1997-09-12 1998-08-27 Seamless splicing of compressed video programs Withdrawn EP1013097A1 (de)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US92841497A 1997-09-12 1997-09-12
US928414 1997-09-12
PCT/US1998/017757 WO1999014955A1 (en) 1997-09-12 1998-08-27 Seamless splicing of compressed video programs

Publications (1)

Publication Number Publication Date
EP1013097A1 true EP1013097A1 (de) 2000-06-28

Family

ID=25456204

Family Applications (1)

Application Number Title Priority Date Filing Date
EP98943427A Withdrawn EP1013097A1 (de) 1997-09-12 1998-08-27 Nahtlose verbindung von komprimierten videoprogrammen

Country Status (5)

Country Link
EP (1) EP1013097A1 (de)
JP (1) JP2001517040A (de)
AU (1) AU9122998A (de)
CA (1) CA2303149C (de)
WO (1) WO1999014955A1 (de)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7031348B1 (en) * 1998-04-04 2006-04-18 Optibase, Ltd. Apparatus and method of splicing digital video streams
US6785289B1 (en) * 1998-06-05 2004-08-31 Sarnoff Corporation Method and apparatus for aligning sub-stream splice points in an information stream
AU4944699A (en) 1998-06-29 2000-01-17 Limt Technology Ab Method and apparatus for splicing data streams
US7068724B1 (en) 1999-10-20 2006-06-27 Prime Research Alliance E., Inc. Method and apparatus for inserting digital media advertisements into statistical multiplexed streams
DE60039861D1 (de) 1999-04-20 2008-09-25 Samsung Electronics Co Ltd Werbeverwaltungssystem für digitale videoströme
EP1056259B1 (de) * 1999-05-25 2005-09-14 Lucent Technologies Inc. Verfahren und Vorrichtung für Telekommunikationen mit Internet-Protokoll
US20020024610A1 (en) 1999-12-14 2002-02-28 Zaun David Brian Hardware filtering of input packet identifiers for an MPEG re-multiplexer
US20040148625A1 (en) 2000-04-20 2004-07-29 Eldering Charles A Advertisement management system for digital video streams
US7068719B2 (en) 2001-06-01 2006-06-27 General Instrument Corporation Splicing of digital video transport streams
JP2003204482A (ja) 2001-10-22 2003-07-18 Matsushita Electric Ind Co Ltd 放送装置
US9286214B2 (en) 2003-06-06 2016-03-15 Arris Enterprises, Inc. Content distribution and switching amongst data streams
US9456243B1 (en) 2003-06-06 2016-09-27 Arris Enterprises, Inc. Methods and apparatus for processing time-based content
CN101065963B (zh) * 2003-08-29 2010-09-15 Rgb网络有限公司 提供低延迟类vcr效果和节目改变的视频多路复用器系统
CA2534979A1 (en) * 2003-09-05 2005-03-17 General Instrument Corporation Methods and apparatus to improve the rate control during splice transitions
GB0413723D0 (en) 2004-06-18 2004-07-21 Nds Ltd A method of dvr seamless splicing
US8615038B2 (en) * 2004-12-06 2013-12-24 Nokia Corporation Video coding, decoding and hypothetical reference decoder
CN103024603B (zh) * 2012-12-27 2016-06-15 合一网络技术(北京)有限公司 一种用于解决播放网络视频时短时停顿的装置及方法
CN116364064B (zh) * 2023-05-19 2023-07-28 北京大学 一种音频拼接方法、电子设备及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR0121301B1 (ko) * 1992-09-30 1997-11-17 사또오 후미오 편집신호 디코딩 장치
JPH07212766A (ja) * 1994-01-18 1995-08-11 Matsushita Electric Ind Co Ltd 動画像圧縮データ切り換え装置
JP3575100B2 (ja) * 1994-11-14 2004-10-06 ソニー株式会社 データ送信/受信装置及び方法並びにデータ記録/再生装置及び方法
GB2307613B (en) * 1995-08-31 2000-03-22 British Broadcasting Corp Switching bit-rate reduced signals
WO1997046027A1 (en) * 1996-05-29 1997-12-04 Sarnoff Corporation Preserving synchronization of audio and video presentation
US6137834A (en) * 1996-05-29 2000-10-24 Sarnoff Corporation Method and apparatus for splicing compressed information streams
US5917830A (en) * 1996-10-18 1999-06-29 General Instrument Corporation Splicing compressed packetized digital video streams

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO9914955A1 *

Also Published As

Publication number Publication date
AU9122998A (en) 1999-04-05
CA2303149A1 (en) 1999-03-25
JP2001517040A (ja) 2001-10-02
WO1999014955A1 (en) 1999-03-25
CA2303149C (en) 2003-10-21

Similar Documents

Publication Publication Date Title
CA2303149C (en) Seamless splicing of compressed video programs
EP1397918B1 (de) Verbinden von digitalen videotransportströmen
US7027516B2 (en) Method and apparatus for splicing
US6993081B1 (en) Seamless splicing/spot-insertion for MPEG-2 digital video/audio stream
US6137834A (en) Method and apparatus for splicing compressed information streams
US6611624B1 (en) System and method for frame accurate splicing of compressed bitstreams
US6038000A (en) Information stream syntax for indicating the presence of a splice point
AU754879B2 (en) Processing coded video
EP0881838B1 (de) Korrektur der vorgegebenen Zeit
US6208759B1 (en) Switching between bit-rate reduced signals
US20060093045A1 (en) Method and apparatus for splicing
US20060034375A1 (en) Data compression unit control for alignment of output signal
WO1998032281A1 (en) Information stream syntax for indicating the presence of a splice point
JP3668556B2 (ja) ディジタル信号符号化方法
O'Grady et al. Real-time switching of MPEG-2 bitstreams
JP2823806B2 (ja) 画像復号装置
KR100517794B1 (ko) 압축된정보스트림을스플라이싱하는방법및장치
KR100345312B1 (ko) 개방된 지오피 구조를 가지는 비디오 스트림의 끊어 잇기방법
Wells Transparent concatenation of MPEG compression
Helms et al. Switching and Splicing of MPEG-2 Transport Streams

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20000410

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE

RIN1 Information on inventor provided before grant (corrected)

Inventor name: MONTA, PETER, A., TERAYON COMMUNICATION SYSTEMS

Inventor name: KRAUSE, EDWARD, A.

17Q First examination report despatched

Effective date: 20010619

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Withdrawal date: 20011219

RIN1 Information on inventor provided before grant (corrected)

Inventor name: MONTA, PETER, A., TERAYON COMMUNICATION SYSTEMS

Inventor name: KRAUSE, EDWARD, A.