WO2005020558A2 - Method and system for re-multiplexing of content-modified mpeg-2 transport streams using interpolation of packet arrival times - Google Patents


Info

Publication number
WO2005020558A2
WO2005020558A2 (PCT/US2004/026124)
Authority
WO
WIPO (PCT)
Prior art keywords
stream
elementary stream
series
elementary
transport
Prior art date
Application number
PCT/US2004/026124
Other languages
French (fr)
Other versions
WO2005020558A3 (en)
Inventor
Jeyendran Balakrishnan
Hemant Malhotra
Original Assignee
Skystream Networks Inc.
Priority date
Filing date
Publication date
Priority claimed from US 10/640,872 (US 7,342,968 B2)
Priority claimed from US 10/640,871 (US 7,693,222 B2)
Priority claimed from US 10/640,866 (US 7,227,899 B2)
Priority claimed from US 10/641,323 (US 2005/0036557 A1)
Priority claimed from US 10/641,322 (US 7,274,742 B2)
Application filed by Skystream Networks Inc.
Priority to CA 2535455 (CA 2535455 A1)
Publication of WO2005020558A2
Publication of WO2005020558A3


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2365 Multiplexing of several video streams
    • H04N21/23655 Statistical multiplexing, e.g. by controlling the encoder to alter its bitrate to optimize the bandwidth utilization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236 Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream; Assembling of a packetised elementary stream
    • H04N21/2368 Multiplexing of audio and video streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/242 Synchronization processes, e.g. processing of PCR [Program Clock References]

Definitions

  • The present invention pertains to signals that are hierarchically organized into a systems layer stream and a lower-layer elementary stream, where an elementary stream is the streamed information of a component of a program, such as an audio signal or a video signal.
  • An example of a systems layer stream is a transport stream.
  • The invention pertains to selectively modifying one or more portions of an elementary stream and inserting the modified portions of the elementary stream into a modified systems layer stream.
  • The modified systems layer stream is configured so as to enable identification, extraction and real-time reproduction of its various portions.
  • a program signal is composed of one or more component signals referred to herein as elementary streams.
  • An example of an elementary stream can be one (natural or synthetic) audio signal, one (natural or synthetic) video signal, one closed captioning text signal, one private data signal, etc.
  • Several techniques are known for compressing, formatting, storing and conveying such elementary streams.
  • For example, the MPEG-1, MPEG-2, MPEG-4, H.263, H.263++, H.26L, and H.264/MPEG-4 AVC standards provide well-known techniques for encoding (compressing and formatting) video.
  • Likewise, MPEG-1 (including the so-called "MP3" format), MPEG-2, MPEG-4 and Dolby AC-3 provide well-known techniques for encoding audio.
  • MPEG-2 defines a technique for segmenting each elementary stream into packetized elementary stream ("PES") packets, where each PES packet includes a PES packet header and a segment of the elementary stream as the payload.
  • PES packets may be combined with "pack headers" and other pack specific information to form "packs".
  • the PES packets may be segmented into transport packets of a transport stream, where each transport packet has a transport packet header and a portion of a PES packet as payload.
  • These transport packets, as well as others, are serially combined to form a transport stream.
  • elementary streams may be divided into "sync-layer" (or "SL”) packets, including SL packet headers.
  • SL packets may be combined with PES packet headers, to form PES packets, and these PES packets may be segmented and combined with transport packet headers to form transport packets.
  • In other schemes, transport packets are not used. Rather, elementary stream data is segmented and real-time protocol ("RTP") packet headers are appended to each segment to form RTP packets. In addition, or instead, user datagram protocol ("UDP") or transmission control protocol ("TCP") packet headers may be appended to segmented data to form UDP or TCP packets.
  • MPEG-2 PES and transport streams encapsulating MPEG-2 video will be used herein as a model for illustrating the invention. Also, this invention is illustrated using a hierarchical signal, wherein elementary streams are carried as segments in packets or cells of one or more higher layers.
  • The term "systems layer" is used herein to refer to such higher layers.
  • The MPEG-2 PES streams and transport streams will be used as a specific example of the systems layer.
  • The systems layer need not be restricted to the "transport layer" of the OSI seven-layer model but can, if desired, include other layers such as the network layer (e.g., internet protocol or "IP"), the data link layer (e.g., ATM), and/or the physical layer.
  • Other types of elementary streams, such as encoded audio, MPEG-4 video, etc., may be used.
  • The term "transmission" is used herein but should be understood to mean the transfer of information under appropriate circumstances via a communications medium or storage medium to another device, such as an intermediate device or a receiver/decoder.
  • Audio-visual programs are obtained by using an appropriate combination of one or more elementary streams for storage or transmission of data.
  • one audio elementary stream and one video elementary stream may be combined, or one video elementary stream and multiple audio elementary streams may be combined.
  • the transport stream format enables both single program transport streams (SPTS) in which the elementary streams of a single audio-visual program are multiplexed together into a serial stream, and multiple program transport streams (MPTS), in which the component elementary streams of multiple audio-visual programs are all multiplexed together into a single serial stream.
  • Each of N elementary streams 100 (including ES1, ES2, through ESN) is first packetized into N streams of packetized elementary stream (PES) packets 110, independent of its underlying compression format.
  • Each PES packet comprises a PES packet header and, as payload, a segment of a single elementary stream; the payload contains data for only a single elementary stream.
  • A PES packet may contain data for more than one decoding unit (e.g., data for more than one compressed picture or for more than one compressed audio frame).
  • A variety of packetization strategies for forming PES packets from an elementary stream are permitted.
  • The PES packets from each elementary stream are further packetized into fixed-size (188-byte) Transport Stream (TS) packets 120.
  • As shown in FIG. 2, each TS packet 120 consists of a fixed 4-byte Packet Header 121, an optional Adaptation Field 122 of variable length, and the remaining bytes containing the PES packet data as Payload 123.
  • The fixed Packet Header 121 contains a field called the Packet IDentifier (PID), which is a unique numeric identifier or tag for each Elementary Stream 100 carried in a Transport Stream 120. For example, one PID is assigned to the video ES of a particular program, a second, different PID is assigned to the audio ES of that program, etc.
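The 4-byte header layout just described can be read with a few bit operations. The sketch below (names are illustrative, not from the patent) extracts the PID and the adaptation-field flags from a 188-byte TS packet:

```python
def parse_ts_header(packet: bytes):
    """Parse the fixed 4-byte MPEG-2 TS packet header (ISO/IEC 13818-1)."""
    if len(packet) != 188 or packet[0] != 0x47:
        raise ValueError("expected a 188-byte packet starting with sync byte 0x47")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]          # 13-bit Packet IDentifier
    adaptation_field_control = (packet[3] >> 4) & 0x03   # 01=payload, 10=adaptation field, 11=both
    continuity_counter = packet[3] & 0x0F
    return pid, adaptation_field_control, continuity_counter
```

A null packet, for instance, carries PID 0x1FFF.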
  • TS packets 120 from multiple underlying elementary streams 100 are then multiplexed together according to the rules for transport streams set forth in the MPEG-2 Systems specification. This includes insertion of special TS packets 130 containing System Information (SI), which include tables specifying the different programs within the transport stream as well as the PIDs which belong to each program.
  • Thus, the transport stream format consists of a lower compression layer, comprising the component elementary streams, and a higher systems layer, comprising the PES and TS packets.
  • The systems layer contains important timing information which enables the receiver to play back the audio-visual information in a time-synchronized manner.
  • The PES packet header contains a Presentation Time Stamp (PTS), which indicates the time instant at which the associated audio or video presentation unit (an audio or video frame) of a given audio-visual program should be decoded and presented to the user.
  • This PTS is relative to the System Time Clock (STC) used by the transmitting encoder.
  • The TS packets also carry samples of this encoder clock, called Program Clock References (PCRs), in a quasi-periodic manner to enable the receiver to synchronize its clock to that of the encoder.
  • a requirement for MPEG-2 transport streams is that the PCR for each program must be sent at least once every 100 ms.
  • In some systems (e.g., under DVB rules), these PCR packets are to be sent at least once every 40 ms.
  • PCR information is carried in the TS packet inside the Adaptation Field 122.
  • PCRs for a given program can be carried in the TS packets carrying any one of the component elementary streams 100 of that program (as identified by its PID), or they can be carried in separate TS packets with a unique PCR PID. Typically PCRs are carried in the video PID of a program.
  • a transcoder receives an already encoded elementary stream and re-encodes it, e.g., at a different bit rate, according to a different encoding standard, at a different resolution, using different encoding options, changing the audio sampling rate or video frame rate, etc. while maintaining the underlying content with as much fidelity as possible.
  • a splicer is a device that appends one signal to another, inserts that signal in the middle of the first, or replaces part of the signal at a given instant.
  • a splicer may append one encoded elementary stream at the end of another elementary stream in a program so that they will be presented seamlessly and in sequence.
  • the splicer could insert one program in the middle of another, e.g., in the case of inserting a commercial in the middle of a television show.
  • An editor is a device that edits (modifies) an elementary stream and produces an edited encoded elementary stream. Examples of these devices are described in U.S. Patent Nos. 6,141,447, 6,038,256, 6,094,457, 6,192,083, 6,005,621, 6,229,850, 6,310,915, and 5,859,660.
  • a system and method are described for re-multiplexing elementary streams that are modified by a stream processing device into a stream compliant with a particular standard, such as an MPEG-2 transport stream format.
  • the system may be implemented, for example, within a device such as a transcoder, splicer or editor.
  • Each incoming TS packet entering the system, whether or not it is to be modified, is stamped with its time of arrival (TOA) using a local real-time clock, as well as with its packet number in order of arrival within the full transport stream.
  • the local real-time clock that is used need not be at the same 27 MHz frequency as the encoder clock of the incoming programs to be processed.
  • Transport stream packets containing data to be modified are input to a stream processor, the stream processing algorithm is performed at the elementary stream level, and another sequence of transport stream packets are output.
  • the input arrival time stamps of incoming TS packets that are not modified are left unchanged.
  • A new set of TOA values is calculated for the output transport stream packets using TOA interpolation, based on the TOA values of the transport stream packets in the input transport stream before the content modification.
  • TOA values for pre-determined synchronization points are used to assign TOA values to content-modified transport stream packets through interpolation.
  • These new TOA values can then be used to synchronize the output of data from the re-multiplexer.
  • the output multiplexer implements a simple algorithm which emits each outgoing TS packet after a constant delay past its corresponding arrival time stamp.
  • a compliant MPEG-2 transport stream is delivered.
  • The advantage of the new system and method for re-multiplexing is that it has a significantly lower computational requirement than implementing a full-fledged re-multiplexer. Further, re-multiplexers using the inventive method can operate with only TS packets as input, unlike conventional re-multiplexers that need to accept PES packets as input. This allows simple re-multiplexing implementations that can be used for both modified and unmodified transport streams, enabling the implementation of a single re-multiplexing device that can forward audio-visual programs with or without stream modification into a compliant single- or multi-program MPEG-2 transport stream. A system or apparatus to carry out stream processing and re-multiplexing using the inventive method is also described.
  • FIG. 1 is a schematic view of certain steps for processing raw compressed data into an MPEG-2 transport stream;
  • FIG. 2 is a representation of a TS packet in the transport stream of FIG. 1;
  • FIG. 3 is a block diagram of a system in accordance with an embodiment of the present invention for modifying the content of an incoming transport stream and remultiplexing the modified content into an outgoing transport stream;
  • FIG. 4 shows a series of TS packets from the same elementary stream with their packet count and arrival time, in the incoming transport stream and the location of synchronization points in the packets in accordance with an embodiment of the present invention;
  • FIG. 5 is a flow chart of the initial steps undertaken for carrying out one embodiment of the inventive method with the system of FIG. 3;
  • FIG. 6 is a flow chart of subsequent steps undertaken for carrying out one embodiment of the inventive method with the system of FIG. 3; and
  • FIG. 7 is a flow chart describing the steps performed for calculating TOAs to be stamped onto modified packets in the outgoing transport stream in accordance with an embodiment of the present invention.
  • FIG. 1 depicts, for purposes of illustration, the creation of a Single Program transport stream (SPTS) or a Multi-Program transport stream (MPTS).
  • N Elementary Streams 100, comprising one or more programs, are first packetized into N streams of PES packets 110. Those PES packets are then placed into TS packets 120.
  • SI packets 130 with system information (tables specifying the different programs within the transport stream as well as the PIDs which belong to each program) are also generated.
  • The TS packets 120 and the SI packets 130 are then multiplexed by Multiplexer 140 to generate a transport stream, TS.
  • FIG. 2 illustrates the format of each 188-Byte TS packet 120.
  • Header 121 is four bytes and contains the PID for the TS packet.
  • Some TS packets 120 contain an adaptation field 122 of variable length with PCR and other optional information.
  • the remaining bytes of TS packet 120 contain the Payload 123.
  • FIG. 3 depicts an illustrative System 200 that, in accordance with the invention, accepts compliant MPEG-2 transport streams, processes one or more of the constituent elementary streams via corresponding stream processors, and multiplexes the results to deliver a compliant MPEG-2 transport stream as taught by the invention.
  • Such a system may be implemented using a suitably programmed network of one or more Mediaplex-20™ or Source Media Router™ devices available from SkyStream Networks Inc., a company located in Sunnyvale, California.
  • The basic architectures of these devices are described in U.S. Patent App. Ser. No. 10/159,787 and U.S. Patent 6,351,474, respectively.
  • the illustrated System 200 functionally includes a System Input Subsystem 210, a Table Processor Subsystem 220, a Demultiplexer Subsystem 230, one or more Stream Processor Subsystems 240, one or more Packet Buffers 250, and a Multiplexer Subsystem 260.
  • Each stream processor 240 that modifies the content of a different elementary stream is equipped with a Timing Interpolation capability, which is explained below.
  • FIG. 5 and FIG. 6 are flowcharts for the first embodiment of the invention and illustrate the basic steps performed by the System 200 from the time that a transport stream is received through the time that a modified transport stream is output from System 200.
  • the System Input subsystem 210 receives each incoming TS packet, and stamps it with its packet sequence count and arrival time (TOA).
  • the TOA is determined by looking up a local real-time clock (called RTC), which need not be synchronized to the system time clocks (STC) of any of the programs in the incoming transport stream (in fact the local clock need not even have the same nominal 27 MHz frequency).
  • the packet count is determined in order of arrival, including any MPEG-2 null packets which may be present.
  • the received packets, along with the additional information (TOA and packet count) are sent to the Table Processor Subsystem 220.
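A minimal sketch of this stamping stage, assuming a monotonic local clock (the class and field names are illustrative, not from the patent):

```python
import time
from dataclasses import dataclass
from itertools import count

@dataclass
class StampedPacket:
    data: bytes   # the raw 188-byte TS packet
    toa: float    # time of arrival, read from the local real-time clock (RTC)
    seq: int      # packet count in order of arrival, null packets included

class SystemInput:
    """Stamps each incoming TS packet with its TOA and sequence count.
    The clock need not run at the 27 MHz encoder STC frequency."""
    def __init__(self, clock=time.monotonic):
        self._clock = clock
        self._seq = count()

    def stamp(self, packet: bytes) -> StampedPacket:
        return StampedPacket(packet, self._clock(), next(self._seq))
```

Stamped packets would then flow onward to the table-processing and demultiplexing stages.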
  • The Table Processor 220 determines the different PIDs present in the transport stream by parsing the tables present in the SI packets. Using this information, Table Processor 220 configures the Demultiplexer Subsystem 230 by informing it as to which PIDs are to be sent to which one or more Stream Processors 240, and which PIDs are not to be modified. In this embodiment, PCRs are never output in a packet with a PID corresponding to a modified elementary stream.
  • Table Processor 220 generates a new PCR PID that is different from all other PIDs present in the input transport stream, and modifies the SI tables as shown at step 440.
  • Table Processor 220 inserts these SI packets having modified SI tables into the transport stream accordingly.
  • the Demultiplexer subsystem 230 extracts the PID of each TS packet and determines whether the TS packet is part of a stream that is to be modified. Any packet that is not part of a stream to be modified is sent to a Non-Modified Packet Buffer 250, as shown in step 470.
  • the Non-Modified Packet Buffer 250 is used to hold input TS packets that are not to be modified until the modified TS packets output by Stream Processors 240 are ready for multiplexing with the unmodified packets.
  • If the Demultiplexer subsystem 230 in step 480 encounters a PCR in a packet with a PID that is the same as that of an elementary stream that is to be modified, it extracts and copies the PCR into a new TS packet identified with the new PCR PID generated by the Table Processor 220, fills up the rest of this new PCR-bearing packet with stuffing bytes, and passes this packet to the Non-Modified Packet Buffer 250 as shown in step 490.
  • the PCR is removed from the original TS packet before the latter is forwarded in step 495 to the corresponding Stream Processor 240. All other TS packets that are to be modified and that do not contain PCRs bypass Step 490 and are forwarded directly to the corresponding Stream Processor 240.
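The PCR-only packet built in step 490 can be sketched as follows; the adaptation-field layout follows ISO/IEC 13818-1, and the function name is an assumption for illustration:

```python
def make_pcr_packet(pcr_pid: int, pcr_base: int, pcr_ext: int, cc: int = 0) -> bytes:
    """Build a payload-less 188-byte TS packet that carries only a PCR,
    padded with 0xFF stuffing bytes (as in step 490)."""
    header = bytes([0x47, (pcr_pid >> 8) & 0x1F, pcr_pid & 0xFF, 0x20 | (cc & 0x0F)])
    # Adaptation field: length 183 covers the rest of the packet; PCR_flag (0x10) set.
    pcr48 = (pcr_base << 15) | (0x3F << 9) | pcr_ext  # 33-bit base, 6 reserved bits, 9-bit extension
    adaptation = bytes([183, 0x10]) + pcr48.to_bytes(6, "big")
    return header + adaptation + b"\xff" * (188 - len(header) - len(adaptation))
```

The adaptation_field_control bits in the header (0x20) mark the packet as adaptation-field-only, with no payload.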
  • each Stream Processor 240 receives its corresponding TS packets, extracts the elementary stream, processes the stream according to its specific processing algorithm, and generates new TS packets containing the modified elementary stream payload. Further, according to this invention, it stamps each generated TS packet with a TOA that is as close as possible to the actual TOA which would have been stamped had the modified TS packets been actually received at the input, by interpolating the input TOA values using an interpolation algorithm. These new modified TS packets with the generated TOA values are passed along to the Multiplexer 260.
  • the Multiplexer 260 receives TS packets from the Non-Modified TS Packet Buffer 250, as well as packets from Stream Processors 240. In all cases, TS packets received by the Multiplexer 260 contain corresponding TOA stamps. Using the TOA stamps, the Multiplexer 260 determines the time of departure for each outgoing TS packet using a suitable constant delay model, such as a constant delay model described for MPEG-2. According to this approach, the time of departure (TOD) for each outgoing TS packet is determined as:
  • TOD = TOA + d, (1)
  • where d is the constant delay through the system from the instant of arrival to the instant of departure.
  • Multiplexer 260 might incorporate PCR correction in case actual TS packet departure times differ from the ideal value in Equation (1). Since each outgoing packet thus effectively undergoes a constant delay through the system, the outgoing transport stream will be a compliant MPEG-2 transport stream.
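Equation (1) amounts to the following sketch (a simplified model of the multiplexer's scheduling; the function name is assumed):

```python
def schedule_departures(stamped_packets, d):
    """Constant-delay model of Equation (1): every packet departs at
    TOD = TOA + d, so departure order follows arrival order and the
    inter-packet spacing of the input stream is preserved."""
    return [(toa + d, pkt) for pkt, toa in stamped_packets]
```

Because every packet is shifted by the same constant d, the output stream's timing profile matches the input's, which is what keeps the result MPEG-2 compliant.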
  • Multiplexer 260 may optionally remove TOA stamps after multiplexing.
  • the new transport stream is output.
  • a fundamental problem faced by the Stream Processor 240 in computing output TOAs is to determine how to associate TOA values for outgoing TS packets when the operation of stream processing destroys any connection between input and output bits.
  • Synchronization points in the TS packets are present in both the transport stream input to a stream processor 240 and the bits output from the stream processor 240. Such synchronization points are described in detail in related application Ser. No. referenced above (Attorney Docket No.: 68775-052) and described in some detail below.
  • synchronization points are either physical bit patterns or logical points in the input elementary stream that do not vary under the operation of stream processing, irrespective of any transcoding or splicing. Additional attributes possessed by such synchronization points are that they regularly recur in both the input and output elementary streams, and each such point corresponds to a unique instant in the encoder system time clock (STC).
  • Synchronization points can be physical or virtual. Physical synchronization points consist of actual bit patterns (finite sequences of bits in the elementary stream) which are present in the input as well as the output, and which are a priori associated with a certain presentation time. Examples of these are the well-known start codes or syncwords found in all the international video and audio coding standards such as MPEG-1, MPEG-2, MPEG-4, H.261, H.263 and H.26L. For example, in the case of MPEG-1 and MPEG-2 video, these include the sequence header code, GOP start code, picture start code, slice start code, sequence end code and other extension and user data start codes. MPEG-4 video has equivalents of all these start codes except for the slice start code.
  • All MPEG (1, 2 or 4) based video processing devices that do not alter the frame rate must output one picture start code for each one that is received; hence picture start codes are synchronization points for this application. Further, in the case of MPEG-2 video, all such devices must forward the slice start codes received at the beginning of each row of macroblocks; these provide a denser sequence of synchronization points in addition to picture start codes. In the case of MPEG-1 audio Layers 1 and 2, the syncword at the start of each audio frame provides a dense sequence of synchronization points.
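Physical synchronization points of this kind can be located by scanning the elementary stream for the 3-byte start-code prefix 0x000001; a sketch (function name illustrative):

```python
def find_start_codes(es: bytes):
    """Return (byte_offset, start_code_value) pairs for every MPEG start code
    (prefix 0x00 0x00 0x01) in an elementary stream buffer. For MPEG-1/2
    video, value 0x00 is a picture start code and 0x01-0xAF are slice
    start codes."""
    points = []
    i = es.find(b"\x00\x00\x01")
    while i != -1:
        if i + 3 < len(es):
            points.append((i, es[i + 3]))
        i = es.find(b"\x00\x00\x01", i + 1)
    return points
```

The byte offsets returned by such a scan are exactly what the TOA interpolation below needs as its synchronization-point positions.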
  • In order to carry out TOA interpolation, synchronization points must be selected beforehand during the design of the system. According to one method of selection, the synchronization points are selected such that there is at least one such point in every incoming TS packet carrying the elementary stream to be processed; this would ensure that there is a synchronization point for every incoming TOA stamp.
  • A less demanding method taught by the invention is to select a less frequent sequence of synchronization points and use interpolation to calculate TOA values for outgoing TS packets.
  • FIG. 4 and FIG. 7 illustrate how the inventive method uses the information that System Input subsystem 210 stamps on each TS packet, together with synchronization points, to calculate "arrival times" for modified packets.
  • The first step 510 in the inventive method is to determine the TOA for the start of each synchronization point in the input. This is the instant at which the first byte of the synchronization point entered the system. This step is carried out at the input to the Stream Processor 240.
  • The TOA (TOA_SYNCx) of a given synchronization point, SYNCx 331, is calculated according to Equation (2).
  • the effect of the above calculation is to translate the TOA from the start of the TS packet to the actual byte in the payload corresponding to the start of the synchronization point.
  • This step 510 is carried out at the input to the Stream Processor 240, before the stream undergoes alteration.
  • the crucial advantage achieved in this step is that by definition each synchronization point also appears in the output and thus the TOA is available for these points in the output.
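Equation (2) itself does not survive in the text above, but its stated effect can be sketched under a locally linear arrival model; the per-byte gradient argument here is an assumption of this sketch, not the patent's literal formula:

```python
def sync_point_toa(toa_pkt: float, b_sync: int, toa_per_byte: float) -> float:
    """Translate a TS packet's TOA to the arrival time of the first byte of a
    synchronization point located b_sync bytes into that packet, assuming
    bytes arrive at a locally constant rate (toa_per_byte seconds per byte)."""
    return toa_pkt + b_sync * toa_per_byte
```

The result is a TOA attached to the synchronization point itself rather than to the packet boundary.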
  • the inventive method further teaches how to interpolate, from this sparse sequence of output TOA values, the appropriate TOA values for the start of each outgoing TS packet. This is achieved by first computing the gradient of the TOA (change in TOA per byte) between two successive synchronization points at the output, and using this gradient to stamp TOA values for each outgoing TS packet between these synchronization points.
  • the system next computes the output TOA gradient between every pair of successive synchronization points at the output. This is carried out at the time of output TS packet generation, and consists of two parts.
  • In step 520, the input TOA gradients are calculated as follows:
  • ΔIN_SYNCx = (TOA_SYNCx+1 - TOA_SYNCx) / (BC_SYNCx+1 - BC_SYNCx), (3)
  • where TOA_SYNCx and TOA_SYNCx+1 are the TOA values of two successive synchronization points, as computed using Equation (2), and
  • BC_SYNCx 361 and BC_SYNCx+1 362 are their corresponding byte offsets in the input transport stream, counting from the first byte in the input.
  • The byte offset of any synchronization point in the input may be calculated as
  • BC_SYNC = 188 * N_SYNCPKT + B_SYNC, (4)
  • where N_SYNCPKT is the packet sequence count of the input TS packet in which the synchronization point is contained, and
  • B_SYNC is its byte offset from the start of that packet, as described in the explanation for Equation (2).
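Equations (3) and (4) translate directly into code (function names are illustrative):

```python
def byte_offset(n_syncpkt: int, b_sync: int) -> int:
    """Equation (4): byte offset of a synchronization point within the input
    transport stream, given 188-byte TS packets."""
    return 188 * n_syncpkt + b_sync

def input_toa_gradient(toa_x: float, toa_x1: float, bc_x: int, bc_x1: int) -> float:
    """Equation (3): input TOA gradient (time per byte) between two successive
    synchronization points SYNCx and SYNCx+1."""
    return (toa_x1 - toa_x) / (bc_x1 - bc_x)
```

The gradient has units of seconds per byte and is recomputed for every pair of successive synchronization points.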
  • The ΔIN_SYNC values correspond to the gradient of the TOA at every output synchronization point, counted using input byte counts. But due to the modification of the underlying elementary stream by the stream processing algorithm, the number of input bytes between two synchronization points in the input may not match the number of bytes between the same two synchronization points at the output. To account for this, the required output TOA gradient is computed in step 530 from the input gradient by multiplying the latter by the transmission ratio, which is the ratio of input bits to output bits resulting from the particular stream processing operation that is used. For example, in the case of transrating (reduction of bit rate), the transmission ratio would be equal to or greater than unity.
  • ΔOUT_SYNCx = α_SYNCx * ΔIN_SYNCx, (5)
  • where α_SYNCx is the transmission ratio at the synchronization point SYNCx, i.e., the ratio of the byte count between SYNCx and the subsequent synchronization point in the input to the corresponding byte count between the same two points in the output.
  • The invention teaches that, ideally, the value of α_SYNC should be recomputed for every synchronization point. However, the invention also teaches a less restrictive approach in which it is recomputed only once for every suitably defined group of synchronization points. For example, in the case of video transcoding, all the synchronization points in a picture can have the same value of α_SYNC, calculated using the input and output byte counts of that picture.
  • where BOUT_SYNC is the output byte offset of the synchronization point from the start of the packet.
  • TOA_PKT is extrapolated from the TOA of the preceding output TS packet containing a synchronization point (TOA_SYNCPKT), using the output TOA gradient:
  • TOA_PKT = TOA_SYNCPKT + 188 * N_PKT * ΔOUT_SYNC, (7)
  • where N_PKT is the distance (in output packet counts) of this TS packet from the last output TS packet containing a synchronization point.
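Equations (5) and (7) together yield the TOA stamps for outgoing packets; a sketch with illustrative names (the ratio argument is the transmission ratio of input bytes to output bytes):

```python
def output_toa_gradient(delta_in: float, transmission_ratio: float) -> float:
    """Equation (5): scale the input TOA gradient by the transmission ratio
    (input byte count / output byte count between the same two sync points)."""
    return transmission_ratio * delta_in

def extrapolate_packet_toa(toa_syncpkt: float, n_pkt: int, delta_out: float) -> float:
    """Equation (7): TOA stamp for an output TS packet that lies n_pkt packets
    after the last output packet containing a synchronization point."""
    return toa_syncpkt + 188 * n_pkt * delta_out
```

For transrating down to a lower bit rate the ratio exceeds unity, so the output gradient, and hence the spacing of output TOA stamps, stretches accordingly.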
  • Synchronization points are points or locations within a stream that can be used as a basis for identifying locations near which incoming ancillary data, such as PCR and TOA stamps, should be located in a new transport stream carrying a processed version of the incoming elementary stream.
  • synchronization points are locations in the elementary stream which are known to bear a clear and fixed timing relationship with the system time clock of the program comprising the elementary stream and therefore can serve as a basis for retiming or re- synchronizing ancillary data to the system time clock in a sufficiently accurate fashion.
  • ancillary data is initially located within the original systems layer stream, which in the embodiments discussed above is the transport stream, in a certain vicinity of a specific identifiable synchronization point in the elementary stream, prior to stream processing.
  • this ancillary data should be located within the new transport stream (more generally, the new systems layer stream) in a similar vicinity to the same synchronization point of the stream-processed elementary stream.
  • the same synchronization point must be present in the elementary stream both before stream processing and after stream processing.
  • (c) Continual Recurrence In The Elementary Stream: Generally, ancillary data is expected to recur continually throughout the systems layer stream, or at least the sequence carrying the processed elementary stream. Likewise, the type of synchronization point chosen for use in the invention should also continually recur within the processed elementary stream. In other words, over the course of time, so long as information is being carried in the systems layer stream for the elementary stream to be stream processed, and so long as there is ancillary data to be retimed or re-synchronized, one should also expect to find synchronization points in the elementary stream. Otherwise, such candidate synchronization points cannot provide a suitable reference by which to relate the ancillary data.
  • a type of synchronization point that occurs frequently within the elementary stream.
  • the higher the frequency of occurrence of the synchronization point, the more accurate will be the retiming or re-synchronizing of the ancillary data in the new transport stream carrying the processed elementary stream.
  • two successive synchronization points define a temporal locale, which is a portion of an elementary stream corresponding to an elapsed duration in time of the system time clock of the program of which the elementary stream is a component.
  • ancillary data occurring in a given temporal locale (between two synchronization points) of an input systems layer stream is gathered prior to processing the systems layer stream, and the specific temporal locale in which the ancillary data was gathered is noted.
  • the corresponding temporal locale in the processed elementary stream is located, and the ancillary data is inserted into the new systems layer stream, containing the processed elementary stream, at that identified temporal locale.
  • the amount of elementary stream data in a given temporal locale may change as a result of the stream processing.
  • the precise corresponding time of the systems time clock at which ancillary data may be inserted into the new systems layer stream will be different than the original time of the systems time clock of the location within the original systems layer stream from which the ancillary data was extracted.
  • This difference introduces an error or drift in the synchronism of the ancillary data relative to the original timing of such ancillary data in the systems layer stream before processing. It is desired to maintain such a synchronism error or drift within a tolerable range.
  • ancillary data located in the original systems layer stream at one end of a temporal locale is inserted into the new processed systems layer stream at the opposite end of the temporal locale (e.g., the earliest time, or beginning of the temporal locale).
  • the maximum error or drift in synchronism is approximately equal to the duration of the temporal locale.
  • the frequency of occurrence of the type of synchronization point is at least equal to the frequency of occurrence of the ancillary data to be retimed or re-synchronized.
  • a physical synchronization point which corresponds to a predefined, unvarying sequence of bits or code which can be identified in the bitstream.
  • any start code can serve as a synchronization point
  • each start code is a 32-bit code comprising a 24-bit start code prefix (twenty-three '0' bits followed by a single '1' bit, i.e., 0000 0000 0000 0000 0000 0001) followed by one byte that distinguishes the type of start code from each other type.
  • the following are examples of MPEG-2 video start codes and the distinguishing byte values that identify them: the picture_start_code (0x00), the slice_start_codes (0x01 through 0xAF), and the group_start_code (0xB8).
  • the group_start_code, the picture_start_code and the slice_start_code are typically good candidates for use as synchronization points.
  • the group_start_code immediately precedes a group of pictures (GOP) within the video elementary stream.
  • GOPs are "entry points," i.e., random access points, at which a decoder can arbitrarily start decoding, e.g., in a trick mode operation (jump, fast forward, rewind, etc.).
  • Such an entry point may also be used by a decoder when it is powered on, or otherwise caused to tune to, a systems layer stream which is already in the middle of transfer.
  • the picture_start_code is required by MPEG-1, MPEG-2 and MPEG-4 (and optional in MPEG-4 part 10) to be present at the start of each encoded video picture. Depending on the type of stream processing, this start code will also be present in the video elementary stream after stream processing. Also, this start code is synchronized to the start of a video picture and therefore coincides with the true decoding time and presentation time of the picture (whether or not DTSs or PTSs representing the decoding time and/or presentation time are present in the systems layer stream). Generally speaking, picture_start_codes will occur at a higher frequency than group_start_codes. The slice_start_code is also a good candidate.
  • the slice_start_code is provided at the beginning of a slice, which (according to MPEG-1 and MPEG-2) includes all or part of the macroblocks of a given macroblock row of a video picture. (According to H.264, a slice can span more than one macroblock row.)
  • the particular macroblock row to which the slice_start_code pertains can be easily determined using a well-defined formula. Therefore, the slice_start_code coincides with the time of presentation of a decoded version of the corresponding slice location in the video picture. Generally speaking, slice_start_codes will occur at a much higher frequency than picture_start_codes.
  • a device that parses the elementary stream can determine the particular horizontal offset within the macroblock row at which the slice occurs. Therefore, the correspondence of the slice to the display time of information represented by the slice can be determined.
  • picture start codes might not occur frequently enough to provide a sufficiently accurate reference by which ancillary data, such as PCRs, can be resynchronized.
  • ancillary data such as PCRs
  • a virtual synchronization point might not correspond to a very explicitly predetermined code or sequence of bits. Rather, a virtual synchronization point might correspond to a bit, or sequence of bits, representing a well-defined, deterministically identifiable layer of the elementary stream, which may start with an arbitrary bit pattern not known ahead of time.
  • MPEG-2 video slices contain individual macroblocks, and each macroblock starts with a variable length code indicating the macroblock address increment.
  • the variable length code representing the macroblock address increment is chosen from a table of multiple macroblock address increment codes. Such a variable length code can be easily identified, but it is not known ahead of time which specific one will be encountered; the specific code encountered will depend on the number of skipped macroblocks between the last encoded macroblock and the current encoded macroblock. Nevertheless, the location of the macroblock in a video picture can be determined with absolute accuracy, and therefore so can the corresponding display time of the macroblock. Therefore, the start of a macroblock can provide a very effective virtual synchronization point because macroblock starts generally occur at an even higher frequency than slices. As stream processing can include any combination of transcoding, editing or splicing, the amount of information in an elementary stream between two successive synchronization points may be changed.
  • the amount of information: (a) in a video picture, between video picture start codes; (b) in a slice, between slice start codes; or (c) in a sequence of one or more macroblocks, between successive macroblock address increment codes, can be changed.
  • the amount of elementary stream information between the picture start code of the original video picture preceding the insert, and the picture start code of the original video picture following the insert will increase. Nevertheless, the synchronization points will survive the stream processing operation.
  • systems layer stream information that was temporally located at a particular vicinity of one synchronization point in the original elementary stream should be temporally located as close as possible to that same synchronization point in the new systems layer stream containing the processed elementary stream.
  • the choice of synchronization point type(s) to be used is predetermined and remains fixed during operation.
  • the choice of synchronization point type may be chosen by an operator or automatically selected by the system according to the invention.
  • automatic adaptation is not only attractive (to minimize operator training and dependence) but also feasible. The reason is that the stream processor, and other devices that work with it, must be able to parse the incoming systems layer and elementary streams as well as to format them.
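The TOA interpolation described in the bullets above, per equations (5) and (7), can be sketched as follows. This is an illustrative sketch rather than the patented implementation; the function names, the floating-point tick representation, and the example numbers are assumptions not taken from the specification.

```python
# Sketch of output TOA interpolation per equations (5) and (7).
# Assumption: TOA values and gradients are expressed in ticks of the
# local real-time clock, with the gradient given per output byte.

TS_PACKET_SIZE = 188  # fixed MPEG-2 TS packet size in bytes

def output_toa_gradient(delta_in_sync: float, in_bytes: int, out_bytes: int) -> float:
    """Equation (5): Delta_OUT,SYNC = rho_SYNC * Delta_IN,SYNC, where
    rho_SYNC is the transmission ratio (input bytes over output bytes
    between two successive synchronization points)."""
    rho_sync = in_bytes / out_bytes
    return rho_sync * delta_in_sync

def extrapolate_packet_toa(toa_sync_pkt: float, n_pkt: int, delta_out_sync: float) -> float:
    """Equation (7): TOA_PKT = TOA_SYNCPKT + 188 * N_PKT * Delta_OUT,SYNC,
    where N_PKT is the packet-count distance of this output TS packet
    from the last output TS packet containing a synchronization point."""
    return toa_sync_pkt + TS_PACKET_SIZE * n_pkt * delta_out_sync

# Example: transrating halves the byte count between two sync points,
# so rho = 2 and the per-byte output gradient doubles.
d_out = output_toa_gradient(delta_in_sync=0.5, in_bytes=2000, out_bytes=1000)
toa = extrapolate_packet_toa(toa_sync_pkt=100000.0, n_pkt=3, delta_out_sync=d_out)
```

Consistent with the bullets above, a transmission ratio of at least unity corresponds to transrating (bit rate reduction), which stretches the output TOA gradient per byte.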

Abstract

A system and method are provided for revising the time stamp information in an MPEG-2 encoded data transport stream after content in the transport stream is modified. Incoming packets are stamped with arrival times and packet sequence counts (210). Synchronization points within the elementary stream are identified, and the arrival times of those synchronization points are calculated. After the elementary streams are modified, commensurate arrival times are calculated based on the arrival times and packet sequence counts of the incoming packets, as well as the arrival times of the identified synchronization points within the elementary stream and the bit ratio between the incoming packets and the modified outgoing packets (240). The calculated arrival times are stamped on the outgoing packets and used to time the output of the outgoing stream (240).

Description

[0001]
[0002] METHOD AND SYSTEM FOR RE-MULTIPLEXING OF CONTENT-MODIFIED MPEG-2 TRANSPORT STREAMS USING INTERPOLATION OF PACKET ARRIVAL TIMES
[0003] Related Applications The subject matter of this application is related to the subject matter of the following U.S. patent applications, all of which are commonly assigned to the same assignee as is this application: (1) U.S. Patent Application Ser. No. 10/640,872, (Docket No.: 68775-049) filed concurrently herewith for Jeyendran Balakrishnan and Shu Xiao and entitled Method And System For Modeling The Relationship Of The Bit Rate Of A Transport Stream And The Bit Rate Of An Elementary Stream Carried Therein; (2) U.S. Patent Application Ser. No. 10/641,322, (Docket No.: 68775-050) filed concurrently herewith for Jeyendran Balakrishnan and Shu Xiao and entitled Model And Model Update Technique In A System For Modeling The Relationship Of The Bit Rate Of A Transport Stream And The Bit Rate Of An Elementary Stream Carried Therein; (3) U.S. Patent Application Ser. No. 10/640,871, (Docket No.: 68775-051) filed concurrently herewith for Jeyendran Balakrishnan and Hemant Malhotra and entitled Method And System For Re-Multiplexing Of Content-Modified MPEG-2 Transport Streams Using PCR Interpolation; (4) U.S. Patent Application Ser. No. 10/641,323, (Docket No.: 68775-052) filed concurrently herewith for Jeyendran Balakrishnan and Hemant Malhotra and entitled Method and System for Time-Synchronized Forwarding of Ancillary Information in Stream Processed MPEG-2 Systems Streams; and (5) U.S. Patent Application Ser. No. 10/640,866, (Docket No.: 68775-055) filed concurrently herewith for Jeyendran Balakrishnan and Hemant Malhotra and entitled Method and System for Re-multiplexing of Content Modified MPEG-2 Transport Streams using Interpolation of Packet Arrival Times.
The contents of the above-listed patent applications are incorporated herein by reference.
[0004] Field of the Invention
[0005] The present invention pertains to signals that are hierarchically organized into a systems layer stream and a lower layered elementary stream, where an elementary stream is streamed information of a component of a program, such as an audio signal or a video signal. An example of a systems layer stream is a transport stream. In particular, the invention pertains to selectively modifying one or more portions of an elementary stream and inserting the modified portions of the elementary stream into a modified systems layer stream. The modified systems layer stream is configured so as to enable identification, extraction and real-time reproduction of its various portions.
[0006] Background of the Invention
[0007] This invention is described in the context of audio-video programs, which include at least one audio signal or one video signal. However, those of ordinary skill in the art will appreciate the applicability of this invention to other types of program signals. [0008] A program signal is composed of one or more component signals referred to herein as elementary streams. An example of an elementary stream can be one (natural or synthetic) audio signal, one (natural or synthetic) video signal, one closed captioning text signal, one private data signal, etc. Several techniques are known for compressing, formatting, storing and conveying such elementary streams. For example, the MPEG-1, MPEG-2, MPEG-4, H.263, H.263++, H.26L, and H.264/MPEG-4 AVC standards provide well-known techniques for encoding (compressing and formatting) video. Likewise, MPEG-1 (including the so-called "MP3"), MPEG-2, MPEG-4 and Dolby AC-3 provide techniques for encoding audio.
[0009] In addition, there are several known techniques for combining elementary streams for storage or transmission. MPEG-2 defines a technique for segmenting each elementary stream into packetized elementary stream ("PES") packets, where each PES packet includes a PES packet header and a segment of the elementary stream as the payload. PES packets, in turn, may be combined with "pack headers" and other pack specific information to form "packs". Alternatively, the PES packets may be segmented into transport packets of a transport stream, where each transport packet has a transport packet header and a portion of a PES packet as payload. These transport packets, as well as others (e.g., transport packets carrying program specific information or DVB systems information, entitlement management messages, entitlement control messages, other private data, null transport packets, etc.) are serially combined to form a transport stream.
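As a rough illustration of the PES-to-TS segmentation just described, the sketch below splits a PES packet into the 184-byte payloads that fit after each fixed 4-byte TS packet header. The function name is an illustrative assumption; real packetizers additionally insert adaptation-field stuffing so that the final, short chunk still yields a full 188-byte packet.

```python
TS_PACKET_SIZE = 188      # fixed TS packet size (bytes)
TS_HEADER_SIZE = 4        # fixed TS packet header (bytes)
TS_PAYLOAD_SIZE = TS_PACKET_SIZE - TS_HEADER_SIZE  # 184 bytes of payload

def segment_pes_packet(pes_packet: bytes) -> list:
    """Split a PES packet into payload-sized chunks, one per TS packet.
    The last chunk may be short; a real multiplexer would pad it out
    with an adaptation field so every TS packet is exactly 188 bytes."""
    return [pes_packet[i:i + TS_PAYLOAD_SIZE]
            for i in range(0, len(pes_packet), TS_PAYLOAD_SIZE)]

chunks = segment_pes_packet(bytes(400))  # a 400-byte PES packet
```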
[0010] In another known technique according to MPEG-4 systems, elementary streams may be divided into "sync-layer" (or "SL") packets, including SL packet headers. SL packets may be combined with PES packet headers to form PES packets, and these PES packets may be segmented and combined with transport packet headers to form transport packets. According to another technique, transport packets are not used. Rather, elementary stream data is segmented and real-time protocol ("RTP") packet headers are appended to each segment to form RTP packets. In addition, or instead, user datagram protocol ("UDP") or transmission control protocol ("TCP") packet headers may be appended to segmented data to form UDP or TCP packets. Many combinations of the above are possible, including formatting the elementary streams into SL packets first and then formatting the SL packets into RTP packets, encapsulating transport packets into TCP packets according to the so-called multi-protocol encapsulation ("MPE"), etc. The MPEG-2 PES and transport streams encapsulating MPEG-2 video will be used herein as a model for illustrating the invention. Also, this invention is illustrated using a hierarchical signal, wherein elementary streams are carried as segments in packets or cells of one or more higher layers. The term "systems layer" is herein used to refer to such higher layers. The MPEG-2 PES streams and transport streams will be used as a specific example of the systems layer. However, those skilled in the art will appreciate that other kinds of hierarchical layers may be used interchangeably as the systems layer for the elementary stream, such as the SL layer, the RTP layer, etc. Furthermore, "systems layer" need not be restricted to the "transport layer" according to the OSI seven layer model but can, if desired, include other layers such as the network layer (e.g., internet protocol or "IP"), the data link layer (e.g., ATM, etc.) and/or the physical layer.
Also, other types of elementary streams, such as encoded audio, MPEG-4 video, etc. may be used. In addition, the term "transmission" is used herein but should be understood to mean the transfer of information under appropriate circumstances via a communications medium or storage medium to another device, such as an intermediate device or a receiver/decoder.
[0012] Audio-visual programs are obtained by using an appropriate combination of one or more elementary streams for storage or transmission of data. For example, one audio elementary stream and one video elementary stream may be combined, or one video elementary stream and multiple audio elementary streams may be combined. The transport stream format enables both single program transport streams (SPTS) in which the elementary streams of a single audio-visual program are multiplexed together into a serial stream, and multiple program transport streams (MPTS), in which the component elementary streams of multiple audio-visual programs are all multiplexed together into a single serial stream.
[0013] Referring to FIG. 1, to form a transport stream, each of N elementary streams 100 (including ES1, ES2, through ESN) is first packetized into N streams of packetized elementary stream (PES) packets 110, independent of its underlying compression format. Each PES packet is comprised of a PES packet header and a segment of a single elementary stream as a payload, which contains data for only a single elementary stream. However, a PES packet may contain data for more than one decoding unit (e.g., data for more than one compressed picture or for more than one compressed audio frame). A variety of packetization strategies for forming PES packets from an elementary stream are permitted.
[0014] PES packets from each elementary stream are further packetized into fixed size (188 byte) Transport Stream (TS) packets 120. Each TS packet 120, as shown in FIG. 2, consists of a fixed 4-byte Packet Header 121, an optional Adaptation Field 122 of variable length, and the remaining bytes containing the PES packet data as Payload 123. The fixed Packet Header 121 contains a field called Packet IDentifier (PID), which is a unique numeric identifier or tag for each Elementary Stream 100 carried in a Transport Stream 120. For example, one PID is assigned to a video ES of a particular program, a second, different PID is assigned to the audio ES of a particular program, etc.
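A minimal parse of the fixed 4-byte TS packet header just described might look like the following. The helper name is an illustrative assumption, but the bit layout (sync byte 0x47, 13-bit PID, adaptation field control, 4-bit continuity counter) follows the MPEG-2 Systems specification.

```python
def parse_ts_header(pkt: bytes):
    """Decode the fixed 4-byte TS packet header: returns the 13-bit
    PID, the 4-bit continuity counter, and whether an adaptation
    field is present."""
    if len(pkt) < 4 or pkt[0] != 0x47:
        raise ValueError("not an MPEG-2 TS packet")
    pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit packet identifier
    continuity = pkt[3] & 0x0F              # 4-bit continuity counter
    has_adaptation = bool(pkt[3] & 0x20)    # adaptation_field_control bit
    return pid, continuity, has_adaptation

# A header carrying the null-packet PID 0x1FFF, payload only:
pid, cc, af = parse_ts_header(bytes([0x47, 0x1F, 0xFF, 0x10]))
```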
[0015] TS packets 120 from multiple underlying elementary streams 100 are then multiplexed together according to the rules for transport streams set forth in the MPEG-2 Systems specification. This includes insertion of special TS packets 130 containing System Information (SI), which include tables specifying the different programs within the transport stream as well as the PIDs which belong to each program. Thus the transport stream format consists of a lower compression layer, comprising the component elementary streams, and a higher systems layer, comprising the PES and TS packets.
[0016] The systems layer contains important timing information which enables the receiver to play back the audio-visual information in a time-synchronized manner. The PES packet header contains a Presentation Time Stamp (PTS) which indicates the time instant at which the associated audio or video presentation unit (an audio or video frame) of a given audio-visual program should be decoded and presented to the user. This PTS is relative to the System Time Clock used by the transmitting encoder. The TS packets also carry samples of this encoder clock, called Program Clock References (PCR), in a quasi-periodic manner to enable the receiver to synchronize its clock to that of the encoder. This enables the receiver to decompress and present the audio and video data at the correct times, thereby recreating the original presentation. [0017] A requirement for MPEG-2 transport streams is that the PCR for each program must be sent at least once every 100 ms. In the case of the DVB extension (Specification of Service Information (SI) in DVB Systems, ETSI Standard EN 300 468, May 2000) to MPEG-2, these PCR packets are to be sent at least once every 40 ms. PCR information, along with other optional information, is carried in the TS packet inside the Adaptation Field 122. The PCRs for a given program can be carried in the TS packets carrying any one of the component elementary streams 100 of that program (as identified by its PID), or they can be carried in separate TS packets with a unique PCR PID. Typically PCRs are carried in the video PID of a program.
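A sketch of reading a PCR out of the Adaptation Field 122 follows, assuming a well-formed packet. The field offsets follow the MPEG-2 Systems layout (33-bit base, 6 reserved bits, 9-bit extension), with the PCR in 27 MHz units computed as base * 300 + extension; the function name is illustrative.

```python
def extract_pcr(pkt: bytes):
    """Return the PCR carried in a TS packet's adaptation field, in
    27 MHz ticks, or None if the packet carries no PCR."""
    if len(pkt) != 188 or pkt[0] != 0x47:
        return None
    if not (pkt[3] & 0x20):                 # no adaptation field present
        return None
    if pkt[4] < 7 or not (pkt[5] & 0x10):   # too short, or PCR_flag clear
        return None
    # 33-bit program_clock_reference_base spans bytes 6..10
    base = ((pkt[6] << 25) | (pkt[7] << 17) | (pkt[8] << 9)
            | (pkt[9] << 1) | (pkt[10] >> 7))
    # 9-bit extension: low bit of byte 10 plus byte 11
    ext = ((pkt[10] & 0x01) << 8) | pkt[11]
    return base * 300 + ext

# Build a packet whose PCR base is 1 and extension is 1 (PCR = 301):
pkt = bytearray(188)
pkt[0], pkt[3], pkt[4], pkt[5] = 0x47, 0x20, 7, 0x10
pkt[10], pkt[11] = 0x80, 0x01
```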
[0018] In the MPEG-2 context, there are many applications that require one or more audio-visual programs carried inside an MPEG-2 transport stream to be modified at the elementary stream level, using stream processing devices. The prior art teaches a number of "stream processors" or devices, such as transcoders, editors and splicers, that process previously generated transport streams. A transcoder receives an already encoded elementary stream and re-encodes it, e.g., at a different bit rate, according to a different encoding standard, at a different resolution, using different encoding options, changing the audio sampling rate or video frame rate, etc., while maintaining the underlying content with as much fidelity as possible. A splicer is a device that appends one signal to another, inserts that signal in the middle of the first, or replaces part of the signal at a given instant. For example, a splicer may append one encoded elementary stream at the end of another elementary stream in a program so that they will be presented seamlessly and in sequence. Alternatively, the splicer could insert one program in the middle of another, e.g., in the case of inserting a commercial in the middle of a television show. An editor is a device that edits (modifies) an elementary stream and produces an edited encoded elementary stream. Examples of these devices are described in U.S. Patent Nos. 6,141,447, 6,038,256, 6,094,457, 6,192,083, 6,005,621, 6,229,850, 6,310,915, and 5,859,660.
[0019] In such stream processing, the underlying bit positions of various parts of the elementary stream have been changed. For instance, video or audio transcoding tends to change the amount of information (number of bits) needed to represent each presentable portion of the video or audio. This is especially true for a transcoder that changes the bit rate of the output signal but is also true of a transcoder which, for example, re-encodes the elementary stream according to a different standard from the one in which it was originally prepared. Likewise, a splice or edit tends to change the relative location of two points (namely, the end point of the original encoded video signal portion that precedes the inserted elementary stream information and the beginning point of the original encoded video signal portion that follows the inserted elementary stream information) in the originally encoded video signal. Therefore, the modified elementary streams must be re-packetized and re-multiplexed into a syntax-compliant transport stream for serial transmission.
[0020] One of the critical requirements in transport stream output packetization and delivery is that the inherent information content in the outgoing elementary streams retain the same timing relationship as that of the input. This is required to enable the receiver to play back the underlying audio-visual presentation in a time-synchronized manner. Since the relationship between input and output elementary stream bits is invalidated by the process of stream processing, the output packetization process must somehow re-create the original timing relationship. [0021] Existing approaches to this problem address this by using a full-fledged multiplexer at the output. This involves first recovering the original encoder clock for each modified program using clock recovery techniques like phase locked loops. Thereafter, the presentation times and decoding times of each outgoing audio or video frame are determined, re-stamped and inserted into the PES packets, and each outgoing TS packet is emitted in a manner that complies with the T-STD buffer model. Finally, PCR values are inserted into the emitted TS packets at the required frequency by looking up the recovered encoder clock at the instant of departure of the PCR-bearing TS packets. Since the timing information is completely regenerated and inserted, non-modified elementary streams in any processed program need to be de-packetized to their elementary stream levels, re-packetized, and re-transmitted. All these tasks, especially the need to obey T-STD buffer model requirements, impose a large implementation overhead, thereby increasing the complexity and cost of the stream processing system.
[0022] Summary of the Invention
[0023] It is therefore an object of this invention to provide simplified methods for generating timing information to be included in a content-modified transport stream.
[0024] In accordance with a first embodiment of the invention, a system and method are described for re-multiplexing elementary streams that are modified by a stream processing device into a stream compliant with a particular standard, such as the MPEG-2 transport stream format. The system may be implemented, for example, within a device such as a transcoder, splicer or editor. Each incoming TS packet entering the system, whether or not it is to be modified, is stamped with its time of arrival (TOA) using a local real-time clock, as well as its packet number in order of arrival within the full transport stream. The local real-time clock that is used need not be at the same 27 MHz frequency as the encoder clock of the incoming programs to be processed. Transport stream packets containing data to be modified are input to a stream processor, the stream processing algorithm is performed at the elementary stream level, and another sequence of transport stream packets is output. The input arrival time stamps of incoming TS packets that are not modified are left unchanged.
[0025] Before outputting transport stream packets with modified data, a new set of TOA values are calculated for the output transport stream packets using TOA interpolation based on the TOA values in the transport stream packets in the input transport stream before the content modification. In particular, TOA values for pre-determined synchronization points are used to assign TOA values to content-modified transport stream packets through interpolation. These new TOA values can then be used to synchronize the output of data from the re-multiplexer. With TOA stamps now available for all outgoing TS packets, whether modified or unmodified, the output multiplexer implements a simple algorithm which emits each outgoing TS packet after a constant delay past its corresponding arrival time stamp. Thus, a compliant MPEG-2 transport stream is delivered.
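The output multiplexer's constant-delay emission rule described above can be sketched as follows; the delay constant and function name are illustrative assumptions, the essential point being that every outgoing TS packet departs a fixed interval after its (original or interpolated) arrival-time stamp.

```python
import heapq

CONST_DELAY = 5_000_000  # illustrative constant delay, in local-clock ticks

def schedule_departures(stamped_packets):
    """Given (toa, packet) pairs for all outgoing TS packets, modified
    or unmodified, assign each a departure time equal to its arrival
    time stamp plus a constant delay, and return the packets in
    departure order."""
    heap = [(toa + CONST_DELAY, seq, pkt)
            for seq, (toa, pkt) in enumerate(stamped_packets)]
    heapq.heapify(heap)
    out = []
    while heap:
        depart, _, pkt = heapq.heappop(heap)
        out.append((depart, pkt))
    return out

sched = schedule_departures([(10, b"A"), (5, b"B")])
```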
[0026] The advantage of the new system and method for re-multiplexing is that it has a significantly lower computational requirement than implementing a full-fledged re- multiplexer. Further, re-multiplexers using the inventive method can operate with only TS packets as input, unlike conventional re-multiplexers that need to accept PES packets as input. This allows simple re-multiplexing implementations that can be used for both modified and unmodified transport streams, enabling the implementation of a single re- multiplexing device that can forward audio-visual programs with or without stream modification into a compliant single or multi program MPEG-2 transport stream. A system or apparatus to carry out stream processing and re-multiplexing using the inventive method is also described.
[0027] Brief Description of the Drawings
[0028] FIG. 1 is a schematic view of certain steps for processing raw compressed data into an MPEG-2 transport stream;
[0029] FIG. 2 is a representation of a TS packet in the transport stream of FIG. 1;
[0030] FIG. 3 is a block diagram of a system in accordance with an embodiment of the present invention for modifying the content of an incoming transport stream and remultiplexing the modified content into an outgoing transport stream;
[0031] FIG. 4 shows a series of TS packets from the same elementary stream, with their packet count and arrival time, in the incoming transport stream, and the location of synchronization points in the packets in accordance with an embodiment of the present invention;
[0032] FIG. 5 is a flow chart of the initial steps undertaken for carrying out one embodiment of the inventive method with the system of FIG. 3;
[0033] FIG. 6 is a flow chart of subsequent steps undertaken for carrying out one embodiment of the inventive method with the system of FIG. 3; and
[0034] FIG. 7 is a flow chart describing the steps performed for calculating TOAs to be stamped onto modified packets in the outgoing transport stream in accordance with an embodiment of the present invention.
[0035] Detailed Description of the Preferred Embodiments
[0036] FIG. 1 depicts, for purposes of illustration, the creation of a Single Program transport stream (SPTS) or a Multi-Program transport stream (MPTS). N Elementary Streams 100, comprising one or more programs, are first packetized into N streams of PES packets 110. Those PES packets are then placed into TS packets 120. SI packets 130 with system information, i.e., tables specifying the different programs within the transport stream as well as the PIDs which belong to each program, are also generated. The TS packets 120 and the SI packets 130 are then multiplexed by Multiplexer 140 to generate a transport stream, TS.
[0037] FIG. 2 illustrates the format of each 188-Byte TS packet 120. Header 121 is four bytes and contains the PID for the TS packet. Some TS packets 120 contain an adaptation field 122 of variable length with PCR and other optional information. The remaining bytes of TS packet 120 contain the Payload 123.
[0038] FIG. 3 depicts an illustrative System 200 that, in accordance with the invention, accepts compliant MPEG-2 transport streams, processes one or more of the constituent elementary streams via corresponding stream processors, and multiplexes the results to deliver a compliant MPEG-2 transport stream as taught by the invention. Illustratively, such a system may be implemented using a suitably programmed network of one or more Mediaplex-20™ or Source Media Routers™ available from SkyStream Networks Inc., a company located in Sunnyvale, California. The basic architectures of these devices are described in U.S. Patent App. Ser. No. 10/159,787 and U.S. Patent 6,351,474, respectively.
[0039] The illustrated System 200 functionally includes a System Input Subsystem 210, a Table Processor Subsystem 220, a Demultiplexer Subsystem 230, one or more Stream Processor Subsystems 240, one or more Packet Buffers 250, and a Multiplexer Subsystem 260. Each stream processor 240 that modifies the content of a different elementary stream is equipped with a Timing Interpolation capability, which is explained below.
[0040] FIG. 5 and FIG. 6 are flowcharts for the first embodiment of the invention and illustrate the basic steps performed by the System 200 from the time that a transport stream is received through the time that a modified transport stream is output from System 200. At step 410 the System Input subsystem 210 receives each incoming TS packet, and stamps it with its packet sequence count and arrival time (TOA). The TOA is determined by looking up a local real-time clock (called RTC), which need not be synchronized to the system time clocks (STC) of any of the programs in the incoming transport stream (in fact the local clock need not even have the same nominal 27 MHz frequency). The packet count is determined in order of arrival, including any MPEG-2 null packets which may be present. The received packets, along with the additional information (TOA and packet count), are sent to the Table Processor Subsystem 220.
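The stamping performed in step 410 can be sketched as follows. This is an illustrative Python fragment, not part of the disclosure; the names `stamp_packets` and `read_rtc` are the editor's assumptions.

```python
# Hedged sketch of step 410: stamp each arriving 188-byte TS packet (including
# null packets) with a sequence count and a time of arrival (TOA) read from a
# free-running local clock. The clock need not be synchronized to any program
# STC, so any monotonic counter serves as read_rtc.

def stamp_packets(packets, read_rtc):
    """packets: iterable of 188-byte TS packets, in arrival order;
    read_rtc: callable returning the current local clock value.
    Returns a list of (packet_count, toa, packet) tuples."""
    stamped = []
    for count, pkt in enumerate(packets):
        stamped.append((count, read_rtc(), pkt))
    return stamped
```

The (count, toa) pair attached here is exactly the per-packet information that the interpolation equations below consume.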
[0041] At step 420, the Table Processor 220 determines the different PIDs present in the transport stream by parsing the tables present in the SI packets. Using this information, Table Processor 220 configures the Demultiplexer Subsystem 230 by informing it as to which PIDs are to be sent to which one or more Stream Processors 240, and which PIDs are not to be modified. In this embodiment, PCRs are never output in a packet with a PID corresponding to a modified elementary stream. If, at step 430, the Table Processor 220 determines that an incoming PCR PID is the same as that of an elementary stream to be modified, Table Processor 220 generates a new PCR PID that is different from all other PIDs present in the input transport stream, and modifies the SI tables as shown at step 440. At step 450, Table Processor 220 inserts these SI packets having modified SI tables into the transport stream accordingly.
[0042] At step 460, the Demultiplexer subsystem 230 extracts the PID of each TS packet and determines whether the TS packet is part of a stream that is to be modified. Any packet that is not part of a stream to be modified is sent to a Non-Modified Packet Buffer 250, as shown in step 470. The Non-Modified Packet Buffer 250 is used to hold input TS packets that are not to be modified until the modified TS packets output by Stream Processors 240 are ready for multiplexing with the unmodified packets.
[0043] If the Demultiplexer subsystem 230 in step 480 encounters a PCR in a packet with a PID that is the same as that of an elementary stream that is to be modified, it extracts and copies the PCR into a new TS packet identified with the new PCR PID generated by the Table Processor 220, fills up the rest of this new PCR-bearing packet with stuffing bytes, and passes this packet to the Non-Modified Packet Buffer 250 as shown in step 490. The PCR is removed from the original TS packet before the latter is forwarded in step 495 to the corresponding Stream Processor 240. All other TS packets that are to be modified and that do not contain PCRs bypass Step 490 and are forwarded directly to the corresponding Stream Processor 240.
[0044] At step 500, each Stream Processor 240 receives its corresponding TS packets, extracts the elementary stream, processes the stream according to its specific processing algorithm, and generates new TS packets containing the modified elementary stream payload. Further, according to this invention, it stamps each generated TS packet with a TOA that is as close as possible to the actual TOA which would have been stamped had the modified TS packets been actually received at the input, by interpolating the input TOA values using an interpolation algorithm. These new modified TS packets with the generated TOA values are passed along to the Multiplexer 260.
[0045] At step 600 the Multiplexer 260 receives TS packets from the Non-Modified TS Packet Buffer 250, as well as packets from Stream Processors 240. In all cases, TS packets received by the Multiplexer 260 contain corresponding TOA stamps. Using the TOA stamps, the Multiplexer 260 determines the time of departure for each outgoing TS packet using a suitable constant delay model, such as a constant delay model described for MPEG-2. According to this approach, the time of departure (TOD) for each outgoing TS packet is determined as:
[0046] TOD = TOA + d (1)
where d is the constant delay through the system from the instant of arrival to the instant of departure. Multiplexer 260 might incorporate PCR correction in case actual TS packet departure times differ from the ideal value in Equation (1). Since each outgoing packet thus effectively undergoes a constant delay through the system, the outgoing transport stream will be a compliant MPEG-2 transport stream.
[0047] At step 650 Multiplexer 260 may optionally remove TOA stamps after multiplexing. At step 700 the new transport stream is output.
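The constant-delay model of Equation (1) can be sketched as follows (illustrative Python, not part of the disclosure; the function name and the value of d are assumptions):

```python
# Minimal sketch of Equation (1): every packet departs exactly d clock ticks
# after its (possibly interpolated) TOA, so the multiplexer only needs to
# order packets by their departure time TOD = TOA + d.

def departure_times(stamped_packets, d):
    """stamped_packets: iterable of (toa, packet) pairs;
    returns (tod, packet) pairs ordered by departure time."""
    return sorted(((toa + d, pkt) for toa, pkt in stamped_packets),
                  key=lambda item: item[0])
```

Because d is the same for every packet, ordering by TOD preserves the relative input timing, which is what keeps the output stream MPEG-2 compliant.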
[0048] Description of Output TOA Computation
[0049] A fundamental problem faced by the Stream Processor 240 in computing output TOAs is to determine how to assign TOA values to outgoing TS packets when the operation of stream processing destroys any connection between input and output bits.
[0050] Synchronization points in the TS packets, however, are present in both the transport stream input to a stream processor 240 and the bits output from the stream processor 240. Such synchronization points are described in detail in the related application Ser. No. referenced above (Attorney Docket No.: 68775-052) and described in some detail below.
[0051] As taught therein, synchronization points are either physical bit patterns or logical points in the input elementary stream that do not vary under the operation of stream processing, irrespective of any transcoding or splicing. Additional attributes possessed by such synchronization points are that they regularly recur in both the input and output elementary streams, and each such point corresponds to a unique instant in the encoder system time clock (STC).
[0052] Synchronization points can be physical or virtual. Physical synchronization points consist of actual bit patterns (finite sequences of bits in the elementary stream) which are present in the input as well as the output, and which are a-priori associated with a certain presentation time. Examples of these are the well-known start codes or syncwords found in all the international video and audio coding standards such as MPEG-1, MPEG-2, MPEG-4, H.261, H.263 and H.26L. For example, in the case of MPEG-1 and MPEG-2 video, these include the sequence header code, GOP start code, picture start code, slice start code, sequence end code and other extension and user data start codes. MPEG-4 video has equivalents of all these start codes except for slice start code.
[0053] All MPEG (1, 2 or 4) based video processing devices that do not alter the frame rate must output one picture start code for each one that is received; hence picture start codes are synchronization points for this application. Further, in the case of MPEG-2 video, all such devices must forward the slice start codes received at the beginning of each row of macroblocks; these provide a denser sequence of synchronization points in addition to picture start codes. In the case of MPEG-1 audio Layers 1 and 2, the syncword at the start of each audio frame provides a dense sequence of synchronization points.
[0054] In order to carry out TOA interpolation, synchronization points must be selected beforehand during the design of the system. According to one method of selection, the synchronization points are selected such that there is at least one such point in every incoming TS packet carrying the elementary stream to be processed; this would ensure that there is a synchronization point for every incoming TOA stamp. A less demanding method taught by the invention is to select a less frequent sequence of synchronization points and use interpolation to calculate TOA values for outgoing TS packets.
[0055] FIG. 4 and FIG. 7 illustrate how the inventive method uses the synchronization points and the information that System Input subsystem 210 stamps on each TS packet to calculate "arrival times" for modified packets. The first step 510 in the inventive method is to determine the TOA for the start of each synchronization point in the input. This is the instant at which the first byte of the synchronization point entered the system. This step is carried out at the input to the Stream Processor 240. The TOA (TOA_SYNCx) of a given synchronization point, SYNCx 331, is calculated as:
[0056] TOA_SYNCx = TOA_SYNCxPKT + (TOA_SYNCxPKT+1 - TOA_SYNCxPKT) * B_SYNCx / (188 * (N_SYNCxPKT+1 - N_SYNCxPKT)) (2)
where TOA_SYNCxPKT 321 and TOA_SYNCxPKT+1 322 are the input TOA stamps (as stamped by the System Input subsystem 210) of the incoming TS packet 311 containing the Synchronization Point SYNCx 331 and of the next TS packet 312 with the same PID, N_SYNCxPKT 341 and N_SYNCxPKT+1 342 are the packet sequence counts (again as stamped by the System Input subsystem 210) of the above two TS packets with TOA = TOA_SYNCxPKT 321 and TOA = TOA_SYNCxPKT+1 322, respectively, and B_SYNCx 351 is the distance in bytes between the first byte of the Synchronization Point SYNCx 331 and the start of the TS packet containing it.
[0057] The effect of the above calculation is to translate the TOA from the start of the TS packet to the actual byte in the payload corresponding to the start of the synchronization point. This step 510 is carried out at the input to the Stream Processor 240, before the stream undergoes alteration.
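The calculation of Equation (2) can be sketched as follows (illustrative Python, not part of the disclosure; the function and parameter names are the editor's):

```python
# Hedged sketch of Equation (2): translate the TOA stamped on the start of a
# TS packet to the TOA of a synchronization point b_sync bytes into its
# payload, by linear interpolation between that packet and the next packet of
# the same PID.

TS_PACKET_SIZE = 188  # bytes per MPEG-2 TS packet

def sync_point_toa(toa_pkt, toa_next_pkt, n_pkt, n_next_pkt, b_sync):
    """toa_pkt/n_pkt: TOA stamp and packet sequence count of the packet
    containing the sync point; toa_next_pkt/n_next_pkt: the same for the next
    packet with the same PID; b_sync: byte offset of the sync point from the
    start of its packet."""
    # Bytes of transport stream (all PIDs) between the two packet starts
    bytes_between = TS_PACKET_SIZE * (n_next_pkt - n_pkt)
    return toa_pkt + (toa_next_pkt - toa_pkt) * b_sync / bytes_between
```

For example, if the next same-PID packet arrives one packet (188 bytes) and 188 clock ticks later, a sync point 94 bytes into the payload is assigned a TOA 94 ticks after the packet start.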
[0058] The crucial advantage achieved in this step is that by definition each synchronization point also appears in the output and thus the TOA is available for these points in the output. The inventive method further teaches how to interpolate, from this sparse sequence of output TOA values, the appropriate TOA values for the start of each outgoing TS packet. This is achieved by first computing the gradient of the TOA (change in TOA per byte) between two successive synchronization points at the output, and using this gradient to stamp TOA values for each outgoing TS packet between these synchronization points.
[0059] The system next computes the output TOA gradient between every pair of successive synchronization points at the output. This is carried out at the time of output TS packet generation, and consists of two parts. In the first part, step 520, the input TOA gradients are calculated as follows:
[0060] ΔIN_SYNCx = (TOA_SYNCx+1 - TOA_SYNCx) / (BC_SYNCx+1 - BC_SYNCx) (3)
where TOA_SYNCx and TOA_SYNCx+1 are the TOA values of two successive synchronization points, as computed using Equation (2), and BC_SYNCx 361 and BC_SYNCx+1 362 are their corresponding byte offsets in the input transport stream, counting from the first byte in the input. The byte offset of any synchronization point in the input may be calculated as
[0061] BC_SYNC = 188 * N_SYNCPKT + B_SYNC (4)
where N_SYNCPKT is the packet sequence count of the input TS packet in which the synchronization point is contained, and B_SYNC is its byte offset from the start of that packet, as described in the explanation for Equation (2).
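Equations (3) and (4) can be sketched as follows (illustrative Python, not part of the disclosure; function names are the editor's):

```python
# Hedged sketch of Equations (3) and (4): absolute byte offsets of
# synchronization points in the input, and the input TOA gradient
# (clock ticks per input byte) between two successive sync points.

def byte_offset(n_sync_pkt, b_sync, packet_size=188):
    """Equation (4): absolute byte offset of a sync point, given the sequence
    count of its packet and its offset from the packet start."""
    return packet_size * n_sync_pkt + b_sync

def input_toa_gradient(toa_sync, toa_next_sync, bc_sync, bc_next_sync):
    """Equation (3): change in TOA per input byte between two successive
    synchronization points, using TOAs from Equation (2) and byte offsets
    from Equation (4)."""
    return (toa_next_sync - toa_sync) / (bc_next_sync - bc_sync)
```

The gradient has units of clock ticks per byte; a stream arriving at a constant rate yields a constant gradient.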
[0062] The ΔIN_SYNC values correspond to the gradient of the TOA at every output synchronization point, counted using input byte counts. But due to the modification of the underlying elementary stream by the stream processing algorithm, the number of input bytes between two synchronization points in the input may not match the number of bytes between the same two synchronization points at the output. To account for this, the required output TOA gradient is computed in step 530 from the input gradient by multiplying the latter by the transmission ratio, which is the ratio of input bytes to output bytes resulting from the particular stream processing operation that is used. For example, in the case of transrating, or reduction of bit rate, the transmission ratio would be equal to or greater than unity. However, in the case of splicing, where a portion of the input stream is replaced by a second stream, this ratio can be less than unity. Further, most stream processing operations modify the input bit counts in a variable manner, resulting in a variable transmission ratio; hence the latter must be recomputed for each synchronization point. The output TOA gradient, ΔOUT_SYNCx, at a given synchronization point, SYNCx 331, is thus calculated as:
[0063] ΔOUT_SYNCx = η_SYNCx * ΔIN_SYNCx (5)
where η_SYNCx is the transmission ratio at the synchronization point SYNCx, namely the ratio of the byte count between the synchronization point SYNCx and the subsequent synchronization point in the input to the corresponding byte count between the same two points in the output. The invention teaches that ideally, the value of η_SYNC should be recomputed for every synchronization point. However, the invention also teaches a less restrictive approach in which it is recomputed only once for every suitably defined group of synchronization points. For example, in the case of video transcoding, all the synchronization points in a picture can have the same value of η_SYNC, calculated using the input and output byte counts of the picture.
[0064] The final step, step 540, is to determine and stamp the output TOA values for each outgoing TS packet. This is achieved as follows. For each outgoing TS packet containing a synchronization point, the TOA (TOA_SYNCPKT) is calculated using the TOA of the synchronization point, the output byte offset and the output TOA gradient:
[0065] TOA_SYNCPKT = TOA_SYNC - BOUT_SYNC * ΔOUT_SYNC (6)
where BOUT_SYNC is the output byte offset of the synchronization point from the start of the packet. For all other packets, the TOA (TOA_PKT) is extrapolated from the TOA of the preceding TS packet containing a synchronization point (TOA_SYNCPKT), using the output TOA gradient:
[0066] TOA_PKT = TOA_SYNCPKT + 188 * N_PKT * ΔOUT_SYNC (7)
where N_PKT is the distance (in output packet counts) of this TS packet from the last output TS packet containing a synchronization point.
[0067] As described earlier, the output Multiplexer 260, in step 550, uses the TOA of outgoing TS packets to determine their multiplexing order and departure times using a constant delay approach, thus delivering a compliant MPEG-2 transport stream.
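Equations (5) through (7) can be sketched together as follows (illustrative Python, not part of the disclosure; function and parameter names are the editor's):

```python
# Hedged sketch of Equations (5)-(7): scale the input TOA gradient by the
# transmission ratio to obtain ticks-per-output-byte, then stamp output
# packets with interpolated/extrapolated TOA values.

def output_toa_gradient(transmission_ratio, input_gradient):
    """Equation (5): transmission_ratio is input bytes / output bytes between
    two sync points, so the result is clock ticks per output byte."""
    return transmission_ratio * input_gradient

def toa_of_sync_packet(toa_sync, bout_sync, out_gradient):
    """Equation (6): TOA of the output packet containing a sync point, where
    bout_sync is the sync point's byte offset from the packet start (the
    packet start precedes the sync point, hence the subtraction)."""
    return toa_sync - bout_sync * out_gradient

def toa_of_packet(toa_sync_pkt, n_pkt, out_gradient, packet_size=188):
    """Equation (7): extrapolate the TOA of a packet lying n_pkt output
    packets after the last sync-point-bearing output packet."""
    return toa_sync_pkt + packet_size * n_pkt * out_gradient
```

For instance, with an input gradient of 0.5 ticks per input byte and a 2:1 transrating ratio, the output gradient is 1.0 tick per output byte, and each subsequent 188-byte output packet is stamped 188 ticks later.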
[0068] Selecting Synchronization Points
[0069] As explained above, the input transport stream is parsed to identify "synchronization points" in the elementary stream it carries. Synchronization points are points or locations within a stream that can be used as a basis for identifying locations near which incoming ancillary data, such as PCR and TOA stamps, should be located in a new transport stream carrying a processed version of the incoming elementary stream. In principle, synchronization points are locations in the elementary stream which are known to bear a clear and fixed timing relationship with the system time clock of the program comprising the elementary stream and therefore can serve as a basis for retiming or re-synchronizing ancillary data to the system time clock in a sufficiently accurate fashion.
[0070] The types of synchronization points used according to the invention illustratively meet all of the following criteria:
[0071] (a) System Time Clock Correspondence: An important underpinning of the invention is that ancillary data can be re-timed or re-synchronized in the new systems layer stream produced after stream processing by locating the ancillary data in a certain vicinity of a synchronization point of the elementary stream after stream processing ("processed elementary stream"). That is, in lieu of determining the location by direct reference to the system time clock (which would require recovery of the system time clock), the ancillary data is located in a vicinity of a synchronization point of the elementary stream (which in turn, is in synchronism with the system time clock of the program comprising the elementary stream). Therefore, the type of point chosen for use as a synchronization point must correspond with a particular determinable time of the system time clock of the program comprising the elementary stream, even though this particular time need not be explicitly determined.
[0072] (b) Invariance to Stream Processing: According to the invention, ancillary data is initially located within the original systems layer stream, which in the embodiments discussed above is the transport stream, in a certain vicinity of a specific identifiable synchronization point in the elementary stream, prior to stream processing. Likewise, after stream processing, this ancillary data should be located within the new transport stream (more generally, the new systems layer stream) in a similar vicinity to the same synchronization point of the stream-processed elementary stream. In order to enable re-locating the ancillary data in the new elementary stream, the same synchronization point must be present in the elementary stream both before stream processing and after stream processing.
[0073] (c) Continual Recurrence In The Elementary Stream: Generally, ancillary data is expected to recur continually throughout the systems layer stream, or at least the sequence carrying the processed elementary stream. Likewise, the type of synchronization point chosen for use in the invention should also continually recur within the processed elementary stream. In other words, over the course of time, so long as information is being carried in the systems layer stream for the elementary stream to be stream processed, and so long as there is ancillary data to be retimed or re-synchronized, one should also expect to find synchronization points in the elementary stream. Otherwise, such candidate synchronization points cannot provide a suitable reference by which to relate the ancillary data.
[0074] In addition to the above criteria, it is preferable to choose a type of synchronization point that occurs frequently within the elementary stream. As will be appreciated from the description below, the higher the frequency of occurrence of the synchronization point, the more accurate will be the retiming or re-synchronizing of the ancillary data in the new transport stream carrying the processed elementary stream. More specifically, two successive synchronization points define a temporal locale, which is a portion of an elementary stream corresponding to an elapsed duration in time of the system time clock of the program of which the elementary stream is a component. According to the invention, ancillary data occurring in a given temporal locale (between two synchronization points) of an input systems layer stream is gathered prior to processing the systems layer stream, and the specific temporal locale in which the ancillary data was gathered is noted. After stream processing, the corresponding temporal locale in the processed elementary stream is located, and the ancillary data is inserted into the new systems layer stream, containing the processed elementary stream, at that identified temporal locale. However, the amount of elementary stream data in a given temporal locale may change as a result of the stream processing. As such, the precise corresponding time of the systems time clock at which ancillary data may be inserted into the new systems layer stream will be different from the original time of the systems time clock of the location within the original systems layer stream from which the ancillary data was extracted. This difference introduces an error or drift in the synchronism of the ancillary data relative to the original timing of such ancillary data in the systems layer stream before processing. It is desired to maintain such a synchronism error or drift within a tolerable range.
In a worst case scenario, ancillary data located in the original systems layer stream at one end of a temporal locale (e.g., at the latest time or end of the temporal locale) is inserted into the new processed systems layer stream at the opposite end of the temporal locale (e.g., the earliest time, or beginning of the temporal locale). As can be appreciated, the maximum error or drift in synchronism is approximately equal to the duration of the temporal locale. Therefore, by increasing the frequency of synchronization points, the duration of temporal locales is shortened and the maximum possible error or drift in synchronism of ancillary data is reduced. In any event, it is generally preferred for the frequency of occurrence of the type of synchronization point to be at least equal to the frequency of occurrence of the ancillary data to be retimed or re-synchronized.
[0075] Considering these criteria, there are two classes of synchronization points that can be used, as discussed above. One is a physical synchronization point, which corresponds to a predefined, unvarying sequence of bits or code which can be identified in the bitstream. For example, in the case of an MPEG-1, MPEG-2 or MPEG-4 elementary stream, any start code can serve as a synchronization point. In the MPEG-1, MPEG-2 and MPEG-4 standards, each start code is a 32-bit code comprising a 24-bit start code prefix (23 '0' bits followed by a single '1' bit: 0000 0000 0000 0000 0000 0001) followed by one byte that distinguishes the type of start code from each other type. The following are examples of MPEG-2 video start codes, and the distinguishing byte that identifies them:
[0076]
picture_start_code: 0x00
slice_start_code: 0x01 through 0xAF
user_data_start_code: 0xB2
sequence_header_code: 0xB3
sequence_error_code: 0xB4
extension_start_code: 0xB5
sequence_end_code: 0xB7
group_start_code: 0xB8
[0077] Of these, the group_start_code, the picture_start_code and the slice_start_code are typically good candidates for use as synchronization points. The group_start_code immediately precedes a group of pictures (GOP) within the video elementary stream. GOPs are "entry points," i.e., random access points, at which a decoder can arbitrarily start decoding, e.g., in a trick mode operation (jump, fast forward, rewind, etc.). Such an entry point may also be used by a decoder when it is powered on, or otherwise caused to tune to, a systems layer stream which is already in the middle of transfer. The picture_start_code is required by MPEG-1, MPEG-2 and MPEG-4 (and optional in MPEG-4 part 10) to be present at the start of each encoded video picture. Depending on the type of stream processing, this start code will also be present in the video elementary stream after stream processing. Also, this start code is synchronized to the start of a video picture and therefore coincides with the true decoding time and presentation time of the picture (whether or not DTSs or PTSs representing the decoding time and/or presentation time are present in the systems layer stream). Generally speaking, picture_start_codes will occur at a higher frequency than group_start_codes. The slice_start_code is also a good candidate. The slice_start_code is provided at the beginning of a slice, which (according to MPEG-1 and MPEG-2) includes all or part of the macroblocks of a given macroblock row of a video picture. (According to H.264, a slice can span more than one macroblock row.) The particular macroblock row to which the slice_start_code pertains can be easily determined using a well-defined formula. Therefore, the slice_start_code coincides with the time of presentation of a decoded version of the corresponding slice location in the video picture. Generally speaking, slice_start_codes will occur at a much higher frequency than picture_start_codes.
Typically, there will be at least one slice per macroblock row, and a device that parses the elementary stream can determine the particular horizontal offset within the macroblock row at which the slice occurs. Therefore, the correspondence of the slice to the display time of information represented by the slice can be determined.
In some circumstances, it is difficult to choose an actual physical synchronization point that meets all of the above criteria. For example, in transcoding an MPEG-2 video signal to an MPEG-4 video signal, slices may appear in the MPEG-2 video signal but not the MPEG-4 video signal. In the alternative, the physical synchronization points that do appear might not recur at a sufficiently high frequency to provide a good reference for retiming or re-synchronizing the ancillary data. For example, picture start codes might not occur frequently enough to provide a sufficiently accurate reference by which ancillary data, such as PCRs, can be resynchronized. In such a case, it may be desirable to choose a virtual synchronization point. Unlike a physical synchronization point, a virtual synchronization point might not correspond to a very explicitly predetermined code or sequence of bits. Rather, a virtual synchronization point might correspond to a bit, or sequence of bits, representing a well-defined, deterministically identifiable layer of the elementary stream, which may start with an arbitrary bit pattern not known ahead of time. For example, MPEG-2 video slices contain individual macroblocks, and each macroblock starts with a variable length code indicating the macroblock address increment. The variable length code representing the macroblock address increment is chosen from a table of multiple macroblock address increment codes.
Such a variable length code can be easily identified, but it is not known ahead of time which specific one will be encountered; the specific code encountered will depend on the number of skipped macroblocks between the last encoded macroblock and the current encoded macroblock. Nevertheless, the location of the macroblock in a video picture can be determined with absolute accuracy and therefore so can the corresponding display time of the macroblock. Therefore, the start of a macroblock can provide a very effective virtual synchronization point because, generally, macroblocks occur at an even higher frequency than slices.
As stream processing can include any combination of transcoding, editing or splicing, the amount of information in an elementary stream between two successive synchronization points may be changed. For example, in transcoding, the amount of information: (a) in a video picture, between video picture start codes; (b) in a slice, between slice start codes; or (c) in a sequence of one or more macroblocks, between successive macroblock address increment codes, can be changed. Likewise, consider the case of a splice where several video pictures are inserted between two video pictures of an original elementary stream. By definition, the amount of elementary stream information between the picture start code of the original video picture preceding the insert, and the picture start code of the original video picture following the insert, will increase. Nevertheless, the synchronization points will survive the stream processing operation. Moreover, systems layer stream information that was temporally located at a particular vicinity of one synchronization point in the original elementary stream should be temporally located as close as possible to that same synchronization point in the new systems layer stream containing the processed elementary stream.
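Physical synchronization points such as MPEG-2 start codes can be located by a simple byte scan for the 0x000001 prefix. The following sketch is illustrative only (not part of the disclosure); the function and dictionary names are the editor's.

```python
# Illustrative scanner for MPEG-2 video start codes (physical synchronization
# points): a 0x00 0x00 0x01 prefix followed by one identifying byte.
# Slice start codes use identifying bytes 0x01 through 0xAF.

START_CODE_NAMES = {
    0x00: "picture_start_code",
    0xB3: "sequence_header_code",
    0xB7: "sequence_end_code",
    0xB8: "group_start_code",
}

def find_start_codes(data: bytes):
    """Yield (byte_offset, name) for each start code found in `data`."""
    i = 0
    while i + 3 < len(data):
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            b = data[i + 3]
            if 0x01 <= b <= 0xAF:
                yield i, "slice_start_code"
            else:
                yield i, START_CODE_NAMES.get(b, "other_start_code")
            i += 4
        else:
            i += 1
```

The byte offset returned for each start code is exactly the quantity B_SYNC / BC_SYNC that the TOA interpolation equations above operate on.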
As can be appreciated from the discussion above, many factors influence the choice of types of synchronization point to be used to retime or re-synchronize the ancillary data. According to one embodiment, the choice of synchronization point type(s) to be used is predetermined and remains fixed during operation. However, it is preferable to adapt the choice of synchronization point type, either once for each elementary stream, or dynamically in real-time, to suit the particular stream processing, types of elementary stream(s) to be processed and types of ancillary data to be retimed or re-synchronized. Illustratively, the synchronization point type may be chosen by an operator or automatically selected by the system according to the invention. Generally, automatic adaptation is not only attractive (to minimize operator training and dependence) but also feasible. The reason is that the stream processor, and other devices that work with it, must be able to parse the incoming systems layer and elementary streams as well as to format them. It is not too much effort to also provide circuitry or software instructions which can determine the relative frequencies of occurrence of different types of ancillary data, synchronization points, etc. to facilitate automatic selection of synchronization point type(s). Note also that more than one synchronization point type may be used simultaneously; the synchronization point types need only occur serially in the elementary stream. In addition, it is sometimes desirable to use both physical synchronization points, such as start codes, and virtual synchronization points, such as the points in the bit stream corresponding to macroblocks, simultaneously. This would ensure that synchronization points occur in the bit stream with a sufficiently high frequency of occurrence and regularity.
The above discussion is intended to be merely illustrative of the invention.
Those having ordinary skill in the art may devise numerous alternative embodiments of the methods and systems described above without departing from the spirit and scope of the following claims.

Claims

[0083] The claimed invention is:
[0084] 1. A method of processing a series of original elementary stream segments within an original systems stream, the original systems stream comprising a series of systems stream segments, each systems stream segment comprising a systems layer specific segment of information and one elementary stream segment, the series of systems stream segments comprising the series of original elementary stream segments to be processed, the method comprising:
(a) recording the time of arrival and the segment count of each systems stream segment of the original systems stream;
(b) identifying a first and subsequent synchronization points within the series of original elementary stream segments to be processed, wherein the synchronization points are a type of sequential location of the elementary stream (1) which recurs continually throughout the elementary stream, (2) is synchronized to a systems time clock of the elementary stream, and (3) is always present in the elementary stream both prior to and after the processing;
(c) calculating an arrival time for the first and the subsequent synchronization points;
(d) processing elementary stream information within the series of original elementary stream segments to produce a modified sequence of elementary stream information to be carried between the first and the subsequent synchronization points;
(e) inserting the modified sequence of elementary stream information into a series of new elementary stream segments;
(f) calculating a new arrival time for each elementary stream segment within the series of new elementary stream segments based on the arrival times and segment counts of segments within the series of original elementary stream segments, and arrival times of the first and the subsequent synchronization points; and
(g) inserting the series of new elementary stream segments into a new systems stream.
[0085]
2. The method of claim 1 wherein the series of original elementary stream segments contains an intervening segment with no synchronization point between the first and the subsequent synchronization points.
[0086]
3. The method of claim 1 wherein the new series of elementary stream segments comprises an elementary stream segment containing the first synchronization point, an intervening elementary stream segment with no synchronization point, and an elementary stream segment containing the subsequent synchronization point, wherein interpolation is used to calculate an arrival time for the intervening elementary stream segment with no synchronization point.
[0087]
4. The method of claim 1 wherein the time of arrival for the first synchronization point is calculated based on the arrival time of the segment containing the first synchronization point and the position of the synchronization point within the segment containing the synchronization point relative to the start of the segment.
[0088]
5. The method of claim 1 wherein the calculation of a new arrival time for each elementary stream segment within the series of new elementary stream segments is further based on the ratio of the number of bits in a plurality of original elementary stream segments to the number of bits in a plurality of new elementary stream segments.
[0089]
6. The method of claim 1, wherein each of said systems stream segments in the original systems stream has an identifier, wherein said systems stream segments comprising said elementary stream segments belonging to a particular elementary stream are identified by the same identifier, and wherein a first series of program clock references are included in systems stream segments identified by the same identifier used to identify a particular elementary stream, said method further comprising: assigning a new identifier for said first series of program clock references; and inserting said first series of program clock references into systems stream segments identified by said new identifier.
[0090]
7. The method of claim 6 wherein said newly-identified system stream segments comprising said series of program clock references are forwarded for insertion into said new systems stream.
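Claims 6 and 7 describe carving the program clock references out onto a separately identified series of segments. A sketch in MPEG-2 transport terms, where the "identifier" is a PID; the dict-based packet layout is a simplification (real transport packets carry the PCR in the adaptation field).

```python
def remap_pcr_packets(packets, es_pid, new_pcr_pid):
    """For every packet on the elementary stream's PID that carries a
    PCR, emit a packet carrying that PCR on the newly assigned PID,
    so the clock references are forwarded for insertion into the new
    systems stream independently of the modified elementary stream."""
    out = []
    for pkt in packets:
        if pkt["pid"] == es_pid and pkt.get("pcr") is not None:
            # forward the clock reference under its own new identifier
            out.append({"pid": new_pcr_pid, "pcr": pkt["pcr"]})
    return out

stream = [{"pid": 0x100, "pcr": 123456}, {"pid": 0x100, "pcr": None},
          {"pid": 0x101, "pcr": 789}]
pcr_only = remap_pcr_packets(stream, es_pid=0x100, new_pcr_pid=0x1FF)
# pcr_only == [{"pid": 0x1FF, "pcr": 123456}]
```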
[0091] 8. A method of processing one elementary stream in an original transport stream of multiple elementary streams, the one elementary stream to be consumed according to a predefined and deterministic schedule relative to a particular system clock of a program that comprises the elementary stream, the original transport stream comprising a series of transport packets, each transport packet comprising a systems layer specific segment of information and one elementary stream segment, the series of transport stream packets comprising an original series of transport stream packets containing the one elementary stream, the method comprising: (a) recording the time of arrival and the packet count of each transport packet in the original transport stream; (b) identifying a first and subsequent synchronization points within the original series of transport packets containing the one elementary stream, wherein the synchronization points are a type of sequential location of the one elementary stream (1) which recurs continually throughout the elementary stream, (2) is synchronized to a systems time clock of the elementary stream, and (3) is always present in the elementary stream both prior to and after the processing; (c) calculating an arrival time for the first and the subsequent synchronization points; (d) processing elementary stream information within the original series of transport packets containing the one elementary stream to produce a modified sequence of elementary stream information to be carried between the first and the subsequent synchronization points; (e) inserting the modified sequence of elementary stream information into a series of new transport packets; (f) calculating a new arrival time for each packet within the series of new transport packets based on the arrival times and packet counts of transport packets within the original series of transport packets containing the one elementary stream, and arrival times of the first and the subsequent synchronization points; and (g) inserting the series of new transport packets into a new systems stream.
[0092] 9. The method of claim 8 wherein the series of original transport packets containing the one elementary stream contains an intervening packet with no synchronization point between the first and the subsequent synchronization points.
[0093] 10. The method of claim 8 wherein the series of new transport packets comprises a transport packet containing the first synchronization point, an intervening transport packet with no synchronization point, and a transport packet containing the subsequent synchronization point, wherein interpolation is used to calculate an arrival time for the intervening transport packet with no synchronization point.
[0094] 11. The method of claim 8 wherein the time of arrival for the first synchronization point is calculated based on the arrival time of the transport packet containing the first synchronization point and the position of the synchronization point within the transport packet containing the synchronization point relative to the start of the transport packet.
[0095] 12. The method of claim 8, wherein each of said transport packets in the original systems stream has an identifier, wherein said transport packets comprising said elementary stream segments belonging to a particular elementary stream are identified by the same identifier, and wherein a first series of program clock references are included in transport packets identified by the same identifier used to identify a particular elementary stream, said method further comprising: assigning a new identifier for said first series of program clock references; and inserting said first series of program clock references into transport packets identified by said new identifier.
[0096] 13. The method of claim 12 wherein said newly-identified transport packets comprising said series of program clock references are forwarded for insertion into said new systems stream.
PCT/US2004/026124 2003-08-13 2004-08-12 Method and system for re-multiplexing of content-modified mpeg-2 transport streams using interpolation of packet arrival times WO2005020558A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CA002535455A CA2535455A1 (en) 2003-08-13 2004-08-12 Method and system for re-multiplexing of content-modified mpeg-2 transport streams using interpolation of packet arrival times

Applications Claiming Priority (10)

Application Number Priority Date Filing Date Title
US10/640,866 2003-08-13
US10/640,872 US7342968B2 (en) 2003-08-13 2003-08-13 Method and system for modeling the relationship of the bit rate of a transport stream and the bit rate of an elementary stream carried therein
US10/640,871 US7693222B2 (en) 2003-08-13 2003-08-13 Method and system for re-multiplexing of content-modified MPEG-2 transport streams using PCR interpolation
US10/641,322 2003-08-13
US10/641,323 2003-08-13
US10/640,866 US7227899B2 (en) 2003-08-13 2003-08-13 Method and system for re-multiplexing of content-modified MPEG-2 transport streams using interpolation of packet arrival times
US10/641,323 US20050036557A1 (en) 2003-08-13 2003-08-13 Method and system for time synchronized forwarding of ancillary information in stream processed MPEG-2 systems streams
US10/640,872 2003-08-13
US10/640,871 2003-08-13
US10/641,322 US7274742B2 (en) 2003-08-13 2003-08-13 Model and model update technique in a system for modeling the relationship of the bit rate of a transport stream and the bit rate of an elementary stream carried therein

Publications (2)

Publication Number Publication Date
WO2005020558A2 true WO2005020558A2 (en) 2005-03-03
WO2005020558A3 WO2005020558A3 (en) 2006-02-16

Family

ID=34222664

Family Applications (4)

Application Number Title Priority Date Filing Date
PCT/US2004/026082 WO2005020557A2 (en) 2004-08-12 Method and system for modeling the relationship of the bit rate of a transport stream and the bit rate of an elementary stream carried therein
PCT/US2004/026125 WO2005020559A2 (en) 2003-08-13 2004-08-12 Method and system for time synchronized forwarding of ancillary information in stream processed mpeg-2 systems streams
PCT/US2004/026124 WO2005020558A2 (en) 2003-08-13 2004-08-12 Method and system for re-multiplexing of content-modified mpeg-2 transport streams using interpolation of packet arrival times
PCT/US2004/026164 WO2005019999A2 (en) 2003-08-13 2004-08-12 Method and system for re-multiplexing of content-modified mpeg-2 transport streams using pcr interpolation

Family Applications Before (2)

Application Number Title Priority Date Filing Date
PCT/US2004/026082 WO2005020557A2 (en) 2004-08-12 Method and system for modeling the relationship of the bit rate of a transport stream and the bit rate of an elementary stream carried therein
PCT/US2004/026125 WO2005020559A2 (en) 2003-08-13 2004-08-12 Method and system for time synchronized forwarding of ancillary information in stream processed mpeg-2 systems streams

Family Applications After (1)

Application Number Title Priority Date Filing Date
PCT/US2004/026164 WO2005019999A2 (en) 2003-08-13 2004-08-12 Method and system for re-multiplexing of content-modified mpeg-2 transport streams using pcr interpolation

Country Status (2)

Country Link
CA (4) CA2535455A1 (en)
WO (4) WO2005020557A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100391249C (en) * 2005-09-28 2008-05-28 西安通视数据有限责任公司 Digital video frequency broadcasting switching method and apparatus thereof

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5537408A (en) * 1995-02-03 1996-07-16 International Business Machines Corporation Apparatus and method for segmentation and time synchronization of the transmission of multimedia data
US5566174A (en) * 1994-04-08 1996-10-15 Philips Electronics North America Corporation MPEG information signal conversion system
US5596581A (en) * 1994-04-08 1997-01-21 Philips Electronics North America Corporation Recording and reproducing an MPEG information signal using tagged timing information
US5703877A (en) * 1995-11-22 1997-12-30 General Instrument Corporation Of Delaware Acquisition and error recovery of audio data carried in a packetized data stream
US5835668A (en) * 1994-11-14 1998-11-10 Sony Corporation Transmission, recording and reproduction of digital data and time information in transport packets using a compression ratio
US6002687A (en) * 1996-01-02 1999-12-14 Divicon, Inc. MPEG transport stream remultiplexer
US20030002587A1 (en) * 2000-05-31 2003-01-02 Next Level Communications, Inc. Method for dealing with missing or untimely synchronization signals in digital communications systems
US20030043924A1 (en) * 2001-08-31 2003-03-06 Haddad Semir S. Apparatus and method for synchronizing video and audio MPEG streams in a video playback device

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH09139937A (en) * 1995-11-14 1997-05-27 Fujitsu Ltd Moving image stream converter
US5793425A (en) * 1996-09-13 1998-08-11 Philips Electronics North America Corporation Method and apparatus for dynamically controlling encoding parameters of multiple encoders in a multiplexed system
US6167084A (en) * 1998-08-27 2000-12-26 Motorola, Inc. Dynamic bit allocation for statistical multiplexing of compressed and uncompressed digital video signals
US6330286B1 (en) * 1999-06-09 2001-12-11 Sarnoff Corporation Flow control, latency control, and bitrate conversions in a timing correction and frame synchronization apparatus
US7088725B1 (en) * 1999-06-30 2006-08-08 Sony Corporation Method and apparatus for transcoding, and medium
JP2001251616A (en) * 2000-03-02 2001-09-14 Media Glue Corp Method and device for converting multiplexed sound/ moving picture compressing-coded signal, and medium recorded with conversion program
US6868125B2 (en) * 2001-11-29 2005-03-15 Thomson Licensing S.A. Transport stream to program stream conversion


Also Published As

Publication number Publication date
WO2005020558A3 (en) 2006-02-16
WO2005020557A2 (en) 2005-03-03
WO2005020557A3 (en) 2008-11-13
WO2005020559A2 (en) 2005-03-03
WO2005020559A3 (en) 2007-01-25
CA2535455A1 (en) 2005-03-03
CA2535457A1 (en) 2005-03-03
CA2535453A1 (en) 2005-03-03
CA2535306A1 (en) 2005-03-03
WO2005019999A2 (en) 2005-03-03
CA2535453C (en) 2014-04-15
CA2535457C (en) 2013-04-23
WO2005019999A3 (en) 2008-11-06

Similar Documents

Publication Publication Date Title
US7227899B2 (en) Method and system for re-multiplexing of content-modified MPEG-2 transport streams using interpolation of packet arrival times
US6912251B1 (en) Frame-accurate seamless splicing of information streams
US5703877A (en) Acquisition and error recovery of audio data carried in a packetized data stream
US8781003B2 (en) Splicing of encrypted video/audio content
US8503541B2 (en) Method and apparatus for determining timing information from a bit stream
US7274862B2 (en) Information processing apparatus
US6947448B2 (en) Data transmission device and data transmission method
US6101195A (en) Timing correction method and apparatus
JP4503858B2 (en) Transition stream generation / processing method
US7693222B2 (en) Method and system for re-multiplexing of content-modified MPEG-2 transport streams using PCR interpolation
US7433946B2 (en) Mechanism for transmitting elementary streams in a broadcast environment
JP3666625B2 (en) Data recording method and data recording apparatus
KR100308704B1 (en) Multiplexed data producing apparatus, encoded data reproducing apparatus, clock conversion apparatus, encoded data recording medium, encoded data transmission medium, multiplexed data producing method, encoded data reproducing method, and clock conversion method
US7228055B2 (en) Recording apparatus, video camera and computer program
EP1095521A2 (en) Method and apparatus for splicing
JP2001513606A (en) Processing coded video
CA2579981C (en) Ts transmission system, transmitting apparatus, receiving apparatus, and ts transmission method
US20050036557A1 (en) Method and system for time synchronized forwarding of ancillary information in stream processed MPEG-2 systems streams
US20060088052A1 (en) Decentralized method for generating an MPEG-2 multiprogram transport stream
CA2535457C (en) Method and system for re-multiplexing of content-modified mpeg-2 transport streams using pcr interpolation
Chen Examples of Video Transport Multiplexer
Reid An MPEG-2 digital decoder design: A practical approach with emphasis on elementary stream data flows
Oguz et al. Seamless audio splicing for ISO/IEC 13818 transport streams
JP2018137654A (en) Communication apparatus, communication program, and communication method
WO2000062551A1 (en) Frame-accurate seamless splicing of information streams

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref document number: 2535455

Country of ref document: CA

122 Ep: pct application non-entry in european phase