US20030112758A1 - Methods and systems for managing variable delays in packet transmission - Google Patents
- Publication number
- US20030112758A1 (application US10/084,559)
- Authority
- US
- United States
- Prior art keywords
- delay
- variance
- packet
- media
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/76—Architectures of general purpose stored program computers
- G06F15/78—Architectures of general purpose stored program computers comprising a single central processing unit
- G06F15/7839—Architectures of general purpose stored program computers comprising a single central processing unit with memory
- G06F15/7842—Architectures of general purpose stored program computers comprising a single central processing unit with memory on one IC chip (single chip microcontrollers)
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J1/00—Frequency-division multiplex systems
- H04J1/02—Details
- H04J1/16—Monitoring arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/062—Synchronisation of signals having the same nominal but fluctuating bit rates, e.g. using buffers
- H04J3/0632—Synchronisation of packets and cells, e.g. transmission of voice via a packet network, circuit emulation service [CES]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/14—Monitoring arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L1/00—Arrangements for detecting or preventing errors in the information received
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04J—MULTIPLEX COMMUNICATION
- H04J3/00—Time-division multiplex systems
- H04J3/02—Details
- H04J3/06—Synchronising arrangements
- H04J3/0635—Clock or time synchronisation in a network
- H04J3/0685—Clock or time synchronisation in a node; Intranode synchronisation
- H04J3/0697—Synchronisation in a packet node
Definitions
- the present invention relates generally to a method and system for the communication of digital signals, and more particularly to a method and system for managing delays in packet transmission, e.g. managing jitter, using a buffering procedure, and to a media gateway deploying the jitter management methods and systems.
- Media communication devices comprise hardware and software systems that utilize interdependent processes to enable the processing and transmission of analog and digital signals substantially seamlessly across and between circuit switched and packet switched networks.
- a voice over packet gateway enables the transmission of human voice from a conventional public switched network to a packet switched network, possibly traveling simultaneously over a single packet network line with both fax information and modem data, and back again.
- Benefits of unifying communication of different media across different networks include cost savings and the delivery of new and/or improved communication services such as web-enabled call centers for improved customer support and more efficient personal productivity tools.
- VoIP (voice-over-IP) requires notably less average bandwidth than a traditional circuit-switched connection for several reasons.
- Second, the digital audio bit stream utilized by VoIP may be significantly compressed before transmission using a codec (compression/decompression) scheme.
- a telephone conversation that would require two 64 kbps (one each way) channels over a circuit-switched network may utilize a data rate of roughly 8 kbps with VoIP.
- Jitter is the variable delay experienced in the course of packet transmission, resulting in varied packet arrival times, and is caused by networks providing different waiting times for different packets or cells. It may also be caused by lack of synchronization resulting from mechanical or electrical changes. Given the real-time nature of a live connection, jitter buffer management policies have a large effect on overall data quality. If the data is voice, actual sound losses range from a syllable to a word, depending on how much data is in a given packet.
- a receiver may include a buffer to store packets for an amount of time sufficient to allow sequenced, regular playout of the packets.
- an efficient technique is needed to determine the receiver buffer playout length and timing in real-time data communications such as VoIP. If the buffer delay or length is too short, “slower” packets will not arrive before their designated playout time and playout quality suffers. If the buffer delay is very long, it conspicuously disrupts interactive communications. Accurate knowledge of actual packet delays is necessary to determine optimal packet buffer delay for real-time communications.
- One approach to devising an appropriate buffer is to construct and maintain a distribution of the number of packets received by a system over time, namely a histogram.
- a buffer may then be constructed by equating the buffer length to the entire length of the histogram and equating the buffer initiation point to the time when the first packet is received, e.g., the minimum delay.
- a graph 100 a depicts a histogram 101 a of a number of packets received relative to time.
- the x-axis 102 a represents the delay experienced by packets and the y-axis 103 a represents the number of packet samples received.
- the vertical bars 104 a show the number of packets received in a defined span of time.
- a curve 105 a connects the central point of tops of the bars 104 a of the histogram 101 a .
- the curve 105 a depicts the distribution of the arrival time of packets. This curve is called the packet delay distribution (PDD) curve.
- PDD curves are often skewed earlier in time due to less delay experienced by most of the packets and, therefore, are often not symmetrical around the peak.
- One of ordinary skill in the art would be familiar with methods of creating histograms.
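As a concrete illustration, a delay histogram of the kind described above can be built by bucketing observed per-packet delays into fixed-width bins; the bin width and sample delays below are illustrative, not taken from the patent.

```python
from collections import Counter

def delay_histogram(delays_ms, bin_width_ms=30):
    """Bucket observed packet delays (in ms) into fixed-width bins.

    Returns a dict mapping bin start (ms) -> packet count, i.e. the
    discrete form of the packet delay distribution (PDD).
    """
    bins = Counter((int(d) // bin_width_ms) * bin_width_ms for d in delays_ms)
    return dict(sorted(bins.items()))

# Example: delays observed for six packets
hist = delay_histogram([35, 42, 61, 95, 99, 150], bin_width_ms=30)
```

Connecting the bin counts yields the PDD curve described above.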
- the present invention provides improved methods and systems for the determination of jitter buffers.
- the present invention enables the generation of buffers having sizes and delays such that, as designed, the buffers capture a substantial majority of packets while not being resource intensive.
- a packet delay histogram is estimated using any one of several delay estimation techniques.
- the histogram represents the distribution of the number of packets received by a system over a defined time.
- a playout delay evaluator calculates a plurality of variances, centered around a distribution peak, or mean average delay, and applies those variances to determine the buffer size and delay.
- the playout buffer monitor uses this calculated buffer size and delay to select, store and playout packets at their adjusted playout time.
- the present invention may be employed in a media gateway that enables data communications among heterogeneous networks.
- Media gateways provide media processing functions, data packet encapsulation, and maintain a quality of service level, among other functions.
- a gateway When a gateway operates as a receiver of voice data traffic, it buffers voice packets and outputs a continuous digital or analog stream.
- the present invention may be deployed to manage jitter experienced in the course of receiving packetized data and processing the data for further transmission through a packet-based or circuit-switched network.
- FIG. 1 a is a histogram depicting packets received by a system over time
- FIG. 1 b is a block diagram of a system that employs a first-in, first-out (FIFO) buffer and a numerically controlled oscillator (NCO) for jitter correction;
- FIG. 1 c is a schematic waveform representation of jitter
- FIG. 1 d is a diagram illustrating timings associated with the sending and receiving a packet
- FIG. 1 e depicts a histogram calculation employed in one approach of designing a buffer
- FIG. 1 f depicts a histogram calculation employed in a preferred embodiment of the present invention
- FIG. 1 g is an embodiment of the adaptive playout-buffering process of the present invention.
- FIG. 1 h is an arrangement of a playout delay evaluator and buffer monitor used in the present invention
- FIG. 2 a is a block diagram of a first embodiment of a hardware system architecture for a media gateway
- FIG. 2 b is a block diagram of a second embodiment of a hardware system architecture for a media gateway
- FIG. 3 is a diagram of a packet having a header and user data
- FIG. 4 is a block diagram of a third embodiment of a hardware system architecture for a media gateway
- FIG. 5 is a block diagram of one logical division of the software system of the present invention.
- FIG. 6 is a block diagram of a first physical implementation of the software system of FIG. 5;
- FIG. 7 is a block diagram of a second physical implementation of the software system of FIG. 5;
- FIG. 8 is a block diagram of a third physical implementation of the software system of FIG. 5;
- FIG. 9 is a block diagram of a first embodiment of the media engine component of the hardware system of the present invention.
- FIG. 10 is a block diagram of a preferred embodiment of the media layer component of the hardware system of the present invention.
- FIG. 10 a is a block diagram representation of a preferred architecture for the media layer component of the media engine of FIG. 10;
- FIG. 11 is a block diagram representation of a first preferred processing unit
- FIG. 12 is a time-based schematic of the pipeline processing conducted by the first preferred processing unit
- FIG. 13 is a block diagram representation of a second preferred processing unit
- FIG. 13 a is a time-based schematic of the pipeline processing conducted by the second preferred processing unit
- FIG. 14 is a block diagram representation of a preferred embodiment of the packet processor component of the hardware system of the present invention.
- FIG. 15 is a schematic representation of one embodiment of the plurality of network interfaces in the packet processor component of the hardware system of the present invention.
- FIG. 16 is a block diagram of a plurality of PCI interfaces used to facilitate control and signaling functions for the packet processor component of the hardware system of the present invention
- FIG. 18 is a schematic diagram of preferred components comprising the media processing subsystem of the software system of the present invention.
- FIG. 20 is a schematic diagram of preferred components comprising the packetization processing subsystem of the software system of the present invention.
- FIG. 21 is a schematic diagram of preferred components comprising the signaling subsystem of the software system of the present invention.
- FIG. 22 is a block diagram of a host application operative on a physical DSP.
- FIG. 23 is a block diagram of a host application operative on a virtual DSP.
- a clock is derived from a digital data signal and the data signal is stored in a buffer.
- the derived clock is input to an input counter, which runs a predetermined number of degrees out of phase with an output counter.
- the input counter may be initialized 180 degrees out of phase with the output counter.
- the output counter value is adjusted in accordance with the information processed from a look-up table, preferably a read-only table. This table outputs a coefficient to a numerically controlled oscillator (NCO).
- the NCO includes a low frequency portion that adds the coefficient successively to itself and outputs a carry out (CO) signal.
- a high frequency clock is fed to the high frequency portion of the NCO, which preferably divides down the high frequency clock to a clock frequency that is centered at the desired output frequency.
- the high frequency portion preferably includes an edge detect circuit that receives the CO signal and adjusts the frequency of the output clock to produce a compensation clock.
- the compensation clock adjusts the output counter, which causes the output buffer to delay a packet of data for a pre-determined amount of time, thereby outputting a digital signal that is substantially free of jitter.
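A minimal software model of the NCO's low-frequency portion — an accumulator that adds the coefficient to itself on each high-frequency tick and emits a carry-out (CO) on wrap — might look like the following; the 16-bit accumulator width is an assumption for illustration.

```python
def nco_carry_outs(coefficient, ticks, width=16):
    """Model the low-frequency portion of an NCO: a phase accumulator
    adds `coefficient` to itself each tick of the high-frequency clock
    and emits a carry-out (CO) whenever it wraps modulo 2**width.

    The average CO rate is f_hf * coefficient / 2**width, which is how
    the ROM coefficient steers the compensation clock frequency.
    """
    acc, carries = 0, 0
    modulus = 1 << width
    for _ in range(ticks):
        acc += coefficient
        if acc >= modulus:
            acc -= modulus
            carries += 1
    return carries
```

With a coefficient of half the modulus, a carry-out is produced every second tick, i.e. the high-frequency clock is divided by two.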
- In FIG. 1 b , a block diagram of a system 100 b that employs a FIFO buffer 104 b and a numerically controlled oscillator (NCO) 107 b for jitter correction is provided. It includes an input counter 101 b , an output counter 102 b , an AND gate 103 b , a buffer 104 b , a phase detection latch 105 b , a read only memory (ROM) 106 b , an input data line 109 b , an output line 111 b producing jitter-free data, the NCO 107 b , and a high frequency clock 110 b in communication with the NCO 107 b .
- Input counter 101 b is coupled to an input clock signal line 108 b.
- Variation in packet delay is not a static process.
- algorithmic approaches are required to estimate packet delay statistics with time-based estimates such as packet mean arrival time and variances from mean arrival time.
- Dynamic play-out delay adaptation algorithms rely for their adaptive adjustments on the statistics obtained from the timestamp and variable delay histories of the packets received.
- information such as timing and stream (continuous data packets after a break) numbers may be gathered from streams of data, and future network delay values are predicted by constructing a measured packet-delay distribution curve.
- the system maintains a delay histogram, each bin storing the relative frequency with which a particular delay value is expected to occur among the arriving packets. The histogram is then used to approximate the distribution in the form of a curve.
- the jitter buffer system incorporates a method that uses a linear recursive filter and is characterized by the weighting factor alpha.
- the delay estimate is computed as:
- α is a weighting factor
- d i is the amount of time from when the ith packet is generated by the source until it is played out at the destination host
- n i is the total delay introduced by the network
- v i is the variable delay experienced by packet i as it is sent from the source to the destination host.
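The estimator formula itself is not reproduced in this extract. A common linear recursive form consistent with these definitions — an exponentially weighted average characterized by α, with a companion variation estimate — can be sketched as follows; the exact equation used by the patent is assumed, not quoted.

```python
def update_estimates(d_prev, v_prev, n_i, alpha=0.998002):
    """One step of a linear recursive (exponentially weighted) delay
    estimator characterized by weighting factor alpha.

    d_prev, v_prev : previous delay and delay-variation estimates
    n_i            : measured network delay of packet i
    The default alpha is a conventional choice, not the patent's value.
    """
    d_i = alpha * d_prev + (1.0 - alpha) * n_i
    v_i = alpha * v_prev + (1.0 - alpha) * abs(d_i - n_i)
    return d_i, v_i
```

Larger α weights history more heavily and adapts slowly; smaller α tracks recent delays more closely.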
- a second approach adapts more quickly to the short burst of packets incurring long delays by using a weighting mechanism which incorporates two values into the weighting factor, one indicative of increasing trends in the delay and one indicative of decreasing trends.
- a third approach calculates the delay estimate as:
- S i is the set of all packets received during the talk spurt prior to the one initiated by packet i.
- a fourth approach adapts to sudden, large increases in the end-to-end network delay followed by a series of packets arriving almost simultaneously, referred to herein as spikes.
- the detection of the beginning of a spike is done by checking whether the delay between consecutive packets at the receiver is large enough to constitute a spike.
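A spike-start check of this kind can be sketched as follows; the threshold — a multiple of the running delay-variation estimate with an absolute floor — and both constants are illustrative assumptions, not values from the patent.

```python
def spike_started(n_i, n_prev, v_hat, factor=2.0, floor_ms=100):
    """Flag the start of a delay spike when the jump in network delay
    between consecutive packets exceeds a threshold: a multiple of the
    running delay-variation estimate v_hat, with an absolute floor."""
    return (n_i - n_prev) > max(factor * v_hat, floor_ms)
```

While in spike mode, such algorithms typically track the delay directly and return to normal adaptation once consecutive delays settle back down.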
- a packet delay histogram may be constructed.
- the packet delay histogram may be used to determine the required buffer size and delay by, for example, equating the buffer length to the length of the histogram and the buffer delay to the minimum delay experienced by the received packets, represented by the first data points on the histogram.
- One approach is to calculate the variance of the histogram, specifically the deviation around the point at which the peak number of packets arrives, and add that variance to the minimum delay experienced by the system. For example, if the variance is 60 ms and the minimum delay is 30 ms, then the buffer begins storing packets at the 30 ms point and continues storing packets for 60 ms.
- the variance used to determine the buffer parameters can be a calculated variance derived by multiplying the variance of the histogram by a multiplier (k).
- the histogram peak may be calculated by computing the mean, or average delay, of the histogram. In calculating the peak, it is preferred to first eliminate a portion of the histogram tail to avoid having the trailing portion excessively skew the calculation. The average is then calculated and associated with the peak. Using the peak, the variance of the histogram may be calculated. Once the peak and variance are calculated, the buffer size is obtained.
- the variance used to determine the buffer parameters is a calculated variance derived by multiplying the variance of the histogram by a multiplier (k).
- the graph represents histogram 101 e of a packet stream, specifically a depiction of the number of packets received at different points in time by the system.
- the x-axis 102 e represents the delay experienced by packets and the y-axis 103 e represents the number of packet samples received.
- the vertical bars 104 e show the number of packets received in a defined span of time.
- a curve 105 e connects the central point of tops of the bars 104 e of the histogram 101 e .
- the curve 105 e depicts the distribution of the arrival time of packets.
- the tail is eliminated at a defined point 106 e , which in this example is 270 ms on the x-axis 102 e . Therefore, the histogram area to the right of point 106 e is discarded.
- the mean is 150 ms and the variance is 90 ms.
- the buffer size may be defined as k*Var, where k can be any number, but is preferably in the range of 2 to 8 and more preferably either 2, 4 or 8, and the buffer begins accepting packets at the point defined by the mean minus half the buffer size, i.e., mean − (k*Var)/2.
- the buffer accepts packets from 60 ms to 240 ms.
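Assuming, from the worked example (mean 150 ms, Var 90 ms, k = 2, window 60–240 ms), that the k*Var window is centered on the mean, the buffer parameters can be computed as:

```python
def buffer_window(mean_ms, var_ms, k=2):
    """Single-variance buffer window: length is k * Var and, per the
    worked example, the window is assumed symmetric about the mean
    delay.  Returns (start_ms, end_ms)."""
    length = k * var_ms
    start = mean_ms - length // 2
    return start, start + length
```

With the example's numbers this reproduces a buffer that accepts packets from 60 ms to 240 ms.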
- the graph represents histogram 101 f of a packet stream received by a system.
- the x-axis 102 f represents the delay experienced by packets and the y-axis 103 f represents the number of packet samples received.
- the vertical bars 104 f show the number of packets received in a defined span of time.
- a curve 105 f connects the central point of tops of the bars 104 f of the histogram 101 f .
- the curve 105 f depicts the distribution of the arrival time of packets.
- the tail is eliminated at a defined point 106 f , which in this example is 270 ms on the x-axis 102 f . Therefore, the histogram area to the right of point 106 f is discarded.
- M is the mean
- x i represents the amount of delay experienced by packets arriving in a particular window of time i
- N is the total number of samples.
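With M, x_i, and N defined as above, the elided computation can be written explicitly. Since the worked examples treat Var as a time offset in milliseconds (e.g. a 90 ms "variance" added to or subtracted from the mean), the square-root (standard deviation) form is likely what the examples use; this reading is an assumption.

```latex
M = \frac{1}{N}\sum_{i} x_i,
\qquad
\mathrm{Var} = \sqrt{\frac{1}{N}\sum_{i}\left(x_i - M\right)^2}
```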
- the preferred embodiment of the invention utilizes at least two separately calculated variances to better estimate the buffer size and delay based upon the estimated histogram.
- the histogram is conceptually divided into two portions, one encompassing the packets that arrived prior to the mean delay and one encompassing the packets arriving after it. Where i packets have been received and the mean delay is associated with packet m, the first portion is defined by D 0 to D m−1 and the second by D m+1 to D i , the final packet.
- j extends from m+1 to i and the total number of samples includes those samples from m+1 to i.
- while the two separately calculated variances are calculated using one sample set of packets arriving before the mean delay and one sample set arriving after the mean delay, one would appreciate that the variances can also be calculated using sample sets that overlap or that, when taken together, comprise a subset of the packets received.
- the two variances are not equal because the histogram is asymmetrical. As shown in FIG. 1 f , Var 1 115 f is less than Var 2 117 f , reflecting the asymmetrical nature of the histogram and better approximating the actual distribution of packets received. This approach therefore ascertains the size and placement of the buffer more accurately while optimizing computational resources.
- Var 1 can be calculated from Var 2 , or vice versa, using pre-defined equations.
- Var 1 could be a multiple or factor of Var 2 , i.e., Var 1 = C*Var 2 , where C is a constant that is determined experimentally.
- Var 1 could be a fixed value depending on whether Var 2 exceeds or does not exceed a certain threshold value.
- the buffer size and timing can be determined.
- the buffer starts accepting packets at delay d, which is determined by subtracting Var 1 115 f from the mean 107 f.
- the buffer starts accepting packets at 90 ms and continues accepting for period T of 165 ms, or up to 255 ms.
- the variances used to determine the buffer parameters can also be calculated variances derived by multiplying Var 1 and/or Var 2 by a multiplier (k), where the multiplier can be any number, but is preferably in the range of 2 to 8, and more preferably 2, 4 or 8.
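The two-sided window can then be computed directly. The test values below reproduce the example above, assuming the 150 ms mean carries over: Var 1 = 60 ms and Var 2 = 105 ms give a window from 90 ms to 255 ms (period T = 165 ms).

```python
def asymmetric_window(mean_ms, var1_ms, var2_ms, k=1):
    """Buffer window from two one-sided variances: Var1 computed over
    packets arriving before the mean delay, Var2 over those arriving
    after.  The buffer opens at mean - k*Var1 and closes at
    mean + k*Var2, matching the histogram's asymmetry."""
    return mean_ms - k * var1_ms, mean_ms + k * var2_ms
```

Because Var 1 < Var 2 for a typical right-skewed PDD, the window extends further past the mean than before it, unlike the symmetric single-variance window.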
- FIG. 1 g depicts a block diagram of an adaptive process used for jitter correction using the above-described buffering method.
- the system comprises a sender 101 g and a receiver 102 g , the latter comprising a subtractor 103 g , a delay evaluator 104 g , a playout delay evaluator 106 g , and a playout buffer monitor 107 g .
- the packet is then sent to playout unit 112 g.
- Packet i is sent from the sender 101 g with a timestamp t i and reaches the receiver at time a i .
- the subtractor 103 g uses the timestamp to subtract t i from a i , producing the delay n i for packet i.
- the delay evaluator 104 g analyzes this value and performs one of the aforementioned delay evaluation techniques to generate the distribution of delays that comprise a packet delay histogram.
- the estimated packet delay histogram is communicated by the delay evaluator 104 g to the playout delay evaluator 106 g which, based upon a portion of the communicated histogram, determines the size and delay of the buffer employed by the playout buffer monitor 107 g .
- the receiver 102 g in accordance with the adjusted playout time, outputs packets to the playout unit 112 g for the final playout of the packet.
- delay smoothing is applied to the actual playout of packets by a delay smoother. While mean delay and variance are used to determine a calculated playout time, the use of delay smoothing further controls changes in playout time to specifically improve voice quality. Increases in playout time are made in larger steps while decreases in playout time are limited to smaller steps. If the calculated playout time calls for an increase in buffer delay, the buffer delay is increased by an amount greater than requested. If the calculated playout time calls for a decrease in buffer delay, the buffer delay is decreased by an amount less than requested.
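The asymmetric smoothing rule can be sketched as follows; the gain values are illustrative assumptions, since the patent does not specify step sizes.

```python
def smooth_playout_delay(current_ms, target_ms, up_gain=1.5, down_gain=0.5):
    """Asymmetric delay smoothing: an increase in buffer delay is
    applied with a gain > 1 (a larger step than requested), while a
    decrease is applied with a gain < 1 (a smaller step than
    requested), biasing the buffer toward fewer late losses."""
    delta = target_ms - current_ms
    gain = up_gain if delta > 0 else down_gain
    return current_ms + gain * delta
```

The asymmetry reflects the cost structure of voice playout: growing the buffer too slowly drops packets audibly, while shrinking it too quickly risks re-introducing late losses.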
- the playout delay evaluator 100 h and playout buffer monitor 103 h are shown in communication with an output device 114 h and data input 104 h .
- the playout delay evaluator 100 h preferably comprises a control circuit 101 h and packet delay distribution system 102 h for the calculation of buffer size and delay characteristics.
- the playout buffer monitor 103 h preferably comprises a packet data storage memory 112 h , buffer control circuit 107 h , delay timer 108 h , pointer list 109 h , and input and output controllers 111 h and 113 h respectively. It also contains stream parameter block 105 h and drift control block 106 h .
- the calculation of the mean delay and variances used to determine the buffer size and delay characteristics may be performed by the delay evaluator or by the playout delay evaluator 100 h , based upon data received from the delay evaluator.
- control circuit 101 h manages the calculation and communication of a set of buffer configuration parameters for each data stream and allocates buffer resources for each stream.
- Control circuit 101 h calculates the buffer size requirements for the stream using the packet size S(p), in bytes, and the packet rate T(r), e.g. one packet every 10 milliseconds. Dividing the buffer delay, BD, by the packet rate T(r) yields the number of packets PS that the buffer needs to accommodate i.e., the number of packet slots in the buffer 103 h.
- the buffer size, S(B) is then the product of packet size S(p) and the number of packet slots PS.
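The sizing arithmetic above can be sketched directly. The 160-byte, 10 ms packet used in the test corresponds to a hypothetical G.711 stream and is an illustrative assumption.

```python
def buffer_allocation(buffer_delay_ms, packet_period_ms, packet_size_bytes):
    """PS = BD / T(r): dividing the buffer delay BD by the packet
    period T(r) yields the number of packet slots.
    S(B) = S(p) * PS: the bytes of packet memory to allocate.
    Returns (slots, buffer_bytes)."""
    ps = buffer_delay_ms // packet_period_ms
    return ps, ps * packet_size_bytes
```

A 60 ms buffer of 10 ms, 160-byte packets thus needs 6 pointer slots and 960 bytes of packet memory.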
- Control circuit 101 h allocates a block of memory 112 h having S(B) bytes and a pointer list 109 h having PS slots for buffering each stream. Control circuit 101 h also initializes buffer control circuits 107 h for the stream. As shown in FIG. 1 h , an input controller 111 h and an output controller 113 h are allocated to the buffer 103 h . Input and output controllers 111 h and 113 h transfer data between the data input 104 h or output device 114 h , respectively, and the buffer memory 112 h . Buffer control 107 h contains all the logic circuits necessary to oversee operation of buffer 103 h and provide updated information to control circuit 101 h.
- Buffer control 107 h maintains a packet pointer for each data packet stored in buffer 103 h .
- Each packet pointer contains the starting address of its respective packet contained in memory 112 h .
- the pointers are stored by buffer control 107 h in pointer list 109 h , which has a fixed number of slots, equal to PS, for storing packet pointers.
- Buffer control 107 h manipulates pointer list 109 h as a shift register with PS slots, numbered 0 through PS-1.
- Slot 0 contains the pointer for the packet that is to be output next. The contents of each slot are shifted into the next adjacent slot toward output slot 0 at the packet rate, namely, every T(r) seconds.
- the buffer delay of a packet is determined by the position of its pointer in the pointer list 109 h . A packet whose pointer is in the third slot will experience a buffer delay of 3*T(r) seconds.
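The shift-register behaviour of the pointer list can be sketched as follows. The class and method names are illustrative only; what matters is that slot 0 is output each tick and every other pointer advances one slot, so a pointer's slot index fixes its buffer delay in units of T(r):

```python
class PointerList:
    """Minimal sketch of pointer list 109 h managed as a shift register:
    PS slots, with slot 0 holding the pointer for the next packet out."""

    def __init__(self, ps: int):
        self.slots = [None] * ps          # slots 0 .. PS-1

    def place(self, slot: int, pointer: int) -> None:
        self.slots[slot] = pointer        # store a packet's start address

    def tick(self):
        """Called every T(r) seconds: output slot 0 and shift every
        remaining pointer one slot toward the output end."""
        out = self.slots[0]
        self.slots = self.slots[1:] + [None]
        return out

# A pointer placed in slot 3 emerges 3 ticks later than one in slot 0,
# i.e., it experiences an extra buffer delay of 3*T(r) seconds.
buf = PointerList(4)
buf.place(3, 0x1000)
ticks = 0
while (p := buf.tick()) is None:
    ticks += 1
print(ticks, hex(p))  # 3 0x1000
```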
- When a packet is received from the network, buffer control circuit 107 h passes a packet pointer, i.e., a starting address for the location in the memory where the packet data will be stored, to input circuit 111 h .
- Input circuit 111 h stores the packet data in memory starting at the pointer address as the data is received from network 104 h.
- the starting address is also stored as a packet pointer in the pointer list 109 h at a slot location determined by the buffer control circuit 107 h .
- the pointers may be placed in the pointer list at slot locations determined by the packet sequence. Thus, if packet i+2 is received after the first packet i, it is placed 2 slots higher in the list than the present location of the pointer for packet i, provided that packet i+2 is not earlier in the sequence than the packet last output by output circuit 113 h .
- the use of packet sequence information to select slot locations allows out-of-order packets to be re-ordered without moving packet data.
- Control circuit 101 h checks the sequence number of each packet being received against the sequence number of the packet last output by output circuit 113 h . If the sequence number of the incoming packet is lower than that of the packet last output by the buffer, the packet being received is discarded because it has arrived too late to be output in sequence. Buffer control 107 h maintains a last-played register to keep track of the last packet output for this purpose.
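Combining the last two paragraphs — slot placement by sequence number and discard of late arrivals against a last-played register — gives something like the following sketch. Names and the exact slot arithmetic are assumptions for illustration:

```python
class SequencedBuffer:
    """Sketch of the sequence handling above: a packet's slot follows
    from its sequence number, and packets whose sequence number is not
    greater than the last-played register are discarded as too late."""

    def __init__(self, ps: int):
        self.ps = ps
        self.slots = [None] * ps
        self.last_played = -1   # last-played register
        self.base_seq = 0       # sequence number expected at slot 0

    def receive(self, seq: int, pointer) -> bool:
        if seq <= self.last_played:
            return False        # arrived too late to be output in order
        slot = seq - self.base_seq
        if 0 <= slot < self.ps:
            self.slots[slot] = pointer  # out-of-order packets land in
        return True                     # their in-sequence slot

    def tick(self):
        """Every T(r) seconds: output slot 0 and shift the list."""
        out = self.slots.pop(0)
        self.slots.append(None)
        if out is not None:
            self.last_played = self.base_seq
        self.base_seq += 1
        return out

b = SequencedBuffer(4)
b.receive(2, "p2"); b.receive(0, "p0"); b.receive(1, "p1")  # out of order
print([b.tick() for _ in range(3)])  # ['p0', 'p1', 'p2']
print(b.receive(1, "late"))          # False: already played
```

Note that the data packets themselves never move: only pointers are placed and shifted, as the text emphasizes.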
- In response to a signal from timer 108 h , buffer control 107 h sends the pointer contents of output slot 0 in the pointer list 109 h to output control 113 h , which then moves the packet data stored at the respective memory location to the output device 114 h . With each signal from timer 108 h , buffer control 107 h also shifts each pointer down one slot in the pointer list, as described above. Normally, timer 108 h is set to generate a signal at the packet rate, i.e., every T(r) seconds, to ensure that the playout rate for packets is the same as the packet rate.
- the packet delay distribution system 102 h provides information to the control circuit 101 h and buffer control 107 h concerning the delay experienced by packets in the network, and may provide feedback to reflect changing network operating characteristics. Control circuit 101 h may also update the buffer characteristics, i.e., buffer size and pointer list, in response to a changing packet delay distribution.
- Drift control 106 h maintains stream synchronization in the presence of such clock drifts. If the receiver clock is slower than the transmitter's, drift control 106 h discards a packet periodically to prevent buffer overflow; if the receiver clock is faster than the transmitter's, drift control circuit 106 h causes a packet to be repeated periodically or outputs a blank or dummy packet so that the output device 114 h always has a packet to process.
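The two-sided drift policy can be reduced to a small decision function. This is a hedged sketch of the policy only — the thresholds and the skew-detection mechanism are assumptions, not the patent's circuit:

```python
def drift_adjust(fill_level: int, ps: int, clock_skew: int) -> str:
    """Sketch of the drift-control policy of 106 h. clock_skew < 0 means
    the receiver clock is slower than the transmitter's (the buffer
    slowly fills); clock_skew > 0 means it is faster (the buffer slowly
    drains). fill_level is the number of occupied packet slots."""
    if clock_skew < 0 and fill_level >= ps - 1:
        return "discard"          # drop a packet to prevent overflow
    if clock_skew > 0 and fill_level == 0:
        return "repeat_or_dummy"  # keep output device 114 h fed
    return "normal"

print(drift_adjust(fill_level=3, ps=4, clock_skew=-1))  # discard
```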
- the present invention can be used to enable the operation of a novel media gateway.
- the hardware system architecture of this novel gateway comprises a plurality of distributed processing layer processors, referred to as Media Engines, that are in communication with a data bus and interconnected with a Host Processor or a Packet Engine which, in turn, is in communication with interfaces to networks, preferably an asynchronous transfer mode (ATM) physical device or gigabit media independent interface (GMII) physical device.
- a data bus 205 a is connected to interfaces 210 a existent on a first novel Media Engine Type I 215 a and on a second novel Media Engine Type I 220 a .
- the first novel Media Engine Type I 215 a and second novel Media Engine Type I 220 a are connected through a second set of communication buses 225 a to a novel Packet Engine 230 a which, in turn, is connected through interfaces 235 a to outputs 240 a , 245 a .
- each of the Media Engines Type I 215 a , 220 a is in communication with a SRAM 246 a and SDRAM 247 a.
- It is preferred that the data bus 205 a be a time-division multiplex (TDM) bus.
- A TDM bus is a pathway for the transmission of a number of separate voice, fax, modem, video, and/or other data signals simultaneously over a single communication medium.
- the separate signals are transmitted by interleaving portions of each signal with one another, thereby enabling one communications channel to handle multiple separate transmissions and avoiding the need to dedicate a separate communication channel to each transmission.
- Existing networks use TDM to transmit data from one communication device to another.
- the interfaces 210 a existent on the first novel Media Engine Type I 215 a and second novel Media Engine Type I 220 a comply with H.100, a hardware specification that details the necessary information to implement a CT bus interface at the physical layer for the PCI computer chassis card slot, independent of software specifications.
- the CT bus defines a single isochronous communications bus across certain PC chassis card slots and allows for the relatively fluid inter-operation of components. It is appreciated that interfaces abiding by different hardware specifications could be used to receive signals from the data bus 205 a.
- each of the two novel Media Engines Type I 215 a , 220 a can support a plurality of channels for processing media, such as voice.
- the specific number of channels supported is dependent upon the features required, such as the extent of echo cancellation, and type of codec supported.
- each Media Engine Type I 215 a , 220 a can support the processing of approximately 256 or more voice channels.
- Each Media Engine Type I 215 a , 220 a is in communication with the Packet Engine 230 a through a communication bus 225 a , preferably a peripheral component interconnect (PCI) communication bus.
- a PCI communication bus serves to deliver control information and data transfers between the Media Engine Type I chip 215 a , 220 a and the Packet Engine chip 230 a . Because Media Engine Type I 215 a , 220 a was designed to support the processing of lower data volumes, relative to Media Engine Type II described below, a single PCI communication bus can effectively support the transfer of both control and data between the designated chips. It is appreciated, however, that where data traffic becomes too great, the PCI communication bus must be supplemented with a second inter-chip communication bus.
- the Packet Engine 230 a is in communication with an ATM physical device 240 a and GMII physical device 245 a .
- the ATM physical device 240 a is capable of receiving processed and packetized data, as passed from the Media Engines Type I 215 a , 220 a through the Packet Engine 230 a , and transmitting it through a network operating on an asynchronous transfer mode (an ATM network).
- an ATM network automatically adjusts the network capacity to meet the system needs and can handle voice, modem, fax, video and other data signals.
- Each ATM data cell, or packet, consists of five octets of header field plus 48 octets of user data.
- the header contains data that identifies the related cell, a logical address that identifies the routing, header error correction bits, plus bits for priority handling and network management functions.
- An ATM network is a wideband, low delay, connection-oriented, packet-like switching and multiplexing network that allows for relatively flexible use of the transmission bandwidth.
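The 5-plus-48 octet cell format described above can be sketched as a small framing helper. The bit layout below follows the standard UNI cell fields (GFC/VPI/VCI/PT/CLP plus HEC), which is an assumption beyond the text; the HEC octet is left as a placeholder rather than computed:

```python
import struct

def make_cell(vpi: int, vci: int, payload: bytes) -> bytes:
    """Frame one 53-octet ATM cell: 5 octets of header + 48 of data."""
    assert len(payload) == 48, "ATM payload is exactly 48 octets"
    word = (vpi & 0xFF) << 20 | (vci & 0xFFFF) << 4  # GFC=0, PT/CLP=0
    header = struct.pack(">IB", word, 0)             # + placeholder HEC
    return header + payload

cell = make_cell(vpi=1, vci=42, payload=bytes(48))
print(len(cell))  # 53
```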
- the GMII physical device 245 a operates under a standard for the receipt and transmission of data at gigabit rates, irrespective of the media type involved.
- FIG. 2 b an embodiment supporting data rates up to OC-3 is shown, referred to herein as an OC-3 Tile 200 b .
- a data bus 205 b is connected to interfaces 210 b existent on a first novel Media Engine Type II 215 b and on a second novel Media Engine Type II 220 b .
- the first novel Media Engine Type II 215 b and second novel Media Engine Type II 220 b are connected through a second set of communication buses 225 b , 227 b to a novel Packet Engine 230 b which, in turn, is connected through interfaces 260 b , 265 b to outputs 240 b , 245 b and through interface 250 b to a Host Processor 255 b.
- It is preferred that the data bus 205 b be a time-division multiplex (TDM) bus and that the interfaces 210 b existent on the first novel Media Engine Type II 215 b and second novel Media Engine Type II 220 b comply with the H.100 hardware specification. It is again appreciated that interfaces abiding by different hardware specifications could be used to receive signals from the data bus 205 b.
- Each of the two novel Media Engines Type II 215 b , 220 b can support a plurality of channels for processing media, such as voice.
- the specific number of channels supported is dependent upon the features required, such as the extent of echo cancellation, and type of codec implemented.
- Where codecs having relatively low processing power requirements, such as G.711, are implemented and the extent of echo cancellation required is 128 milliseconds, each Media Engine Type II can support the processing of approximately 2016 channels of voice. With two Media Engines Type II providing the processing power, this configuration is capable of supporting data rates of OC-3.
- Where the Media Engines Type II 215 b , 220 b implement a codec requiring higher processing power, such as G.729A, the number of supported channels decreases.
- the number of supported channels decreases from 2016 per Media Engine Type II when supporting G.711 to approximately 672 to 1024 channels when supporting G.729A.
- an additional Media Engine Type II can be connected to the Packet Engine 230 b via the common communication buses 225 b , 227 b.
- Each Media Engine Type II 215 b , 220 b is in communication with the Packet Engine 230 b through communication buses 225 b , 227 b , preferably a peripheral component interconnect (PCI) communication bus 225 b and a UTOPIA II/POS II communication bus 227 b .
- Because Media Engine Type II was designed to support the processing of higher data volumes, the PCI communication bus 225 b must be supplemented with a second communication bus 227 b .
- the second communication bus 227 b is a UTOPIA II/POS-II bus and serves as the data path between Media Engines Type II 215 b , 220 b and the Packet Engine 230 b .
- a POS (Packet over SONET) bus represents a high-speed means for transmitting data through a direct connection, allowing the passing of data in its native format without the addition of any significant level of overhead in the form of signaling and control information.
- UTOPIA (Universal Test and Operations PHY Interface for ATM) refers to an electrical interface between the transmission convergence and physical medium dependent sublayers of the physical layer and acts as the interface for devices connecting to an ATM network.
- each packet 300 contains a header 305 with a plurality of information fields and user data 310 .
- each header 305 contains information fields including packet type 315 (e.g., RTP, raw encoded voice, AAL2), packet length 320 (total length of the packet including information fields), and channel identification 325 (identifies the physical channel, namely the TDM slot for which the packet is intended or from which the packet came).
- When dealing with encoded data transfers between a Media Engine Type II 215 b , 220 b and the Packet Engine 230 b , it is further preferred to include coder/decoder type 330 , sequence number 335 , and voice activity detection decision 340 in the header 305 .
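The internal header of FIG. 3 can be sketched with a fixed-layout struct. The field set matches the text (packet type, total length including the header, channel identification, plus codec type, sequence number, and VAD decision for encoded transfers); the field widths chosen here are assumptions for illustration only:

```python
import struct

# type, length, channel, codec, seq, vad — widths are illustrative
HEADER = struct.Struct(">BHHBHB")

def build_packet(ptype: int, channel: int, codec: int, seq: int,
                 vad: int, user_data: bytes) -> bytes:
    """Prepend the internal header; length 320 covers the whole packet."""
    total = HEADER.size + len(user_data)
    return HEADER.pack(ptype, total, channel, codec, seq, vad) + user_data

pkt = build_packet(ptype=1, channel=7, codec=0, seq=100, vad=1,
                   user_data=b"\x00" * 160)  # e.g. 20 ms of G.711
print(HEADER.unpack(pkt[:HEADER.size]))  # (1, 169, 7, 0, 100, 1)
```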
- the Packet Engine 230 b is in communication with the Host Processor 255 b through a PCI target interface 250 b .
- the Packet Engine 230 b preferably includes a PCI to PCI bridge [not shown] between the PCI interface 226 b to the PCI communication bus 225 b and the PCI target interface 250 b .
- the PCI to PCI bridge serves as a link for communicating messages between the Host Processor 255 b and two Media Engines Type II 215 b , 220 b.
- the novel Packet Engine 230 b receives processed data from each of the two Media Engines Type II 215 b , 220 b via the communication buses 225 b , 227 b . While theoretically able to connect to a plurality of Media Engines Type II, it is preferred that the Packet Engine 230 b be in communication with no more than three Media Engines Type II 215 b , 220 b [only two are shown in FIG. 2 b ].
- Packet Engine 230 b provides cell and packet encapsulation for data channels, up to 2048 channels when implementing a G.711 codec, quality of service functions for traffic management, tagging for differentiated services and multi-protocol label switching, and the ability to bridge cell and packet networks.
- the Packet Engine 230 b is in communication with an ATM physical device 240 b and GMII physical device 245 b through a UTOPIA II/POS II compatible interface 260 b and GMII compatible interface respectively 265 b .
- the Packet Engine 230 b also preferably has another GMII interface [not shown] in the MAC layer of the network, referred to herein as the MAC GMII interface.
- MAC is a media specific access control protocol defining the lower half of the data link layer that defines topology dependent access control protocols for industry standard local area network specifications.
- the Packet Engine 230 b is designed to enable ATM-IP internetworking. Telecommunication service providers have built independent networks operating on an ATM or IP protocol basis. Enabling ATM-IP internetworking permits service providers to support the delivery of substantially all digital services across a single networking infrastructure, thereby reducing the complexities introduced by having multiple technologies/protocols operative throughout a service provider's entire network.
- the Packet Engine 230 b is therefore designed to enable a common network infrastructure by providing for the internetworking between ATM modes and IP modes.
- the novel Packet Engine 230 b supports the internetworking of ATM AALs (ATM Adaptation Layers) to specific IP protocols.
- AAL accomplishes conversion from the higher layer, native data format and service specifications into the ATM layer.
- the process includes segmentation of the original and larger set of data into the size and format of an ATM cell, which comprises 48 octets of data payload and 5 octets of overhead.
- the AAL accomplishes reassembly of the data.
- TCP is a transport layer, connection oriented, end-to-end protocol that provides relatively reliable, sequenced, and unduplicated delivery of bytes to a remote or a local user.
- Referring to FIG. 2, it is preferred that ATM AAL-1 be internetworked with the Realtime Transport Protocol (RTP), User Datagram Protocol (UDP), and Internet Protocol (IP), that AAL-2 be internetworked with the UDP and IP protocols, and that AAL-5 be internetworked with the UDP and IP protocols or the TCP and IP protocols.
- OC-3 tiles as presented in FIG. 2 b , can be interconnected to form a tile supporting higher data rates.
- four OC-3 tiles 405 can be interconnected, or “daisy chained”, together to form an OC-12 tile 400 .
- Daisy chaining is a method of connecting devices in a series such that signals are passed through the chain from one device to the next. By enabling daisy chaining, the present invention provides for currently unavailable levels of scalability in data volume support and hardware implementation.
- a Host Processor 455 is connected via communication buses 425 , preferably PCI communication buses, to the PCI interface 435 on each of the OC-3 tiles 405 .
- Enabling daisy chaining eliminates the need for an external aggregator to interface the GMII interfaces on each of the OC-3 tiles in order to enable integration.
- the final OC-3 tile 405 is in communication with a GMII physical device 417 via the PHY GMII interface 410 .
- the logical system of FIG. 5 can be physically deployed in a number of ways, depending on processing needs, due, in part, to the novel software architecture, to be described below.
- one physical embodiment of the software system described in FIG. 5 is to be on a single chip 600 , where the media processing block 610 , packetization block 620 , and management block 630 are all operative on the same chip. If processing needs increase, thereby requiring more chip power be dedicated to media processing, the software system can be physically implemented such that the media processing block 710 and packetization block 720 operate on a DSP 715 that is in communication via a data bus 770 with the management block 730 that operates on a separate host processor 735 , as depicted in FIG. 7.
- the media processing block 810 and packetization block 820 can be implemented on separate DSPs 860 , 865 and communicate via data buses 870 with each other and with the management block 830 that operates on a separate host processor 835 , as depicted in FIG. 8.
- the modules can be physically separated onto different processors to enable for a high degree of system scalability.
- each OC-3 tile is configured to perform media processing and packetization tasks.
- the IC card has four OC-3 tiles in communication via data buses.
- the OC-3 tiles each have three Media Engine II processors in communication via interchip communication buses with a Packet Engine processor.
- the Packet Engine processor has a MAC and PHY interface by which communications external to the OC-3 tiles are performed.
- the PHY interface of the first OC-3 tile is in communication with the MAC interface of the second OC-3 tile.
- each Media Engine II processor implements the Media Processing Subsystem of the present invention, shown in FIG. 5 as 505 .
- Each Packet Engine processor implements the Packetization Subsystem of the present invention, shown in FIG. 5 as 540 .
- the host processor implements the Management Subsystem, shown in FIG. 5 as 570 .
- Both Media Engine I and Media Engine II are types of DPLPs and therefore comprise a layered architecture wherein each layer encodes and decodes up to N channels of voice, fax, modem, or other data depending on the layer configuration.
- Each layer implements a set of pipelined processing units specially designed through substantially optimal hardware and software partitioning to perform specific media processing functions.
- the processing units are special-purpose digital signal processors that are each optimized to perform a particular signal processing function or a class of functions.
- Media Engine I 900 comprises a plurality of Media Layers 905 , each in communication with a central direct memory access (DMA) controller 910 via communication data buses 920 .
- Each Media Layer 905 further comprises an interface to the DMA 925 interconnected with the communication data buses 920 .
- the DMA interface 925 is in communication with each of a plurality of pipelined processing units (PUs) 930 via communication data buses 920 and a plurality of program and data memories 940 , via communication data buses 920 , that are situated between the DMA interface 925 and each of the PUs 930 .
- the program and data memories 940 are also in communication with each of the PUs 930 via data buses 920 .
- each PU 930 can access at least one program memory and at least one data memory unit 940 .
- While the layered architecture of the present invention is not limited to a specific number of Media Layers, certain practical limitations may restrict the number of Media Layers that can be stacked into a single Media Engine I. As the number of Media Layers increases, the memory and device input/output bandwidth may increase to such an extent that the memory requirements, pin count, density, and power consumption are adversely affected and become incompatible with application or economic requirements. Those practical limitations, however, do not represent restrictions on the scope and substance of the present invention.
- Media Layers 905 are in communication with an interface to the central processing unit 950 (CPU IF) through communication buses 920 .
- the CPU IF 950 transmits and receives control signals and data from an external scheduler 955 , the DMA controller 910 , a PCI interface (PCI IF) 960 , a SRAM interface (SRAM IF) 975 , and an interface to an external memory, such as an SDRAM interface (SDRAM IF) 970 through communication buses 920 .
- the PCI IF 960 is preferably used for control signals.
- the SDRAM IF 970 connects to a synchronous dynamic random access memory module whereby the memory access cycles are synchronized with the CPU clock in order to eliminate the wait time associated with memory fetches between random access memory (RAM) and the CPU.
- the SDRAM IF 970 that connects the processor with the SDRAM supports 133 MHz synchronous DRAM and asynchronous memory. It supports one bank of SDRAM (64 Mbit/256 Mbit devices, to a 256 MB maximum) and 4 asynchronous devices (8/16/32 bit) with a 32-bit data path, handles both fixed-length and undefined-length block transfers, and accommodates back-to-back transfers. Eight transactions may be queued for operation.
- the SDRAM [not shown] contains the states of the PUs 930 .
- One of ordinary skill in the art would appreciate that, although not preferred, other external memory configurations and types could be selected in place of the SDRAM and, therefore, that another type of memory interface could be used in place of the SDRAM IF 970 .
- the SDRAM IF 970 is further in communication with the PCI IF 960 , DMA controller 910 , the CPU IF 950 , and, preferably, the SRAM interface (SRAM IF) 975 through communication buses 920 .
- the SRAM [not shown] is a static random access memory, a form of random access memory that retains data without constant refreshing and thereby offers relatively fast memory access.
- the SRAM IF 975 is also in communication with a TDM interface (TDM IF) 980 , the CPU IF 950 , the DMA controller 910 , and the PCI IF 960 via data buses 920 .
- the TDM IF 980 for the trunk side is preferably H.100/H.110 compatible, and the TDM bus 981 operates at 8.192 MHz, enabling the Media Engine I 900 to provide 8 data signals and therefore a capacity of up to 512 full duplex channels. The TDM IF 980 has the following preferred features: it is an H.100/H.110 compatible slave; the frame size can be set to 16 or 20 samples, with the scheduler able to program the TDM IF 980 to store a specific buffer or frame size; and it offers programmable staggering points for the maximum number of channels.
- the TDM IF interrupts the scheduler after every N samples of the 8,000 Hz clock, with the number N being programmable to values of 2, 4, 6, and 8.
- the TDM IF 980 preferably does not transfer the pulse code modulation (PCM) data to memory on a sample-by-sample basis, but rather buffers 16 or 20 samples, depending on the frame size which the encoders and decoders are using, of a channel and then transfers the voice data for that channel to memory.
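The frame-at-a-time transfer described above can be sketched as a per-channel accumulator: samples are gathered until a full 16- or 20-sample frame exists, and only then does one transfer to memory occur. The class and attribute names are illustrative, not the patent's:

```python
class TdmChannelBuffer:
    """Sketch of the TDM IF buffering: PCM samples accumulate until a
    full frame (16 or 20 samples, matching the codec frame size) is
    ready, and only then is the frame transferred to memory."""

    def __init__(self, frame_size: int = 16):
        assert frame_size in (16, 20)
        self.frame_size = frame_size
        self.pending = []
        self.transfers = []             # frames handed off to memory

    def on_sample(self, sample: int) -> None:
        self.pending.append(sample)
        if len(self.pending) == self.frame_size:
            self.transfers.append(list(self.pending))  # one memory write
            self.pending.clear()

tdm = TdmChannelBuffer(frame_size=16)
for s in range(32):          # 32 samples -> two 16-sample transfers
    tdm.on_sample(s)
print(len(tdm.transfers))    # 2
```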
- the PCI IF 960 is also in communication with the DMA controller 910 via communication buses 920 .
- External connections comprise connections between the TDM IF 980 and a TDM bus 981 , between the SRAM IF 975 and a SRAM bus 976 , between the SDRAM IF 970 and a SDRAM bus 971 , preferably operating at 32 bit@133 MHz, and between the PCI IF 960 and a PCI 2.1 Bus 961 also preferably operating at 32 bit@133 MHz.
- the scheduler 955 maps the channels to the Media Layers 905 for processing. When the scheduler 955 is processing a new channel, it assigns the channel to one of the layers, depending upon processing resources available per layer 905 . Each layer 905 handles the processing of a plurality of channels such that the processing is performed in parallel and is divided into fixed frames, or portions of data.
- the scheduler 955 communicates with each Media Layer 905 through the transmission of data, in the form of tasks, to the FIFO task queues wherein each task is a request to the Media Layer 905 to process a plurality of data portions for a particular channel.
- It is therefore preferred for the scheduler 955 to initiate the processing of data from a channel by putting a task in a task queue, rather than by programming each PU 930 individually. More specifically, it is preferred to have the scheduler 955 initiate the processing of data from a channel by putting a task in the task queue of a particular PU 930 and having the Media Layer's 905 pipeline architecture manage the data flow to subsequent PUs 930 .
- the scheduler 955 should manage the rate by which each of the channels is processed. In an embodiment where the Media Layer 905 is required to accept the processing of data from M channels and each of the channels uses a frame size of T msec, then it is preferred that the scheduler 955 processes one frame of each of the M channels within each T msec interval. Further, in a preferred embodiment, the scheduling is based upon periodic interrupts, in the form of units of samples, from the TDM IF 980 . As an example, if the interrupt period is 2 samples then it is preferred that the TDM IF 980 interrupts the scheduler every time it gathers two new samples of all channels.
- the scheduler preferably maintains a ‘tick-count’, which is incremented on every interrupt and reset to 0 when time equal to a frame size has passed.
- the mapping of channels to time slots is preferably not fixed. For example, in voice applications, whenever a call starts on a channel, the scheduler dynamically assigns a layer to a provisioned time slot channel. It is further preferred that the data transfer from a TDM buffer to the memory is aligned with the time slot in which this data is processed, thereby staggering the data transfer for different channels from TDM to memory, and vice-versa, in a manner that is equivalent to the staggering of the processing of different channels.
- the TDM IF 980 maintains a tick count variable wherein there is some synchronization between the tick counts of TDM and scheduler 955 .
- the tick count variable is reset to zero every 2 ms or 2.5 ms, depending on the buffer size.
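The tick-count scheme above can be sketched as a small counter: with an interrupt every N samples of the 8 kHz clock, the count wraps once a full frame (16 samples = 2 ms, or 20 samples = 2.5 ms) has elapsed, at which point one frame of each active channel is due for processing. Class and attribute names are illustrative:

```python
class Scheduler:
    """Sketch of the tick-count scheduling: increment on every TDM IF
    interrupt, reset after one frame's worth of interrupts."""

    def __init__(self, frame_samples: int = 16, interrupt_samples: int = 2):
        self.ticks_per_frame = frame_samples // interrupt_samples
        self.tick_count = 0
        self.frames_scheduled = 0

    def on_interrupt(self) -> None:
        self.tick_count += 1
        if self.tick_count == self.ticks_per_frame:
            self.tick_count = 0
            # one frame of each of the M channels is scheduled per wrap
            self.frames_scheduled += 1

sched = Scheduler(frame_samples=16, interrupt_samples=2)  # wraps every 8
for _ in range(24):            # 24 interrupts = 48 samples = 3 frames
    sched.on_interrupt()
print(sched.frames_scheduled, sched.tick_count)  # 3 0
```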
- Media Engine II 1000 comprises a plurality of Media Layers 1005 each in communication with processing layer controller 1007 , referred to herein as a Media Layer Controller 1007 , and central direct memory access (DMA) controller 1010 via communication data buses and an interface 1015 .
- Each Media Layer 1005 is in communication with a CPU interface 1006 which, in turn, is in communication with a CPU 1004 .
- a plurality of pipelined processing units (PUs) 1030 are in communication with a plurality of program memories 1035 and data memories 1040 , via communication data buses.
- each PU 1030 can access at least one program memory 1035 and one data memory 1040 .
- each of the PUs 1030 , program memories 1035 , and data memories 1040 is in communication with an external memory 1047 via the Media Layer Controller 1007 and DMA 1010 .
- each Media Layer 1005 comprises four PUs 1030 , each of which is in communication with a single program memory 1035 and data memory 1040 , wherein each of the PUs 1031 , 1032 , 1033 , 1034 is in communication with each of the other PUs 1031 , 1032 , 1033 , 1034 in the Media Layer 1005 .
- a program memory 1005 a , preferably 512×64, operates in conjunction with a controller 1010 a and data memory 1015 a to deliver data and instructions to a data register file 1017 a , preferably 16×32, and an address register file 1020 a , preferably 4×12.
- the data register file 1017 a and address register file 1020 a are in communication with functional units such as an adder/MAC 1025 a , logical unit 1027 a , and barrel shifter 1030 a and with units such as a request arbitration logic unit 1033 a and DMA channel bank 1035 a.
- the MLC 1007 arbitrates data and program code transfer requests to and from the program memories 1035 and data memories 1040 in a round robin fashion. On the basis of this arbitration the MLC 1007 fills the data pathways that define how units directly access memory, namely the DMA channels [not shown].
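Round-robin arbitration of the kind the MLC performs can be sketched as follows: one grant per requester per pass, skipping empty queues, so no PU's requests can starve another's. This is an illustrative model of the policy only, not the MLC's circuit:

```python
from collections import deque

def round_robin_arbiter(request_queues: list) -> list:
    """Grant requests across PU queues in round-robin order."""
    grants = []
    while any(request_queues):
        for q in request_queues:
            if q:
                grants.append(q.popleft())
    return grants

queues = [deque(["pu0-r0", "pu0-r1"]),   # PU 0 has two pending requests
          deque(["pu1-r0"]),
          deque(["pu2-r0", "pu2-r1"])]
print(round_robin_arbiter(queues))
# ['pu0-r0', 'pu1-r0', 'pu2-r0', 'pu0-r1', 'pu2-r1']
```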
- the MLC 1007 is capable of performing instruction decoding to route an instruction according to its dataflow and keep track of the request states for all PUs 1030 , such as the state of a read-in request, a write-back request and an instruction forwarding.
- the MLC 1007 is further capable of conducting interface related functions, such as programming DMA channels, starting signal generation, maintaining page states for PUs 1030 in each Media Layer 1005 , decoding of scheduler instructions, and managing the movement of data from and into the task queues of each PU 1030 .
- the DMA controller 1010 is a multi-channel DMA unit for handling the data transfers between the local memory buffer PUs and external memories, such as the SDRAM.
- DMA channels are programmed dynamically.
- PUs 1030 generate independent requests, each having an associated priority level, and send them to the MLC 1007 for reading or writing.
- the MLC 1007 programs the DMA channel accordingly.
- the DMA Controller 1010 provides hardware support for round robin request arbitration across the PUs 1030 and Media Layers 1005 .
- a DMA channel is generated and receives this information from two 32-bit registers residing in the DMA.
- a third register exchanges control information between the DMA and each PU and contains the current status of the DMA transfer.
- arbitration is performed among the following requests: 1 structure read, 4 data read and 4 data write requests from each Media Layer, approximately 90 data requests in total, and 4 program code fetch requests from each Media Layer, approximately 40 program code fetch requests in total.
- the DMA Controller 1010 is preferably further capable of arbitrating priority for program code fetch requests, conducting link list traversal and DMA channel information generation, and performing DMA channel prefetch and done signal generation.
- the MLC 1007 and DMA Controller 1010 are in communication with a CPU IF 1006 through communication buses.
- the PCI IF 1060 is in communication with an external memory interface (such as a SDRAM IF) 1070 and with the CPU IF 1006 via communication buses.
- the external memory interface 1070 is further in communication with the MLC 1007 and DMA Controller 1010 and a TDM IF 1080 through communication buses.
- the SDRAM IF 1070 is in communication with a packet processor interface, such as a UTOPIA II/POS compatible interface (U2/POS IF), 1090 via communication data buses.
- U2/POS IF 1090 is also preferably in communication with the CPU IF 1006 .
- the TDM IF 1080 have all 32 serial data signals implemented, thereby supporting at least 2048 full duplex channels.
- External connections comprise connections between the TDM IF 1080 and a TDM bus 1081 , between the external memory 1070 and a memory bus 1071 , preferably operating at 64 bit@133 MHz, between the PCI IF 1060 and a PCI 2.1 Bus 1061 also preferably operating at 32 bit@133 MHz, and between the U2/POS IF 1090 and a UTOPIA II/POS connection 1091 preferably operative at 622 megabits per second.
- the TDM IF 1080 for the trunk side is preferably H.100/H.110 compatible and the TDM bus 1081 operates at 8.192 MHz, as previously discussed in relation to the Media Engine I.
- the present invention utilizes a plurality of pipelined PUs specially designed for conducting a defined set of processing tasks.
- the PUs are not general purpose processors and cannot be used to conduct any processing task.
- a survey and analysis of specific processing tasks yielded certain functional unit commonalities that, when combined, yield a specialized PU capable of optimally processing the universe of those specialized processing tasks.
- the instruction set architecture of each PU yields compact code. Increased code density results in a decrease in required memory and, consequently, a decrease in required area, power, and memory traffic.
- the pipeline architecture also improves performance.
- Pipelining is an implementation technique whereby multiple instructions are overlapped in execution.
- each step in the pipeline completes a part of an instruction.
- different steps are completing different parts of different instructions in parallel.
- Each of these steps is called a pipe stage or a data segment.
- the stages are connected one to the next to form a pipe.
- instructions enter the pipe at one end, progress through the stages, and exit at the other end.
- the throughput of an instruction pipeline is determined by how often an instruction exits the pipeline.
- one type of PU (referred to herein as the EC PU) has been specially designed to perform, in a pipeline architecture, a plurality of media processing functions, such as echo cancellation (EC), voice activity detection (VAD), and tone signaling (TS) functions.
- Echo cancellation removes from a signal echoes that may arise as a result of the reflection and/or retransmission of modified input signals back to the originator of the input signals.
- echoes occur when signals that were emitted from a loudspeaker are then received and retransmitted through a microphone (acoustic echo) or when reflections of a far end signal are generated in the course of transmission along hybrid wire connections (line echo).
- Tone signaling comprises the processing of supervisory, address, and alerting signals over a circuit or network by means of tones.
- Supervising signals monitor the status of a line or circuit to determine if it is busy, idle, or requesting service.
- Alerting signals indicate the arrival of an incoming call.
- Addressing signals comprise routing and destination information.
- the LEC, VAD, and TS functions can be efficiently executed using a PU having several single-cycle multiply and accumulate (MAC) units operating with an Address Generation Unit and an Instruction Decoder.
- Each MAC unit includes a compressor, sum and carry registers, an adder, and a saturation and rounding logic unit.
- this PU 1100 comprises a load store architecture with a single Address Generation Unit (AGU) 1105 , supporting zero over-head looping and branching with delay slots, and an Instruction Decoder 1106 .
- the plurality of MAC units 1110 operate in parallel, each performing a multiply-accumulate function on two 16-bit operands.
- Guard bits are appended with sum and carry registers to facilitate repeated MAC operations.
- a scale unit prevents accumulator overflow.
- Each MAC unit 1110 may be programmed to perform round operations automatically. Additionally, it is preferred to have an addition/subtraction unit [not shown] implemented as a conditional sum adder with both input operands being 20-bit values and the output operand being a 16-bit value.
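The guard-bit accumulation and saturation behavior described above can be sketched as follows. This is an illustrative software model, not the MAC hardware; the 40-bit accumulator width and function names are assumptions.

```c
#include <stdint.h>

/* Guarded multiply-accumulate sketch: two 16-bit operands are
 * multiplied into a 32-bit product and accumulated in a 40-bit
 * accumulator (modeled here in 64 bits). The guard bits absorb
 * overflow across repeated MACs; saturation to 32 bits is applied
 * only when the accumulator is read out. */
typedef int64_t acc40_t;

acc40_t mac(acc40_t acc, int16_t a, int16_t b)
{
    return acc + (int32_t)a * b;   /* exact 32-bit product, guarded sum */
}

int32_t acc_saturate32(acc40_t acc)
{
    if (acc > INT32_MAX) return INT32_MAX;  /* clamp on overflow */
    if (acc < INT32_MIN) return INT32_MIN;
    return (int32_t)acc;
}
```

Without the guard bits, a long run of full-scale products would wrap the accumulator mid-loop; with them, only the final read-out saturates.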
- the EC PU performs tasks in a pipeline fashion.
- a first pipeline stage comprises an instruction fetch wherein instructions are fetched into an instruction register from program memory.
- a second pipeline stage comprises an instruction decode and operand fetch wherein an instruction is decoded and stored in a decode register.
- the hardware loop machine is initialized in this cycle. Operands from the data register files are stored in operand registers.
- the AGU operates during this cycle. The address is placed on the data memory address bus. In the case of a store operation, data is also placed on the data memory data bus. For post-increment or decrement instructions, the address is incremented or decremented after being placed on the address bus. The result is written back to the address register file.
- the third pipeline stage comprises the operation on the fetched operands by the Addition/Subtraction Unit and MAC units.
- the status register is updated and the computed result or data loaded from memory is stored in the data/address register files.
- the states and history information required for the EC PU operations are fetched through a multi-channel DMA interface, as previously shown in each Media Layer.
- the EC PU configures the DMA controller registers directly.
- the EC PU loads the DMA chain pointer with the memory location of the head of the chain link.
- the EC PU reduces wait time for processing incoming media, such as voice.
- an instruction fetch task (IF) is performed for processing data from channel 1 1250 .
- the IF task is performed for processing data from channel 2 1255 while, concurrently, an instruction decode and operand fetch (IDOF) is performed for processing data from channel 1 1250 .
- an IF task is performed for processing data from channel 3 1260 while, concurrently, an instruction decode and operand fetch (IDOF) is performed for processing data from channel 2 1255 and an Execute (EX) task is performed for processing data from channel 1 1250 .
- because channels are dynamically generated, the channel numbering may not reflect the actual location and assignment of a task. Channel numbering here is used simply to indicate the concept of pipelining across multiple channels and not to represent actual task locations.
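The per-channel overlap of the IF, IDOF, and EX stages described above can be expressed as a simple schedule function. This is a conceptual model only; the function name and 1-based numbering are illustrative assumptions.

```c
/* Three-stage pipeline schedule (stage 1 = IF, 2 = IDOF, 3 = EX)
 * across channels: returns which channel occupies a given stage in a
 * given cycle, or 0 if that stage is idle. Channel ch enters IF at
 * cycle ch, so stage s holds channel (cycle - s + 1). */
int stage_channel(int cycle, int stage, int n_channels)
{
    int ch = cycle - stage + 1;
    return (ch >= 1 && ch <= n_channels) ? ch : 0;
}
```

At cycle 3, for example, channel 3 is in IF, channel 2 in IDOF, and channel 1 in EX, matching the progression described above.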
- a second type of PU (referred to herein as CODEC PU) has been specially designed to perform, in a pipeline architecture, a plurality of media processing functions, such as encoding and decoding signals in accordance with certain standards and protocols, including standards promoted by the International Telecommunication Union (ITU) such as voice standards, including G.711, G.723.1, G.726, G.728, G.729A/B/E, and data modem standards, including V.17, V.34, and V.90, among others (referred to herein as Codecs), and performing comfort noise generation (CNG) and discontinuous transmission (DTX) functions.
- the various Codecs are used to encode and decode voice signals with differing degrees of complexity
- the Codecs, CNG, and DTX functions can be efficiently executed using a PU having an Arithmetic and Logic Unit (ALU), MAC unit, Barrel Shifter, and Normalization Unit.
- in a preferred embodiment, shown in FIG. 13, the CODEC PU 1300 comprises a load store architecture with a single Address Generation Unit (AGU) 1305 , supporting zero over-head looping and zero overhead branching with delay slots, and an Instruction Decoder 1306 .
- each MAC unit 1310 includes a compressor, sum and carry registers, an adder, and a saturation and rounding logic unit.
- the MAC unit 1310 is implemented as a compressor with feedback into the compression tree for accumulation.
- One preferred embodiment of a MAC 1310 has a latency of approximately 2 cycles with a throughput of 1 cycle.
- the MAC 1310 operates on two 17-bit operands, signed or unsigned. The intermediate results are kept in sum and carry registers. Guard bits are appended to the sum and carry registers for repeated MAC operations.
- the saturation logic converts the Sum and Carry results to 32 bit values.
- the rounding logic rounds a 32 bit to a 16 bit number. Division logic is also implemented in the MAC unit 1310 .
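The rounding of a 32-bit value to a 16-bit number mentioned above can be sketched in the ITU-T basic-operator style. Treating the rounding as round-half-up with saturation is an assumption about the logic described here.

```c
#include <stdint.h>

/* Round a 32-bit fixed-point value to 16 bits: add 0x8000 to bias
 * toward the nearest value, saturate the biased sum, and keep the
 * high half. */
int16_t round32_to16(int32_t x)
{
    int64_t biased = (int64_t)x + 0x8000;
    if (biased > INT32_MAX) biased = INT32_MAX; /* saturate on overflow */
    return (int16_t)(biased >> 16);
}
```

This mirrors the `round` basic operator used throughout the ITU-T voice codec reference implementations.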
- the ALU 1320 includes a 32 bit adder and a 32 bit logic circuit capable of performing a plurality of operations, including add, add with carry, subtract, subtract with borrow, negate, AND, OR, XOR, and NOT.
- One of the inputs to the ALU 1320 has an XOR array, which operates on 32-bit operands.
- the ALU's 1320 absolute unit drives this array.
- the input operand is XORed with either all ones or all zeros to perform conditional negation of the input operands.
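The XOR-array negation described above relies on a standard two's-complement identity, sketched below. The function name and mask encoding are illustrative assumptions.

```c
#include <stdint.h>

/* Conditional negation through an XOR array: XORing with all ones and
 * adding 1 negates a two's-complement value; XORing with all zeros
 * passes it through. A single mask thus selects between x and -x
 * without a separate negate unit. */
int32_t cond_negate(int32_t x, int negate)
{
    int32_t mask = negate ? -1 : 0;        /* all ones or all zeros */
    return (x ^ mask) + (negate ? 1 : 0);  /* ~x + 1 == -x */
}
```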
- the Barrel Shifter 1330 is placed in series with the ALU 1320 and acts as a pre-shifter to operands requiring a shift operation followed by any ALU operations.
- One type of preferred Barrel Shifter can perform a maximum of 9-bit left or 26-bit right arithmetic shifts on 16-bit or 32-bit operands.
- the output of the Barrel Shifter is a 32-bit value, which is accessible to both the inputs of the ALU 1320 .
- the Normalization unit 1340 counts the redundant sign bits in the number. It operates on 2's complement 16-bit numbers. Negative numbers are inverted to compute the redundant sign bits. The number to be normalized is fed into the XOR array. The other input comes from the sign bit of the number. Where the media being processed is voice, it is preferred to have an interface to the EC PU.
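The redundant-sign-bit count performed by the Normalization unit can be sketched as below. Modeling it on the ITU-T `norm_s` basic operator, and returning 0 for a zero input, are assumptions about the exact convention.

```c
#include <stdint.h>

/* Count the redundant sign bits of a 16-bit two's-complement value:
 * XOR the number with its sign (inverting negative numbers, as
 * described above), then count how many positions the result must be
 * shifted left before bit 14 is set. */
int norm16(int16_t x)
{
    if (x == 0) return 0;                    /* convention assumed here */
    uint16_t v = (uint16_t)(x ^ (x >> 15));  /* invert if negative */
    if (v == 0) return 15;                   /* x == -1: all redundant */
    int n = 0;
    while ((v & 0x4000) == 0) {              /* scan below the sign bit */
        v <<= 1;
        n++;
    }
    return n;
}
```

Codecs use this count to left-justify mantissas before fixed-point multiplies, preserving precision.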
- the EC PU uses VAD to determine whether a frame being received comprises silence or speech. The VAD decision is preferably communicated to the CODEC PU so that it may determine whether to implement a Codec or DTX function.
- the CODEC PU performs tasks in a pipeline fashion.
- a first pipeline stage comprises an instruction fetch wherein instructions are fetched into an instruction register from program memory. At the same time, the next program counter value is computed and stored in the program counter. In addition, loop and branch decisions are taken in the same cycle.
- a second pipeline stage comprises an instruction decode and operand fetch wherein an instruction is decoded and stored in a decode register. The instruction decode, register read and branch decisions happen in the instruction decode stage.
- in the Execute 1 stage, the Barrel Shifter and the MAC compressor tree complete their computation. Addresses to data memory are also applied in this stage.
- in the Execute 2 stage, the ALU, normalization unit, and the MAC adder complete their computation.
- Register write-back and address registers are updated at the end of the Execute- 2 stage.
- the states and history information required for the CODEC PU operations are fetched through a multi-channel DMA interface, as previously shown in each Media Layer.
- the CODEC PU reduces wait time for processing incoming media, such as voice.
- an instruction fetch task (IF) is performed for processing data from channel 1 1350 a .
- the IF task is performed for processing data from channel 2 1355 a while, concurrently, an instruction decode and operand fetch (IDOF) is performed for processing data from channel 1 1350 a .
- an IF task is performed for processing data from channel 3 1360 a while, concurrently, an instruction decode and operand fetch (IDOF) is performed for processing data from channel 2 1355 a and an Execute 1 (EX 1 ) task is performed for processing data from channel 1 1350 a .
- an IF task is performed for processing data from channel 4 1370 a while, concurrently, an instruction decode and operand fetch (IDOF) is performed for processing data from channel 3 1360 a , an Execute 1 (EX 1 ) task is performed for processing data from channel 2 1355 a , and an Execute 2 (EX 2 ) task is performed for processing data from channel 1 1350 a .
- the channel numbering may not reflect the actual location and assignment of a task. Channel numbering here is used to simply indicate the concept of pipelining across multiple channels and not to represent actual task locations.
- the pipeline architecture of the present invention is not limited to instruction processing within PUs, but also exists on a PU to PU architecture level. As shown in FIG. 13 b , multiple PUs may operate on a data set N in a pipeline fashion to complete the processing of a plurality of tasks where each task comprises a plurality of steps.
- a first PU 1305 b may be capable of performing echo cancellation functions, labeled task A.
- a second PU 1310 b may be capable of performing tone signaling functions, labeled task B.
- a third PU 1315 b may be capable of performing a first set of encoding functions, labeled task C.
- a fourth PU 1320 b may be capable of performing a second set of encoding functions, labeled task D.
- time slot 1 1350 b the first PU 1305 b performs task A 1 1380 b on data set N.
- time slot 2 1355 b the first PU 1305 b performs task A 2 1381 b on data set N and the second PU 1310 b performs task B 1 1387 b on data set N.
- time slot 3 1360 b the first PU 1305 b performs task A 3 1382 b on data set N, the second PU 1310 b performs task B 2 1388 b on data set N, and the third PU 1315 b performs task C 1 1394 b on data set N.
- in a fourth time slot, the first PU 1305 b performs task A 4 1383 b on data set N, the second PU 1310 b performs task B 3 1389 b on data set N, the third PU 1315 b performs task C 2 1395 b on data set N, and the fourth PU 1320 b performs task D 1 1330 on data set N.
- in a fifth time slot, the first PU 1305 b performs task A 5 1384 b on data set N, the second PU 1310 b performs task B 4 1390 b on data set N, the third PU 1315 b performs task C 3 1396 b on data set N, and the fourth PU 1320 b performs task D 2 1331 on data set N.
- in a sixth time slot, the first PU 1305 b performs task A 6 1385 b on data set N, the second PU 1310 b performs task B 5 1391 b on data set N, the third PU 1315 b performs task C 4 1397 b on data set N, and the fourth PU 1320 b performs task D 3 1332 on data set N.
- One of ordinary skill in the art would appreciate how the pipeline processing would further progress.
- the combination of specialized PUs with a pipeline architecture enables the processing of a greater number of channels on a single media layer.
- each channel implements a G.711 codec and 128 ms of echo tail cancellation with DTMF detection/generation, voice activity detection (VAD), comfort noise generation (CNG), and call discrimination
- the media engine layer operates at 1.95 MHz per channel.
- the resulting channel power consumption is at or about 6 mW per channel using 0.13 μm standard cell technology.
- the Packet Engine of the present invention is a communications processor that, in a preferred embodiment, supports the plurality of interfaces and protocols used in media gateway processing systems between circuit-switched networks, packet-based IP networks, and cell-based ATM networks.
- the Packet Engine comprises a unique architecture capable of providing a plurality of functions for enabling media processing, including, but not limited to, cell and packet encapsulation, quality of service functions for traffic management and tagging for the delivery of other services and multi-protocol label switching, and the ability to bridge cell and packet networks.
- the Packet Engine 1400 is configured to handle data rates up to and around OC-12. It is appreciated by one of ordinary skill in the art that certain modifications can be made to the fundamental architecture to increase the data handling rates beyond OC-12.
- the Packet Engine 1400 comprises a plurality of processors 1405 , a host processor 1430 , an ATM engine 1440 , in-bound DMA channel 1450 , out-bound DMA channel 1455 , a plurality of network interfaces 1460 , a plurality of registers 1470 , memory 1480 , an interface to external memory 1490 , and a means to receive control and signaling information 1495 .
- the processors 1405 comprise an internal cache 1407 , central processing unit interface 1409 , and data memory 1411 .
- the processors 1405 comprise 32-bit reduced instruction set computing (RISC) processors with a 16 Kb instruction cache and a 12 Kb local memory.
- the central processing unit interface 1409 permits the processor 1405 to communicate with other memories internal to, and external to, the Packet Engine 1400 .
- the processors 1405 are preferably capable of handling both in-bound and out-bound communication traffic. In a preferred implementation, generally half of the processors handle in-bound traffic while the other half handle out-bound traffic.
- the memory 1411 in the processor 1405 is preferably divided into a plurality of banks such that distinct elements of the Packet Engine 1400 can access the memory 1411 independently and without contention, thereby increasing overall throughput.
- the memory is divided into three banks, such that the in-bound DMA channel can write to memory bank one, while the processor is processing data from memory bank two, while the out-bound DMA channel is transferring processed packets from memory bank three.
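The contention-free three-bank arrangement described above amounts to a rotation of bank ownership among the three agents. The scheme below is a hypothetical model of that rotation; the enum names and period-based rotation are assumptions.

```c
/* Three-bank rotation sketch: in each frame period the in-bound DMA,
 * the processor, and the out-bound DMA each own a different bank, and
 * ownership rotates so no two agents ever contend for the same bank. */
enum agent { INBOUND_DMA = 0, PROCESSOR = 1, OUTBOUND_DMA = 2 };

int bank_for(enum agent a, int period)
{
    return (a + period) % 3;  /* distinct bank per agent per period */
}
```

Because the three agent offsets are distinct modulo 3, the three banks assigned within any one period are always distinct, which is what removes the contention.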
- the ATM engine 1440 comprises two primary subcomponents, referred to herein as the ATMRx Engine and the ATMTx Engine.
- the ATMRx Engine processes an incoming ATM cell header and transfers the cell for processing under the corresponding AAL protocol, namely AAL1, AAL2, or AAL5, in the internal memory or to another cell manager, if external to the system.
- the ATMTx Engine processes outgoing ATM cells and requests the outbound DMA channel to transfer data to a particular interface, such as the UTOPIAII/POSII interface. Preferably, it has separate blocks of local memory for data exchange.
- the ATM engine 1440 operates in combination with data memory 1483 to map an AAL channel, namely AAL2, to a corresponding channel on the TDM bus (where the Packet Engine 1400 is connected to a Media Engine) or to a corresponding IP channel identifier where internetworking between IP and ATM systems is required.
- the internal memory 1480 utilizes an independent block to maintain a plurality of tables for comparing and/or relating channel identifiers with virtual path identifiers (VPI), virtual channel identifiers (VCI), and channel identifiers (CID).
- VPI is an eight-bit field in the ATM cell header which indicates the virtual path over which the cell should be routed.
- a VCI is the address or label of a virtual channel comprised of a unique numerical tag, defined by a 16 bit field in the ATM cell header, that identifies a virtual channel over which a stream of cells is to travel during the course of a session between devices.
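Extracting the VPI and VCI fields that key the lookup tables follows the standard UNI cell header layout (8-bit VPI, 16-bit VCI, as described above). The sketch below illustrates the field extraction only, not the patented table logic; the function name is an assumption.

```c
#include <stdint.h>

/* Parse VPI and VCI from a 5-byte ATM UNI cell header.
 * Layout: GFC(4) | VPI(8) | VCI(16) | PT(3) | CLP(1) | HEC(8). */
void atm_parse_uni(const uint8_t h[5], uint8_t *vpi, uint16_t *vci)
{
    *vpi = (uint8_t)(((h[0] & 0x0F) << 4) | (h[1] >> 4));
    *vci = (uint16_t)(((h[1] & 0x0F) << 12) | (h[2] << 4) | (h[3] >> 4));
}
```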
- the plurality of tables are preferably updated by the host processor 1430 and are shared by the ATMRx and ATMTx engines.
- the host processor 1430 is preferably a RISC processor with an instruction cache 1431 .
- the host processor 1430 communicates with other hardware blocks through a CPU interface 1432 which is capable of managing communications with Media Engines over a bus, such as a PCI bus, and with a host, such as a signaling host through a PCI-PCI bridge.
- the host processor 1430 is capable of being interrupted by other processors 1405 through their transmission of interrupts which are handled by an interrupt handler 1433 in the CPU interface.
- the host processor 1430 is preferably capable of performing the following functions: 1) boot-up processing, including loading code from a flash memory to an external memory and starting execution, initializing interfaces and internal registers, acting as a PCI host, and appropriately configuring them, and setting up inter-processor communications between a signaling host, the packet engine itself, and media engines, 2) DMA configuration, 3) certain network management functions, 4) handling exceptions, such as the resolution of unknown addresses, fragmented packets, or packets with invalid headers, 5) providing intermediate storage of tables during system shutdown, 6) IP stack implementation, and 7) providing a message-based interface for users external to the packet engine and for communicating with the packet engine through the control and signaling means, among others.
- two DMA channels are provided for data exchange between different memory blocks via data buses.
- the in-bound DMA channel 1450 is utilized to handle incoming traffic to the Packet Engine 1400 data processing elements and the out-bound DMA channel 1455 is utilized to handle outgoing traffic to the plurality of network interfaces 1460 .
- the in-bound DMA channel 1450 handles all of the data coming into the Packet Engine 1400 .
- the Packet Engine 1400 has a plurality of network interfaces 1460 that permit the Packet Engine to compatibly communicate over networks.
- the network interfaces comprise a GMII PHY interface 1562 , a GMII MAC interface 1564 , and two UTOPIAII/POSII interfaces 1566 in communication with 622 Mbps ATM/SONET connections 1568 to receive and transmit data.
- the Packet Engine [not shown] supports MAC and emulates PHY layers of the Ethernet interface as specified in IEEE 802.3.
- the gigabit Ethernet MAC 1570 comprises FIFOs 1503 and a control state machine 1525 .
- the transmit and receive FIFOs 1503 are provided for data exchange between the gigabit Ethernet MAC 1570 and bus channel interface 1505 .
- the bus channel interface 1505 is in communication with the outbound DMA channel 1515 and in-bound DMA channel 1520 through bus channel.
- the MAC 1570 preferably sends a request to the DMA 1520 for data movement.
- the DMA 1520 preferably checks the task queue [not shown] in the MAC interface 1564 and transfers the queued packets.
- the task queue in the MAC interface is a set of 64 bit registers containing a data structure comprising: length of data, source address, and destination address.
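An illustrative layout for the task-queue entries just described is sketched below. The field widths and struct name are assumptions; the specification states only that each entry is a set of 64-bit registers holding length of data, source address, and destination address.

```c
#include <stdint.h>

/* Hypothetical task-queue entry, one 64-bit register per field. */
typedef struct {
    uint64_t length;    /* length of data, in bytes */
    uint64_t src_addr;  /* source address of the buffered packet */
    uint64_t dst_addr;  /* destination address (unused on receive) */
} dma_task_t;
```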
- the destination address will not be used.
- the DMA 1520 will move the data over the bus channel to memories located within the processors and will write each task at a predefined memory location. After writing all tasks, the DMA 1520 will write the total number of tasks transferred to the memory page.
- the processor will process the received data and will write a task queue for an outbound channel of the DMA.
- the outbound DMA channel 1515 will check the number of frames present in the memory locations and, after reading the task queue, will move the data either to a POSII interface of the Media Engine Type I or II or to an external memory location where IP to ATM bridging is being performed.
- the Packet Engine supports two configurable UTOPIAII/POSII interfaces 1566 which provide an interface between the PHY and upper layers for IP/ATM traffic.
- the UTOPIAII/POSII 1580 comprises FIFOs 1504 and a control state machine 1526 .
- the transmit and receive FIFOs 1504 are provided for data exchange between the UTOPIAII/POSII 1580 and bus channel interface 1506 .
- the bus channel interface 1506 is in communication with the outbound DMA channel 1515 and in-bound DMA channel 1520 through bus channel.
- the UTOPIA II/POS II interfaces 1566 may be configured in either UTOPIA level II or POS level II modes.
- when data is received on the UTOPIAII/POSII interface 1566 , it will push existing tasks in the task queue forward and request the DMA 1520 to move the data.
- the DMA 1520 will read the task queue from the UTOPIAII/POSII interface 1566 which contains a data structure comprising: length of data, source address, and type of interface.
- the in-bound DMA channel 1520 will send the data either to the plurality of processors [not shown] or to the ATMRx engine [not shown]. After data is written into the ATMRx memory, it is processed by the ATM engine and passed to the corresponding AAL layer.
- the ATMTx engine inserts the desired ATM header at the beginning of the cell and will request the outbound DMA channel 1515 to move the data to the UTOPIAII/POSII interface 1566 having a task queue with the following data structure: length of data and source address.
- the novel software architecture enables the logical system, presented in FIG. 5, to be physically deployed in a number of ways, depending on processing needs.
- the first interface 1720 a and second interface 1725 a implement interfacing tasks through queues 1721 a , 1726 a in shared memory. While the interfaces 1720 a , 1725 a are no longer limited to function mapping and messaging, the components 1705 a , 1710 a , 1715 a continue to use the same APIs to conduct inter-component communication.
- the consistent use of a standard API enables the porting of various components to different hardware architectures in a distributed processing environment by relying on modified interfaces or drivers where necessary and without modifications in the components themselves.
- the software system 1800 is divided into three subsystems, a Media Processing Subsystem 1805 , a Packetization Subsystem 1840 , and a Signaling/Management Subsystem (hereinafter referred to as the Signaling Subsystem) 1870 .
- the Media Processing Subsystem 1805 sends encoded data to the Packetization Subsystem 1840 for encapsulation and transmission over the network and receives network data from the Packetization Subsystem 1840 to be decoded and played out.
- the Signaling Subsystem 1870 communicates with the Packetization Subsystem 1840 to obtain status information, such as the number of packets transferred, to monitor the quality of service, and to control the mode of particular channels, among other functions.
- the Signaling Subsystem 1870 also communicates with the Packetization Subsystem 1840 to control establishment and destruction of packetization sessions for the origination and termination of calls.
- Each subsystem 1805 , 1840 , 1870 further comprises a series of components 1820 designed to perform different tasks in order to effectuate the processing and transmission of media.
- Each of the components 1820 conducts communications with any other module, subsystem, or system through APIs that remain substantially constant and consistent irrespective of whether the components reside on a hardware element or across multiple hardware elements, as previously discussed.
- the Media Processing Subsystem 1905 comprises a system API component 1907 , media API component 1909 , real-time media kernel 1910 , and voice processing components, including line echo cancellation component 1911 , components dedicated to performing voice activity detection 1913 , comfort noise generation 1915 , and discontinuous transmission management 1917 , a component 1919 dedicated to handling tone signaling functions, such as dual tone (DTMF/MF), call progress, call waiting, and caller identification, and components for media encoding and decoding functions for voice 1927 , fax 1929 , and other data 1931 .
- the system API component 1907 should be capable of providing system-wide management and enabling the cohesive interaction of individual components, including establishing communications between external applications and individual components, managing run-time component addition and removal, downloading code from central servers, and accessing the MIBs of components upon request from other components.
- the media API component 1909 interacts with the real time media kernel 1910 and individual voice processing components.
- the real time media kernel 1910 allocates media processing resources, monitors resource utilization on each media-processing element, and performs load balancing to substantially maximize density and efficiency.
- the voice processing components can be distributed across multiple processing elements.
- the line echo cancellation component 1911 deploys adaptive filter algorithms to remove from a signal echoes that may arise as a result of the reflection and/or retransmission of modified input signals back to the originator of the input signals.
- the line echo cancellation component 1911 has been programmed to implement the following filtration approach: An adaptive finite impulse response (FIR) filter of length N is converged using a convergence process, such as a least mean squares approach.
- the adaptive filter generates a filtered output by obtaining individual samples of the far-end signal on a receive path, convolving the samples with the calculated filter coefficients, and then subtracting, at the appropriate time, the resulting echo estimate from the received signal on the transmit channel.
- the filter is then converted to an infinite impulse response (IIR) filter using a generalization of the ARMA-Levinson approach.
- data is received from an input source and used to adapt the zeroes of the IIR filter using the LMS approach, keeping the poles fixed.
- the adaptation process generates a set of converged filter coefficients that are then continually applied to the input signal to create a modified signal used to filter the data.
- the error between the modified signal and actual signal received is monitored and used to further adapt the zeroes of the IIR filter. If the measured error is greater than a pre-determined threshold, convergence is re-initiated by reverting back to the FIR convergence step.
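The FIR convergence step above, using the least mean squares (LMS) approach, can be sketched as follows. This is a minimal illustrative kernel, not the patented filter; the tap count, step size `mu`, and function name are assumptions.

```c
#include <stddef.h>

#define TAPS 4  /* illustrative filter length N */

/* One LMS step: filter the newest far-end sample through the adaptive
 * FIR, subtract the echo estimate from the near-end signal, and nudge
 * the coefficients w[] toward convergence. Returns the residual. */
double lms_step(double w[TAPS], double x[TAPS],
                double far_sample, double near_sample, double mu)
{
    /* shift the far-end delay line */
    for (size_t i = TAPS - 1; i > 0; i--)
        x[i] = x[i - 1];
    x[0] = far_sample;

    /* echo estimate: convolve delay line with current coefficients */
    double est = 0.0;
    for (size_t i = 0; i < TAPS; i++)
        est += w[i] * x[i];

    double err = near_sample - est;   /* residual after cancellation */

    /* LMS coefficient update: steepest descent on squared error */
    for (size_t i = 0; i < TAPS; i++)
        w[i] += mu * err * x[i];
    return err;
}
```

With a stationary echo path, repeated calls drive `w[]` toward the path's impulse response, which is the converged-coefficient state the IIR conversion step then starts from.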
- the voice activity detection component 1913 receives incoming data and determines whether voice or another type of signal, i.e. noise, is present in the received data, based upon an analysis of certain data parameters.
- the comfort noise generation component 1915 operates to send a Silence Insertion Descriptor (SID) containing information that enables a decoder to generate noise corresponding to the background noise received from the transmission.
- An overlay of audible but non-obtrusive noise has been found to be valuable in helping users discern whether a connection is live or dead.
- the SID frame is typically small, i.e. approximately 15 bits under the G.729 B codec specification.
- updated SID frames are sent to the decoder whenever there has been sufficient change in the background noise.
- the tone signaling component 1919 including recognition of DTMF/MF, call progress, call waiting, and caller identification, operates to intercept tones meant to signal a particular activity or event, such as the conducting of two-stage dialing (in the case of DTMF tones), the retrieval of voice-mail, and the reception of an incoming call (in the case of call waiting), and communicate the nature of that activity or event in an intelligent manner to a receiving device, thereby avoiding the encoding of that tone signal as another element in a voice stream.
- the tone-signaling component 1919 is capable of recognizing a plurality of tones and, therefore, when one tone is received, sending a plurality of RTP packets that identify the tone, together with other indicators, such as the length of the tone. By carrying the occurrence of an identified tone, the RTP packets convey the event associated with the tone to a receiving unit.
- the tone-signaling component 1919 is capable of generating a dynamic RTP profile wherein the RTP profile carries information detailing the nature of the tone, such as the frequency, volume, and duration. By carrying the nature of the tone, the RTP packets convey the tone to the receiving unit and permit the receiving unit to interpret the tone and, consequently, the event or activity associated with it.
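The tone relay described above can be illustrated with a short sketch. The payload layout below follows the widely used RTP named-telephone-event format (RFC 2833/4733): an 8-bit event code, an end-of-event flag, a 6-bit volume field, and a 16-bit duration. The disclosure does not mandate this particular format, so the sketch is one plausible realization rather than the patent's own encoding:

```python
import struct

def encode_telephone_event(event, volume, duration, end=False):
    """Pack a named telephone event (e.g. a DTMF digit) into the 4-byte
    payload used by RFC 2833/4733: event (8 bits), an end-of-event flag,
    a reserved bit, volume (6 bits), and duration (16 bits, big-endian)."""
    flags = (0x80 if end else 0x00) | (volume & 0x3F)
    return struct.pack("!BBH", event & 0xFF, flags, duration & 0xFFFF)

# DTMF digit "5" (event code 5), attenuation 10 dB, 800 timestamp units,
# flagged as the final packet of the event:
payload = encode_telephone_event(5, 10, 800, end=True)
```

A receiving unit decoding such a payload can regenerate the tone locally, so the tone itself never has to survive the voice codec.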
- Components for the media encoding and decoding functions for voice 1927 , fax 1929 , and other data 1931 , referred to as codecs, are devised in accordance with International Telecommunications Union (ITU) standard specifications, such as G.711 for the encoding and decoding of voice, fax, and other data.
- ITU International Telecommunications Union
- G.711 An exemplary codec for voice, data, and fax communications is ITU standard G.711, often referred to as pulse code modulation.
- G.711 is a waveform codec with a sampling rate of 8,000 Hz. Under uniform quantization, signal levels would typically require at least 12 bits per sample, resulting in a bit rate of 96 kbps.
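The bit rates above follow directly from the sampling parameters, and the companding that lets G.711 carry each sample in 8 bits (for 64 kbps) can be sketched with the continuous mu-law curve. Note that the G.711 standard itself uses a piecewise-linear (segmented) approximation of this curve; the formula below is the textbook continuous form, shown only to illustrate the principle:

```python
import math

SAMPLE_RATE_HZ = 8000

# Uniform quantization at 12 bits per sample, as stated in the text:
linear_bps = SAMPLE_RATE_HZ * 12   # 96,000 bps = 96 kbps
# G.711 companding reduces each sample to 8 bits:
g711_bps = SAMPLE_RATE_HZ * 8      # 64,000 bps = 64 kbps

def mu_law_compress(x, mu=255):
    """Continuous mu-law companding curve for x in [-1, 1]; small signals
    are expanded and large signals compressed before 8-bit quantization."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)
```

The logarithmic curve allocates finer quantization steps to low-level signals, which is why 8 companded bits achieve quality comparable to 12 uniform bits.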
- Other supported voice codecs include ITU standards G.723.1, G.726, and G.729 A/B/E, all of which would be known and appreciated by one of ordinary skill in the art.
- ITU standards supported by the fax media processing component 1929 preferably include T.38 and standards falling within V.xx, such as V.17, V.90, and V.34.
- Exemplary codecs for fax include ITU standard T.4 and T.30.
- T.4 addresses the formatting of fax images and their transmission from sender to receiver by specifying how the fax machine scans documents, the coding of scanned lines, the modulation scheme used, and the transmission scheme used.
- Other fax-related standards include ITU standard T.38.
- the Packetization Subsystem 2040 comprises a system API component 2043 , packetization API component 2045 , POSIX API 2047 , real-time operating system (RTOS) 2049 , components dedicated to performing such quality of service functions as buffering and traffic management 2050 , a component for enabling IP communications 2051 , a component for enabling ATM communications 2053 , a component for resource-reservation protocol (RSVP) 2055 , and a component for multi-protocol label switching (MPLS) 2057 .
- RTOS real-time operating system
- the Packetization Subsystem 2040 facilitates the encapsulation of encoded voice/data into packets for transmission over ATM and IP networks, manages certain quality of service elements, including packet delay, packet loss, and jitter, and implements traffic shaping to control network traffic.
- the packetization API component 2045 provides external applications facilitated access to the Packetization Subsystem 2040 by communicating with the Media Processing Subsystem [not shown] and Signaling Subsystem [not shown].
- the POSIX API 2047 layer isolates the operating system (OS) from the components and provides the components with a consistent OS API, thereby ensuring that components above this layer do not have to be modified if the software is ported to another OS platform.
- the RTOS 2049 acts as the OS facilitating the implementation of software code into hardware instructions.
- the IP communications component 2051 supports packetization for TCP/IP, UDP/IP, and RTP/RTCP protocols.
- the ATM communications component 2053 supports packetization for AAL1, AAL2, and AAL5 protocols. It is preferred that the RTP/UDP/IP stack be implemented on the RISC processors of the Packet Engine. A portion of the ATM stack is also preferably implemented on the RISC processors with more computationally intensive parts of the ATM stack implemented on the ATM engine.
- the component for RSVP 2055 specifies resource-reservation techniques for IP networks.
- the RSVP protocol enables resources to be reserved for a certain session (or a plurality of sessions) prior to any attempt to exchange media between the participants.
- Two levels of service are generally enabled: a guaranteed level, which emulates the quality achieved in conventional circuit-switched networks, and a controlled-load level, which approximates the service achieved in a best-effort network under no-load conditions.
- a sending unit issues a PATH message to a receiving unit via a plurality of routers.
- the PATH message contains a traffic specification (Tspec) that provides details about the data that the sender expects to send, including bandwidth requirement and packet size.
- Tspec traffic specification
- Each RSVP-enabled router along the transmission path establishes a path state that includes the previous source address of the PATH message (the prior router).
- the receiving unit responds with a reservation request (RESV) that includes a flow specification having the Tspec and information regarding the type of reservation service requested, such as controlled-load or guaranteed service.
- RESV reservation request
- the RESV message travels back, in reverse fashion, to the sending unit along the same router pathway.
- the requested resources are allocated, provided such resources are available and the receiver has authority to make the request.
- the RESV eventually reaches the sending unit with a confirmation that the requisite resources have been reserved.
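The PATH/RESV exchange described above can be modeled in a few lines. This is an illustrative simulation of the reservation logic only, not the RSVP wire protocol; the router dictionaries, field names, and capacity figures are invented for the sketch:

```python
def send_path(routers, tspec):
    """PATH: each RSVP-enabled router along the transmission path records
    the previous hop and the sender's traffic specification (Tspec)."""
    prev_hop = "sender"
    for r in routers:
        r["path_state"] = {"prev_hop": prev_hop, "tspec": tspec}
        prev_hop = r["name"]

def send_resv(routers, bandwidth):
    """RESV: retrace the same routers in reverse, reserving bandwidth at
    each hop; fail if any hop lacks capacity (admission control)."""
    for r in reversed(routers):
        if r["capacity"] < bandwidth:
            return False   # reservation refused at this hop
        r["capacity"] -= bandwidth
    return True            # confirmation reaches the sending unit

routers = [{"name": "R1", "capacity": 100}, {"name": "R2", "capacity": 100}]
send_path(routers, tspec={"bandwidth": 64, "packet_size": 160})
ok = send_resv(routers, bandwidth=64)
```

The key structural point, mirrored in the sketch, is that RESV travels the reverse of the path recorded during PATH, so resources are reserved hop by hop along the exact route the media will take.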
- One function that could be provided in either the Media Processing Subsystem or the Packetization Subsystem is jitter buffer management.
- an embodiment of the present invention operates by estimating a packet delay histogram that may be used to determine the required buffer size and minimum delay.
- the preferred method of determining the buffer size and minimum delay comprises the selection of an area of the histogram, the calculation of the mean delay based upon the selected area, the calculation of a plurality of variances based upon the mean delay, and the use of the variances to determine buffer size and minimum delay.
- the graph represents a histogram 101 f of a packet stream received by a media gateway, more specifically by the Media Processing Subsystem or Packetization Subsystem.
- the x-axis 102 f represents the delay experienced by packets and the y-axis 103 f represents the number of packet samples received.
- the vertical bars 104 f show the number of packets received in a defined span of time.
- a curve 105 f connects the central points of the tops of the bars 104 f of the histogram 101 f .
- the curve 105 f depicts the distribution of the arrival time of packets.
- the tail is eliminated at a defined point 106 f , which in this example is 270 ms on the x-axis 102 f . Therefore, the histogram area to the right of point 106 f is discarded.
- the mean delay is computed as M = (1/N)·Σ x i , where M is the mean, x i represents the delay experienced by packets arriving in a particular window of time i, and N is the total number of samples.
- the preferred embodiment of the invention utilizes at least two separately calculated variances to better estimate the buffer size and delay based upon the estimated histogram.
- the histogram is conceptually divided into two portions, a portion encompassing the packets arriving after the mean delay and a portion encompassing packets that arrived prior to the mean delay. Where i packets have been received and the mean delay is associated with packet m, the two histogram portions are defined as follows: the first by D 0 to D m−1 and the second by D m+1 to D i , the final packet.
- alternatively, the variances can be calculated using sample sets that overlap or that, when taken together, comprise a subset of the packets received.
- the two variances are not equal because the histogram is asymmetrical. As shown in FIG. 1 f , Var 1 115 f is less than Var 2 117 f , reflecting the asymmetrical nature of the histogram and better approximating the actual distribution of packets received. This approach therefore ascertains the size and placement of the buffer more accurately while conserving computational resources.
- Var 1 can be calculated from Var 2 , or vice versa, using pre-defined equations.
- Var 1 could be a fixed value depending on whether Var 2 exceeds or does not exceed certain threshold value.
- the buffer size and timing can be determined.
- the buffer starts accepting packets at delay d, which is determined by subtracting Var 1 115 f from the mean 107 f.
- the buffer starts accepting packets at 90 ms and continues accepting for period T of 165 ms, or up to 255 ms.
- the variances used to determine the buffer parameters can also be scaled variances, derived by multiplying Var 1 and/or Var 2 by a multiplier (k), where the multiplier may be any number but is preferably in the range of 2-8, and more preferably approximately 2, 4, or 8. Utilizing this approach, the Media Processing Subsystem or Packetization Subsystem is better able to manage jitter in packets received by the Media Gateway system.
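As a sketch of the two-variance approach, the routine below estimates the mean delay, a variance over packets arriving before the mean (Var 1), and a variance over packets arriving after it (Var 2), then derives the buffer start and length. Two caveats: the text applies the variances directly as time offsets, whereas this sketch takes their square roots so the offsets carry time units, and the multiplier k simply scales both offsets — interpretive choices for illustration, not the literal formulas of the disclosure:

```python
def buffer_parameters(delays, k=1.0):
    """Estimate jitter-buffer start and length from observed packet delays
    using two one-sided variances about the mean: var1 over packets that
    arrived before the mean delay, var2 over packets that arrived after."""
    n = len(delays)
    mean = sum(delays) / n
    below = [d for d in delays if d < mean] or [mean]
    above = [d for d in delays if d > mean] or [mean]
    var1 = sum((mean - d) ** 2 for d in below) / len(below)
    var2 = sum((d - mean) ** 2 for d in above) / len(above)
    # Interpret the (scaled, square-rooted) variances as time offsets:
    start = mean - k * var1 ** 0.5           # buffer begins accepting here
    length = k * (var1 ** 0.5 + var2 ** 0.5) # buffer accepting period T
    return {"mean": mean, "var1": var1, "var2": var2,
            "start": start, "length": length}
```

For a right-skewed delay distribution (the typical PDD shape), var2 exceeds var1, so the buffer extends further after the mean than before it, matching FIG. 1 f.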
- the Signaling Subsystem 2170 comprises a user application API component 2173 , system API component 2175 , POSIX API 2177 , real-time operating system (RTOS) 2179 , a signaling API 2181 , components dedicated to performing such signaling functions as signaling stacks for ATM networks 2183 and signaling stacks for IP networks 2185 , and a network management component 2187 .
- the signaling API 2181 provides facilitated access to the signaling stacks for ATM networks 2183 and signaling stacks for IP networks 2185 .
- the signaling API 2181 comprises a master gateway and N sub-gateways: a single master gateway can have N sub-gateways associated with it.
- the master gateway performs the demultiplexing of incoming calls arriving from an ATM or IP network and routes the calls to the sub-gateway that has resources available.
- the sub-gateways maintain the state machines for all active terminations.
- the sub-gateways can be replicated to handle many terminations. Using this design, the master gateway and sub-gateways can reside on a single processor or across multiple processors, thereby enabling the simultaneous processing of signaling for a large number of terminations and the provision of substantial scalability.
- the user application API component 2173 provides a means for external applications to interface with the entire software system, comprising each of the Media Processing Subsystem, Packetization Subsystem, and Signaling Subsystem.
- the network management component 2187 supports local and remote configuration and network management through the support of simple network management protocol (SNMP).
- SNMP simple network management protocol
- the configuration portion of the network management component 2187 is capable of communicating with any of the other components to conduct configuration and network management tasks and can route remote requests for tasks, such as the addition or removal of specific components.
- the signaling stacks for ATM networks 2183 include support for User Network Interface (UNI) for the communication of data using AAL1, AAL2, and AAL5 protocols.
- User Network Interface comprises specifications for the procedures and protocols between the gateway system, comprising the software system and hardware system, and an ATM network.
- the signaling stacks for IP networks 2185 include support for a plurality of accepted standards, including media gateway control protocol (MGCP), H.323, session initiation protocol (SIP), H.248, and network-based call signaling (NCS).
- MGCP specifies a protocol converter, the components of which may be distributed across multiple distinct devices.
- MGCP enables external control and management of data communications equipment, such as media gateways, operating at the edge of multi-service packet networks.
- H.323 standards define a set of call control, channel set up, and codec specifications for transmitting real time voice and video over networks that do not necessarily provide a guaranteed level of service, such as packet networks.
- SIP is an application layer protocol for the establishment, modification, and termination of conferencing and telephony sessions over an IP-based network and has the capability of negotiating features and capabilities of the session at the time the session is established.
- H.248 provides recommendations underlying the implementation of MGCP.
- a host application 2205 interacts with a DSP 2210 via an interrupt capability 2220 and shared memory 2230 .
- the same functionality can be achieved in simulation through the operation of a virtual DSP program 2310 as a separate, independent thread on the same processor 2315 as the application code 2320 .
- This simulation run is enabled by a task queue mutex 2330 and a condition variable 2340 .
- the task queue mutex 2330 protects the data shared between the virtual DSP program 2310 and a resource manager [not shown].
- the condition variable 2340 allows the application to synchronize with the virtual DSP 2310 in a manner similar to the function of the interrupt 2220 in FIG. 22.
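The mutex/condition-variable arrangement can be sketched as follows. The worker thread below stands in for the virtual DSP 2310, a lock plays the role of the task queue mutex 2330, and a condition variable provides the interrupt-like synchronization of 2340; the task format and the doubling "processing" are invented purely for illustration:

```python
import threading

tasks, results = [], []
cond = threading.Condition()   # wraps both the mutex and the condition

def virtual_dsp():
    """Worker thread standing in for the DSP."""
    with cond:
        while not tasks:
            cond.wait()                 # idle until the application posts work
        frame = tasks.pop(0)
        results.append([s * 2 for s in frame])  # stand-in "processing"
        cond.notify_all()               # software analogue of the interrupt

worker = threading.Thread(target=virtual_dsp)
worker.start()
with cond:
    tasks.append([1, 2, 3])             # shared data guarded by the mutex
    cond.notify_all()                   # wake the virtual DSP
    while not results:
        cond.wait()                     # synchronize, as with interrupt 2220
worker.join()
```

As in the shared-memory arrangement of FIG. 22, the application and the "DSP" never touch the shared task list without holding the lock, and the condition variable replaces the hardware interrupt as the completion signal.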
- the present methods and systems provide for an improved jitter buffer management method and system by basing playout buffer adjustments on computed minimum delays and buffer sizes with reference to a plurality of variances derived from an estimated histogram. While various embodiments of the present invention have been shown and described, it would be apparent to those skilled in the art that many modifications are possible without departing from the inventive concept disclosed herein. For example, it would be apparent that the plurality of variances can be calculated by determining a first variance from an estimated histogram and then deriving subsequent variances through any pre-defined equation incorporating the first variance.
Abstract
Description
- This is a continuation-in-part of copending patent application Ser. No. 10/004,753, for DISTRIBUTED PROCESSING ARCHITECTURE WITH SCALABLE PROCESSING LAYERS, filed Dec. 3, 2001.
- The present invention relates generally to a method and system for the communication of digital signals, and more particularly to a method and system for managing delays in packet transmission, e.g. managing jitter, using a buffering procedure, and to a media gateway deploying the jitter management methods and systems.
- Media communication devices comprise hardware and software systems that utilize interdependent processes to enable the processing and transmission of analog and digital signals substantially seamlessly across and between circuit switched and packet switched networks. As an example, a voice over packet gateway enables the transmission of human voice from a conventional public switched network to a packet switched network, possibly traveling simultaneously over a single packet network line with both fax information and modem data, and back again. Benefits of unifying communication of different media across different networks include cost savings and the delivery of new and/or improved communication services such as web-enabled call centers for improved customer support and more efficient personal productivity tools.
- Such media over packet communication devices (e.g., Media Gateways) require substantial processing power with sophisticated software controls and applications to enable the effective transmission of data from circuit switched to packet switched networks and back again. One form of media transmission, referred to as voice-over-IP (VoIP), is the transport of voice traffic through the use of the Internet protocol. VoIP requires notably less average bandwidth than a traditional circuit-switched connection for several reasons. First, by detecting when voice activity is present, VoIP can choose to send little or no data when a speaker on one end of a conversation is silent, whereas a conventional, circuit-switched telephone connection continues to transmit during periods of silence. Second, the digital audio bit stream utilized by VoIP may be significantly compressed before transmission using a codec (compression/decompression) scheme. Using current technology, a telephone conversation that would require two 64 kbps (one each way) channels over a circuit-switched network may utilize a data rate of roughly 8 kbps with VoIP.
- In the transmission of digital data between a source and a destination apparatus, a distortion known as jitter may be introduced. Jitter is the variable delay experienced in the course of packet transmission, resulting in varied packet arrival times, and is caused by networks providing different waiting times for different packets or cells. It may also be caused by a lack of synchronization resulting from mechanical or electrical changes. Given the real-time nature of a live connection, jitter buffer management policies have a large effect on overall data quality. If the data is voice, actual sound losses range from a syllable to a word, depending on how much data is in a given packet.
- To rectify the problem of jitter, a receiver may include a buffer to store packets for an amount of time sufficient to allow sequenced, regular playout of the packets. However, an efficient technique is needed to determine the receiver buffer playout length and timing in real-time data communications such as VoIP. If the buffer delay or length is too short, “slower” packets will not arrive before their designated playout time and playout quality suffers. If the buffer delay is very long, it conspicuously disrupts interactive communications. Accurate knowledge of actual packet delays is necessary to determine optimal packet buffer delay for real-time communications.
- One approach to devising an appropriate buffer is to construct and maintain a distribution of the number of packets received by a system over time, namely a histogram. A buffer may then be constructed by equating the buffer length to the entire length of the histogram and equating the buffer initiation point to the time when the first packet is received, e.g., the minimum delay.
- Referring to FIG. 1a, a graph 100 a depicts a histogram 101 a of a number of packets received relative to time. The x-axis 102 a represents the delay experienced by packets and the y-axis 103 a represents the number of packet samples received. The vertical bars 104 a show the number of packets received in a defined span of time. A curve 105 a connects the central points of the tops of the bars 104 a of the histogram 101 a. The curve 105 a depicts the distribution of the arrival time of packets. This curve is called the packet delay distribution (PDD) curve. Typically, in telecommunications applications, PDD curves are skewed earlier in time, because most packets experience less delay, and therefore are often not symmetrical around the peak. One of ordinary skill in the art would be familiar with methods of creating histograms.
- Despite existing jitter buffering methods, an improved method and system for playing out packets from media gateways by adaptively adjusting the buffer size and delay is needed. More specifically, hardware and software systems and methods are needed that can adaptively determine the buffer size and the buffer initiation point while not being substantially resource intensive.
- The present invention provides improved methods and systems for the determination of jitter buffers. The present invention enables the generation of buffers having sizes and delays such that, as designed, the buffers capture a substantial majority of packets while not being resource intensive.
- In a first embodiment, a packet delay histogram is estimated using any one of several delay estimation techniques. The histogram represents the distribution of the number of packets received by a system over a defined time. With the distribution in delay determined, a playout delay evaluator calculates a plurality of variances, centered around a distribution peak, or mean average delay, and applies those variances to determine the buffer size and delay. The playout buffer monitor uses this calculated buffer size and delay to select, store and playout packets at their adjusted playout time.
- The present invention may be employed in a media gateway that enables data communications among heterogeneous networks. Media gateways provide media processing functions, data packet encapsulation, and maintain a quality of service level, among other functions. When a gateway operates as a receiver of voice data traffic, it buffers voice packets and outputs a continuous digital or analog stream. The present invention may be deployed to manage jitter experienced in the course of receiving packetized data and processing the data for further transmission through a packet-based or circuit-switched network.
- These and other features and advantages of the present invention will be appreciated, as they become better understood by reference to the following Detailed Description when considered in connection with the accompanying drawings, wherein:
- FIG. 1a is a histogram depicting packets received by a system over time;
- FIG. 1b is a block diagram of a system that employs a first-in, first-out (FIFO) buffer and a numerically controlled oscillator (NCO) for jitter correction;
- FIG. 1c is a schematic waveform representation of jitter;
- FIG. 1d is a diagram illustrating timings associated with the sending and receiving a packet;
- FIG. 1e depicts a histogram calculation employed in one approach of designing a buffer;
- FIG. 1f depicts a histogram calculation employed in a preferred embodiment of the present invention;
- FIG. 1g is an embodiment of the adaptive playout-buffering process of the present invention;
- FIG. 1h is an arrangement of a playout delay evaluator and buffer monitor used in the present invention;
- FIG. 2a is a block diagram of a first embodiment of a hardware system architecture for a media gateway;
- FIG. 2b is a block diagram of a second embodiment of a hardware system architecture for a media gateway;
- FIG. 3 is a diagram of a packet having a header and user data;
- FIG. 4 is a block diagram of a third embodiment of a hardware system architecture for a media gateway;
- FIG. 5 is a block diagram of one logical division of the software system of the present invention;
- FIG. 6 is a block diagram of a first physical implementation of the software system of FIG. 5;
- FIG. 7 is a block diagram of a second physical implementation of the software system of FIG. 5;
- FIG. 8 is a block diagram of a third physical implementation of the software system of FIG. 5;
- FIG. 9 is a block diagram of a first embodiment of the media engine component of the hardware system of the present invention;
- FIG. 10 is a block diagram of a preferred embodiment of the media layer component of the hardware system of the present invention;
- FIG. 10a is a block diagram representation of a preferred architecture for the media layer component of the media engine of FIG. 10;
- FIG. 11 is a block diagram representation of a first preferred processing unit;
- FIG. 12 is a time-based schematic of the pipeline processing conducted by the first preferred processing unit;
- FIG. 13 is a block diagram representation of a second preferred processing unit;
- FIG. 13a is a time-based schematic of the pipeline processing conducted by the second preferred processing unit;
- FIG. 13b is a time-based schematic of the pipeline processing conducted by a series of processing units;
- FIG. 14 is a block diagram representation of a preferred embodiment of the packet processor component of the hardware system of the present invention;
- FIG. 15 is a schematic representation of one embodiment of the plurality of network interfaces in the packet processor component of the hardware system of the present invention;
- FIG. 16 is a block diagram of a plurality of PCI interfaces used to facilitate control and signaling functions for the packet processor component of the hardware system of the present invention;
- FIG. 17 is a first exemplary flow diagram of data communicated between components of the software system of the present invention;
- FIG. 17a is a second exemplary flow diagram of data communicated between components of the software system of the present invention;
- FIG. 18 is a schematic diagram of preferred components comprising the media processing subsystem of the software system of the present invention;
- FIG. 19 is a schematic diagram of preferred components comprising the media processing subsystem of the software system of the present invention;
- FIG. 20 is a schematic diagram of preferred components comprising the packetization processing subsystem of the software system of the present invention;
- FIG. 21 is a schematic diagram of preferred components comprising the signaling subsystem of the software system of the present invention;
- FIG. 22 is a block diagram of a host application operative on a physical DSP; and
- FIG. 23 is a block diagram of a host application operative on a virtual DSP.
- The present invention provides a method and system for jitter management using an adaptive buffer estimation procedure. One use of the present invention is as a novel media gateway, designed to enable the communication of media across circuit switched and packet switched networks, and encompasses novel hardware and software methods and systems. The present invention will presently be described with reference to the aforementioned drawings. Headers will be used for purposes of clarity and are not meant to limit or otherwise restrict the disclosures made herein. It will further be appreciated, by those skilled in the art, that use of the term “media” is meant to broadly encompass substantially all types of data that could be sent across a packet switched or circuit switched network, including, but not limited to, voice, video, data, and fax traffic. Where arrows are utilized in the drawings, it would be appreciated by one of ordinary skill in the art that the arrows represent the interconnection of elements and/or components via buses or any other type of communication channel.
- In one jitter management approach, a clock is derived from a digital data signal and the data signal is stored in a buffer. The derived clock is input to an input counter, which counts a predetermined number of degrees out of phase with an output counter. For instance, the input counter may be initialized 180 degrees out of phase with the output counter. When the input counter is at a maximum counter value, such as 31 in the case where the input counter contains 5 flip-flops, the output counter value is adjusted in accordance with the information processed from a look-up table, preferably a read-only table. This table outputs a coefficient to a numerically controlled oscillator (NCO). The NCO includes a low frequency portion that adds the coefficient successively to itself and outputs a carry out (CO) signal. A high frequency clock, around 100 MHz, is fed to the high frequency portion of the NCO, which preferably divides down the high frequency clock to a clock frequency that is centered at the desired output frequency. The high frequency portion preferably includes an edge detect circuit that receives the CO signal and adjusts the frequency of the output clock to produce a compensation clock. The compensation clock adjusts the output counter, which causes the output buffer to delay a packet of data for a pre-determined amount of time, thereby outputting a digital signal that is substantially free of jitter.
- Referring to FIG. 1b, a block diagram of a system 100 b that employs a FIFO buffer 104 b and a numerically controlled oscillator (NCO) 107 b for jitter correction is provided. It includes an input counter 101 b, an output counter 102 b, an AND gate 103 b, a buffer 104 b, a phase detection latch 105 b, a read only memory (ROM) 106 b, an input data line 109 b, an output line 111 b producing jitter-free data, a numerically controlled oscillator (NCO) 107 b, and a high frequency clock 110 b in communication with the NCO 107 b. Input counter 101 b is coupled to an input clock signal line 108 b.
- Variation in packet delay is not a static process. As such, algorithmic approaches are required to estimate packet delay statistics with time-based estimates such as packet mean arrival time and variances from the mean arrival time. Dynamic playout delay adaptation algorithms rely for their adaptive adjustments on the statistics obtained from the timestamp and variable-delay histories of the packets received. Such information, including timing and stream (continuous data packets after a break) number information, may be gathered from streams of data, and future network delay values are predicted by constructing a measured packet-delay distribution curve. The system maintains a delay histogram, each bin storing the relative frequency with which a particular delay value is expected to occur among the arriving packets. The histogram is then used to approximate the distribution in the form of a curve.
- Referring to FIG. 1c, jitter originates and propagates over a network in a digital signal. Waveform 101 c is the ideal communication signal and waveform 102 c is the signal with jitter. An unexpected delay 103 c arises in the signal that may be due to the queuing of packets at connecting terminals. The delay 103 c escalates as the signal traverses the network, resulting in delay 104 c. That variation in delay, calculated as the difference between 103 c and 104 c, is jitter and can increase, decrease, or otherwise vary over time, causing continual variations in the delay time.
time 101 d represented by ti. The packet i is received at the receiving host attime 102 d represented by ai. The packet i is played out at the receiving host attime 103 d represented by pi.D prop 104 d is the fixed propagation delay from the sender to the receiver, which is assumed to be constant, and set to be the minimum of the delay experienced by any packet. Thisdelay 104 d is revised each time a packet is received whose propagation delay is lesser thanD prop 104 d and set equal to the propagation delay of that packet. The variable delay, vi, 106 d experienced by packet i as it is sent from the source to the destination host can be calculated as vi=ai−Dprop. The amount of time, bi, 108 d that packet i spends in the buffer at the receiver awaiting its scheduled playout time can be calculated as bi=pi−ai. The amount of time, di, 112 d from when the ith packet is generated by the source until it is played out at the destination host can be calculated as di=pi−ti, and shall be referred to as the playout delay of packet i. The delay, ni, 110 d introduced by the network can be calculated as ni=ai−ti. - To construct a histogram for determining the buffer size and delay, packet delays need to be determined. A plurality of methods may be used to calculate delay. In one approach, the jitter buffer system incorporates a method that uses a linear recursive filter and is characterized by the weighting factor alpha. The delay estimate is computed as:
- d i =α*d I−1+(1−α)*n i
- And the variation is computed as:
- vi = α*vi−1 + (1−α)*|di − ni|
- where α is a weighting factor, di is the amount of time from when the ith packet is generated by the source until it is played out at the destination host, ni is the total delay introduced by the network, and vi is the variable delay experienced by packet i as it is sent from the source to the destination host.
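As a minimal sketch (not the patent's implementation), this first approach reduces to a single exponentially weighted update step; the default value of α below is purely illustrative:

```python
def update_estimates(d_prev, v_prev, n_i, alpha=0.875):
    """One step of the linear recursive (exponentially weighted) filter.

    d_prev, v_prev -- previous delay and variation estimates (d_{i-1}, v_{i-1})
    n_i            -- network delay measured for packet i (a_i - t_i)
    alpha          -- weighting factor in (0, 1); 0.875 is an illustrative default
    """
    d_i = alpha * d_prev + (1 - alpha) * n_i
    v_i = alpha * v_prev + (1 - alpha) * abs(d_i - n_i)
    return d_i, v_i
```

A larger α weights history more heavily and smooths the estimate; a smaller α tracks recent delays more closely.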
- A second approach adapts more quickly to short bursts of packets incurring long delays by using a weighting mechanism that incorporates two weighting factors, one applied when the delay is increasing and one when it is decreasing:
- if (ni > di−1) then
- di = β*di−1 + (1−β)*ni
- else
- di = α*di−1 + (1−α)*ni
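A sketch of this second approach follows; the particular values of α and β are assumptions for illustration, the only property implied by the text being that the increase-side weight adapts faster (β < α):

```python
def update_delay_two_weights(d_prev, n_i, alpha=0.875, beta=0.5):
    """Delay-estimate update that reacts faster to increasing delay.

    Uses the faster weight beta when the measured delay n_i exceeds the
    current estimate d_prev, and the slower weight alpha otherwise.
    """
    w = beta if n_i > d_prev else alpha
    return w * d_prev + (1 - w) * n_i
```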
- A third approach calculates the delay estimate as:
- di = minj∈Si {nj}
- where Si is the set of all packets received during the talk spurt prior to the one initiated by packet i.
- A fourth approach adapts to sudden, large increases in the end-to-end network delay followed by a series of packets arriving almost simultaneously, referred to herein as spikes. The beginning of a spike is detected by checking whether the delay between consecutive packets at the receiver is large enough to constitute a spike. For example:
- if (abs(ni−ni−1)>spike_threshold)
- mode=IMPULSE;
- A variable var is employed with an exponentially decaying value that adjusts to the slope of the spike. When this variable has a small enough value, indicating that there is no longer a significant slope, the algorithm reverts to normal mode.
1. ni = Receiver_timestamp − Sender_timestamp;
2. if (mode == NORMAL) {
       if (abs(ni − ni−1) > abs(v) * 2 + 800) {
           var = 0; /* Detected beginning of spike */
           mode = IMPULSE;
       }
   }
   else {
       var = var/2 + abs((2*ni − ni−1 − ni−2)/8);
       if (var ≦ 63) {
           mode = NORMAL; /* End of spike */
           ni−2 = ni−1;
           ni−1 = ni;
           return;
       }
   }
3. if (mode == NORMAL)
       di = 0.125 * ni + 0.875 * di−1;
   else
       di = di−1 + ni − ni−1;
   vi = 0.125 * abs(ni − di) + 0.875 * vi−1;
4. ni−2 = ni−1;
   ni−1 = ni;
   return;
- By tallying the calculated packet delays against the number of packets received, a packet delay histogram may be constructed. The packet delay histogram may be used to determine the required buffer size and delay by, for example, equating the buffer length to the length of the histogram and the buffer delay to the minimum delay experienced by the received packets, represented by the first data points on the histogram.
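The spike-detection listing above can be rendered as a runnable sketch. Units are arbitrary clock ticks, and the constants 800 and 63 are taken directly from the listing; the class and attribute names are illustrative:

```python
NORMAL, IMPULSE = 0, 1

class SpikeDetector:
    """Playout-delay estimator with spike (delay-burst) detection,
    following the listing above. State names mirror the listing."""

    def __init__(self):
        self.mode = NORMAL
        self.d = 0.0    # delay estimate d_i
        self.v = 0.0    # variation estimate v_i
        self.n1 = 0     # n_{i-1}
        self.n2 = 0     # n_{i-2}
        self.var = 0.0  # exponentially decaying spike-slope measure

    def update(self, recv_ts, send_ts):
        n = recv_ts - send_ts
        if self.mode == NORMAL:
            if abs(n - self.n1) > abs(self.v) * 2 + 800:
                self.var = 0.0          # detected beginning of spike
                self.mode = IMPULSE
        else:
            self.var = self.var / 2 + abs((2 * n - self.n1 - self.n2) / 8)
            if self.var <= 63:
                self.mode = NORMAL      # end of spike
                self.n2, self.n1 = self.n1, n
                return self.d
        if self.mode == NORMAL:
            self.d = 0.125 * n + 0.875 * self.d   # smooth in normal mode
        else:
            self.d = self.d + n - self.n1         # follow the spike slope
        self.v = 0.125 * abs(n - self.d) + 0.875 * self.v
        self.n2, self.n1 = self.n1, n
        return self.d
```

Feeding the detector a steady stream keeps it in NORMAL mode; a sudden jump in delay larger than 2|v|+800 switches it to IMPULSE mode, where the estimate tracks the spike directly.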
- Relying on an entire histogram for estimating the buffer size is resource intensive, however. It is preferred, rather, to use only the most important parts of the histogram for constructing the buffer, more specifically to limit the buffer to times when a majority of packets arrive. Therefore, once the histogram is estimated using a particular packet delay calculation method, it is preferred to choose a portion of the histogram to enable the efficient determination of a buffer size and delay.
- One approach is to calculate the variance of the histogram, specifically the standard deviation around the time when the peak number of packets arrive, and add that variance to the minimum delay experienced by the system. For example, if the variance is 60 ms and the minimum delay is 30 ms, then the buffer begins storing packets at the 30 ms point and continues storing packets for 60 ms. To better correspond to experimental conditions, the variance used to determine the buffer parameters can be a calculated variance derived by multiplying the variance of the histogram by a multiplier (k).
- Another approach is to define the selected histogram portion as the variance around the peak of the histogram. The histogram peak may be calculated by computing the mean, or average delay, of the histogram. In calculating the peak, it is preferred to first eliminate a portion of the histogram tail to avoid having the trailing portion of the histogram excessively skew the calculation. The average is then calculated and associated with the peak. Using the peak, the variance of the histogram may be calculated. Once the peak and variance of the histogram are calculated, the buffer size may be obtained.
- Preferably, the variance used to determine the buffer parameters is a calculated variance derived by multiplying the variance of the histogram by a multiplier (k). For example, to capture packets around the peak, the buffer size should preferably encompass a period=k*variance where k=2, thereby capturing packets within the variance period before the peak and within the variance period after the peak. The buffer initiation point, or minimum delay, is defined as minimum delay=mean−(k/2)*variance. For example, where the variance is 80 ms and the mean is 150 ms, the buffer begins accepting packets at 70 ms and continues accepting for another 160 ms, or up to 230 ms.
- Referring to FIG. 1e, the graph represents
histogram 101 e of a packet stream, specifically a depiction of the number of packets received at different points in time by the system. The x-axis 102 e represents the delay experienced by packets and the y-axis 103 e represents the number of packet samples received. The vertical bars 104 e show the number of packets received in a defined span of time. A curve 105 e connects the central points of the tops of the bars 104 e of the histogram 101 e. The curve 105 e depicts the distribution of the arrival time of packets.
- As shown in FIG. 1e, the mean is 150 ms and the variance is 90 ms.
- With the mean delay and variance having been calculated, the buffer size may be defined as k*Var, where k can be any number, but is preferably in the range of 2 to 8 and more preferably either 2, 4 or 8, and the buffer begins accepting packets at the point defined by
- Initiation Point=M−(k/2)*Var
- In the present example the initiation point equals 60 ms, k=2, and buffer size equals 180 ms. Thus, the buffer accepts packets from 60 ms to 240 ms.
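The calculation above can be expressed as a small helper; this is a sketch, with "variance" meaning the standard deviation, as the text uses the term:

```python
def buffer_params(mean_delay, variance, k=2):
    """Return (initiation point, buffer size) in ms.

    Buffer size = k * variance; the buffer begins accepting packets at
    Initiation Point = mean_delay - (k/2) * variance.
    """
    size = k * variance
    initiation = mean_delay - (k / 2) * variance
    return initiation, size
```

With the FIG. 1e figures (mean 150 ms, variance 90 ms, k=2) this yields an initiation point of 60 ms and a buffer size of 180 ms, matching the example above.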
- Referring to FIG. 1f, the graph represents
histogram 101 f of a packet stream received by a system. The x-axis 102 f represents the delay experienced by packets and the y-axis 103 f represents the number of packet samples received. The vertical bars 104 f show the number of packets received in a defined span of time. A curve 105 f connects the central points of the tops of the bars 104 f of the histogram 101 f. The curve 105 f depicts the distribution of the arrival time of packets. - As previously discussed, to avoid skewing the peak, or mean delay, calculation, the tail is eliminated at a defined
point 106 f, which in this example is 270 ms on the x-axis 102 f. Therefore, the histogram area to the right of point 106 f is discarded. The mean of the curve 107 f may be calculated using the formula M = (Σ xi)/N, where M is the mean, xi represents the amount of delay experienced by packets arriving in a particular window of time i, and N is the total number of samples.
- Rather than determine a single variance for the histogram and utilize that single variance to calculate the buffer size and delay, the preferred embodiment of the invention utilizes at least two separately calculated variances to better estimate the buffer size and delay based upon the estimated histogram. Preferably, to calculate the plurality of variances, the histogram is conceptually divided into two portions, one encompassing the packets arriving prior to the mean delay and one encompassing the packets arriving after the mean delay. Where i packets have been received and the mean delay is associated with packet m, the first histogram portion is defined by D0 to Dm−1 and the second by Dm+1 to Di, the final packet. The variance of D0 to Dm−1, Var1, may be calculated as the standard deviation of those samples about the mean delay M. The variance of Dm+1 to Di, Var2, is calculated analogously, where j extends from m+1 to i and the total number of samples includes those samples from m+1 to i. Although the two separately calculated variances are calculated using one sample set of packets arriving before the mean delay and one sample set of packets arriving after the mean delay, one would appreciate that the variances can be calculated using sample sets that overlap or that, when taken together, comprise a subset of the packets received.
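A sketch of the two-portion calculation follows. This is an illustrative reading rather than the patent's exact formula: each one-sided "variance" is computed as a root-mean-square deviation about the mean, over the samples on one side of it:

```python
def split_variances(delays):
    """Return (mean, var1, var2) for a list of packet delays.

    var1 is the spread of the samples at or below the mean delay; var2 is
    the spread of the samples above it.
    """
    m = sum(delays) / len(delays)
    lower = [d for d in delays if d <= m]
    upper = [d for d in delays if d > m]
    var1 = (sum((m - d) ** 2 for d in lower) / len(lower)) ** 0.5 if lower else 0.0
    var2 = (sum((d - m) ** 2 for d in upper) / len(upper)) ** 0.5 if upper else 0.0
    return m, var1, var2
```

An asymmetric delay distribution yields unequal var1 and var2, which is what the two-variance scheme exploits.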
- Typically, the two variances are not equal because the histogram is asymmetrical. As shown in FIG. 1f,
Var1 115 f is less than Var2 117 f, reflective of the asymmetrical nature of the histogram and better approximating the actual distribution of packets received. This approach therefore ascertains the size and placement of the buffer more accurately while optimizing computational resources. - Optionally, Var1 can be calculated from Var2, or vice versa, using pre-defined equations. As an example, Var1 could be a multiple or factor of Var2, i.e., Var1 = C*Var2, where C is a constant that is determined experimentally. Alternatively, Var1 could be a fixed value depending on whether Var2 exceeds or does not exceed a certain threshold value.
- After the peak and variances are calculated, the buffer size and timing can be determined. The buffer starts accepting packets at delay d, which is determined by subtracting
Var1 115 f from the mean 107 f. - d = M − Var1
- and continues accepting for a period (T) which is the sum of the two variances.
- T = Var1 + Var2
- For example, where Var1 is 60 ms, Var2 is 105 ms and the mean is 150 ms, the buffer starts accepting packets at 90 ms and continues accepting for a period T of 165 ms, or up to 255 ms. The variances used to determine the buffer parameters can also be calculated variances derived by multiplying Var1 and/or Var2 by a multiplier (k), where the multiplier can be any number, but is preferably in the range of 2 to 8, and more preferably 2, 4 or 8.
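The two-variance buffer timing reduces to a pair of one-line formulas (a sketch):

```python
def buffer_window(mean_delay, var1, var2):
    """Buffer start d = M - Var1; acceptance period T = Var1 + Var2 (ms)."""
    return mean_delay - var1, var1 + var2
```

For the example above (Var1 = 60 ms, Var2 = 105 ms, mean = 150 ms) this gives a start of 90 ms and an acceptance period of 165 ms.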
- FIG. 1g depicts a block diagram of an adaptive process used for jitter correction using the above-described buffering method. The system comprises a
sender 101 g and a receiver 102 g, which is comprised of a subtractor 103 g, a delay evaluator 104 g, a playout delay evaluator 106 g, and a playout buffer monitor 107 g. After being properly delayed, the packet is then sent to playout unit 112 g. - Packet i is sent from the
sender 101 g with a timestamp ti and reaches the receiver at time ai. Using the timestamp, the subtractor 103 g subtracts ti from ai to produce the delay ni for packet i. The delay evaluator 104 g analyzes this value and performs one of the aforementioned delay evaluation techniques to generate the distribution of delays that comprises a packet delay histogram. The estimated packet delay histogram is communicated by the delay evaluator 104 g to the playout delay evaluator 106 g which, based upon a portion of the communicated histogram, determines the size and delay of the buffer employed by the playout buffer monitor 107 g. The receiver 102 g, in accordance with the adjusted playout time, outputs packets to the playout unit 112 g for the final playout of the packet. - In an embodiment, upon determining the mean delay and variance(s), delay smoothing is applied to the actual playout of packets by a delay smoother. While the mean delay and variance are used to determine a calculated playout time, delay smoothing further controls changes in playout time to specifically improve voice quality: increases in playout time are applied in larger steps, while decreases in playout time are limited to smaller steps. If the calculated playout time calls for an increase in buffer delay, buffer delay is increased by an amount greater than requested. If the calculated playout time calls for a decrease in buffer delay, buffer delay is decreased by an amount less than requested.
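One possible rendering of the delay-smoothing rule is sketched below; the step factors are assumptions for illustration, the text requiring only that increases overshoot and decreases undershoot the requested change:

```python
def smooth_playout_delay(current, requested, up_gain=1.5, down_gain=0.5):
    """Adjust buffer delay toward 'requested' (both in ms), taking larger
    steps on increases and smaller steps on decreases."""
    delta = requested - current
    gain = up_gain if delta > 0 else down_gain
    return current + gain * delta
```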
- Referring to FIG. 1h, the
playout delay evaluator 100 h and playout buffer monitor 103 h are shown in communication with an output device 114 h and data input 104 h. The playout delay evaluator 100 h preferably comprises a control circuit 101 h and packet delay distribution system 102 h for the calculation of buffer size and delay characteristics. The playout buffer monitor 103 h preferably comprises a packet data storage memory 112 h, buffer control circuit 107 h, delay timer 108 h, pointer list 109 h, input and output controllers 111 h and 113 h, stream parameter block 105 h, and drift control block 106 h. The calculation of the mean delay and variances used to determine the buffer size and delay characteristics may be performed by the delay evaluator or by the playout delay evaluator 100 h, based upon data received from the delay evaluator. - Together with the packet delay distribution system, the
control circuit 101 h manages the calculation and communication of a set of buffer configuration parameters for each data stream and allocates buffer resources for each stream. Control circuit 101 h calculates the buffer size requirements for the stream using the packet size S(p), in bytes, and the packet rate T(r), e.g., one packet every 10 milliseconds. Dividing the buffer delay, BD, by the packet rate T(r) yields the number of packets PS that the buffer needs to accommodate, i.e., the number of packet slots in the buffer 103 h. - PS = BD/T(r)
- The buffer size, S(B), is then the product of packet size S(p) and the number of packet slots PS.
- S(B)=PS*S(p)
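The two allocation formulas can be sketched as a small helper:

```python
def buffer_allocation(buffer_delay_ms, packet_rate_ms, packet_size_bytes):
    """PS = BD / T(r) packet slots; S(B) = PS * S(p) bytes of packet memory."""
    ps = buffer_delay_ms // packet_rate_ms   # number of packet slots
    return ps, ps * packet_size_bytes        # (slots, buffer size in bytes)
```

For a 60 ms buffer delay, one packet every 10 ms, and 160-byte packets, this allocates 6 slots and 960 bytes of buffer memory.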
Control circuit 101 h allocates a block of memory 112 h having S(B) bytes and a pointer list 109 h having PS slots for buffering each stream. Control circuit 101 h also initializes buffer control circuits 107 h for the stream. As shown in FIG. 1h, an input controller 111 h and an output controller 113 h are allocated to the buffer 103 h. Input and output controllers 111 h, 113 h transfer packet data between the data input 104 h or output device 114 h, respectively, and the buffer memory 112 h. Buffer control 107 h contains all the logic circuits necessary to oversee operation of buffer 103 h and provide updated information to control circuit 101 h. -
Buffer control 107 h maintains a packet pointer for each data packet stored in buffer 103 h. Each packet pointer contains the starting address of its respective packet contained in memory 112 h. The pointers are stored by buffer control 107 h in pointer list 109 h, which has a fixed number of slots, equal to PS, for storing packet pointers. Buffer control 107 h manipulates pointer list 109 h as a shift register with PS slots, numbered 0 through PS−1. Slot 0 contains the pointer for the packet that is to be output next. The contents of each slot are shifted into the next adjacent slot towards the output slot 0 at the packet rate, namely, every T(r) seconds. The buffer delay of a packet is determined by the position of its pointer in the pointer list 109 h. A packet whose pointer is in the 3rd slot will experience a buffer delay of 3*T(r) seconds. - As each packet is received by
buffer circuit 103 h, the proper location for storing the packet in the buffer memory 112 h is determined by buffer control circuit 107 h, which passes a packet pointer, i.e., a starting address for the location in the memory where the packet data will be stored, to input circuit 111 h. Input circuit 111 h stores the packet data in memory, starting at the pointer address, as the data is received from network 104 h. - The starting address is also stored as a packet pointer in the
pointer list 109 h at a slot location determined by the buffer control circuit 107 h. The pointers may be placed in the pointer list at slot locations determined by the packet sequence. Thus, if packet i+2 is received after the first packet i, it is placed 2 slots higher in the list than the present location of the pointer for packet i, provided that packet i+2 is not earlier in the sequence than the packet last output by output circuit 113 h. The use of packet sequence information to select slot locations allows out-of-order packets to be re-ordered without moving packet data. -
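A simplified sketch of the pointer-list mechanism follows, with illustrative Python standing in for the hardware circuits: slot placement by sequence number, late-packet discard, and the per-T(r) shift toward slot 0 are modeled, while the actual memory addressing is not:

```python
class PlayoutBuffer:
    """Pointer list as a shift register of PS slots; slot 0 plays next.

    Packets are placed by sequence number, so out-of-order arrivals are
    re-ordered without moving packet data. Packets whose sequence number
    is not higher than the last packet played are discarded as late.
    """

    def __init__(self, slots):
        self.slots = [None] * slots  # pointer list
        self.base_seq = None         # sequence number mapped to slot 0
        self.last_played = -1

    def insert(self, seq, packet):
        if seq <= self.last_played:
            return False             # arrived too late; discard
        if self.base_seq is None:
            self.base_seq = seq
        idx = seq - self.base_seq    # slot chosen from the sequence number
        if 0 <= idx < len(self.slots):
            self.slots[idx] = packet
            return True
        return False                 # outside the buffer window

    def tick(self):
        """Called every T(r) seconds: play slot 0, shift all slots down."""
        pkt = self.slots[0]
        self.slots = self.slots[1:] + [None]
        if self.base_seq is not None:
            self.last_played = self.base_seq
            self.base_seq += 1
        return pkt
```

A packet inserted into the 3rd slot waits three ticks, i.e. 3*T(r) seconds of buffer delay, before being played out.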
Control circuit 101 h checks the sequence number of each packet being received against the sequence number of the packet last output by output circuit 113 h. If the sequence number of the incoming packet is lower than that of the packet last output by the buffer, the packet being received is discarded because it has arrived too late to be output in sequence. Buffer control 107 h maintains a last-played register to keep track of the last packet output for this purpose. - In response to a signal from
timer 108 h, buffer control 107 h sends the pointer contents of the output slot 0 in the pointer list 109 h to output control 113 h, which then moves the packet data stored at the respective memory location to the output device 114 h. With each signal from timer 108 h, buffer control 107 h also shifts each pointer down one slot in the pointer list as described above. Normally, timer 108 h is set to generate a signal at the packet rate, i.e., every T(r) seconds, to ensure that the playout rate for packets is the same as the packet rate. - The packet
delay distribution system 102 h provides information to the control circuit 101 h and buffer control 107 h concerning the delay experienced by packets in the network. Control circuit 101 h may provide feedback to reflect changing network operating characteristics. Control circuit 101 h may also update the buffer characteristics, i.e., buffer size and pointer list, in response to a changing packet delay distribution. - If the rate of incoming packets is faster than the rate at which they are output by the
output device 114 h, buffer overflow will result. Drift control 106 h maintains stream synchronization in the presence of such clock drifts by discarding a packet periodically to prevent buffer overflow. If the receiver clock is faster than the transmitter, drift control circuit 106 h causes a packet to be repeated periodically or outputs a blank or dummy packet so that the output device 114 h always has a packet to process. - The jitter management method and system will be further described in the context of an implementation within an exemplary application.
- Exemplary Application
- The present invention can be used to enable the operation of a novel media gateway. The hardware system architecture of the novel gateway comprises a plurality of distributed processing layer processors, referred to as Media Engines, that are in communication with a data bus and interconnected with a Host Processor or a Packet Engine which, in turn, is in communication with interfaces to networks, preferably an asynchronous transfer mode (ATM) physical device or a gigabit media independent interface (GMII) physical device.
- Referring to FIG. 2a, a first embodiment of the top-level hardware system architecture is shown. A
data bus 205 a is connected to interfaces 210 a existent on a first novel Media Engine Type I 215 a and on a second novel Media Engine Type I 220 a. The first novel Media Engine Type I 215 a and second novel Media Engine Type I 220 a are connected through a second set of communication buses 225 a to a novel Packet Engine 230 a which, in turn, is connected through interfaces 235 a to outputs 240 a, 245 a and to SRAM 246 a and SDRAM 247 a. - It is preferred that the
data bus 205 a be a time-division multiplex (TDM) bus. A TDM bus is a pathway for the transmission of a number of separate voice, fax, modem, video, and/or other data signals simultaneously over a single communication medium. The separate signals are transmitted by interleaving a portion of each signal with each other, thereby enabling one communications channel to handle multiple separate transmissions and avoiding having to dedicate a separate communication channel to each transmission. Existing networks use TDM to transmit data from one communication device to another. It is further preferred that the interfaces 210 a existent on the first novel Media Engine Type I 215 a and second novel Media Engine Type I 220 a comply with H.100, a hardware specification that details the necessary information to implement a CT bus interface at the physical layer for the PCI computer chassis card slot, independent of software specifications. The CT bus defines a single isochronous communications bus across certain PC chassis card slots and allows for the relatively fluid inter-operation of components. It is appreciated that interfaces abiding by different hardware specifications could be used to receive signals from the data bus 205 a. - As described below, each of the two novel Media Engines Type I 215 a, 220 a can support a plurality of channels for processing media, such as voice. The specific number of channels supported is dependent upon the features required, such as the extent of echo cancellation, and the type of codec supported. For codecs having relatively low processing power requirements, such as G.711, each Media Engine Type I 215 a, 220 a can support the processing of around 256 voice channels or more. Each Media Engine Type I 215 a, 220 a is in communication with the
Packet Engine 230 a through a communication bus 225 a, preferably a peripheral component interconnect (PCI) communication bus. A PCI communication bus serves to deliver control information and data transfers between the Media Engine Type I chip 215 a, 220 a and the Packet Engine chip 230 a. Because Media Engine Type I 215 a, 220 a was designed to support the processing of lower data volumes, relative to Media Engine Type II described below, a single PCI communication bus can effectively support the transfer of both control and data between the designated chips. It is appreciated, however, that where data traffic becomes too great, the PCI communication bus must be supplemented with a second inter-chip communication bus. - The
Packet Engine 230 a receives processed data from each of the two Media Engines Type I 215 a, 220 a via the communication bus 225 a. While theoretically able to connect to a plurality of Media Engines Type I, it is preferred that, for this embodiment, the Packet Engine 230 a be in communication with up to two Media Engines Type I 215 a, 220 a. As will be further described below, the Packet Engine 230 a provides cell and packet encapsulation for data channels, at or around 2016 channels in a preferred embodiment, quality of service functions for traffic management, tagging for differentiated services and multi-protocol label switching, and the ability to bridge cell and packet networks. While it is preferred to use the Packet Engine 230 a, it can be replaced with a different host processor, provided that the host processor is capable of performing the above-described functions of the Packet Engine 230 a. - The
Packet Engine 230 a is in communication with an ATM physical device 240 a and a GMII physical device 245 a. The ATM physical device 240 a is capable of receiving processed and packetized data, as passed from the Media Engines Type I 215 a, 220 a through the Packet Engine 230 a, and transmitting it through a network operating in asynchronous transfer mode (an ATM network). As would be appreciated by one of ordinary skill in the art, an ATM network automatically adjusts the network capacity to meet the system needs and can handle voice, modem, fax, video and other data signals. Each ATM data cell, or packet, consists of five octets of header field plus 48 octets of user data. The header contains data that identifies the related cell, a logical address that identifies the routing, header error correction bits, plus bits for priority handling and network management functions. An ATM network is a wideband, low-delay, connection-oriented, packet-like switching and multiplexing network that allows for relatively flexible use of the transmission bandwidth. The GMII physical device 245 a operates under a standard for the receipt and transmission of data at gigabit rates, irrespective of the media type involved. - The embodiment shown in FIG. 2a can deliver voice processing up to Optical Carrier Level 1 (OC-1). OC-1 is designated at 51.840 million bits per second and provides for the direct electrical-to-optical mapping of the synchronous transport signal (STS-1) with frame synchronous scrambling. Higher optical carrier levels are direct multiples of OC-1; for example, OC-3 is three times the rate of OC-1. As shown below, other configurations of the present invention could be used to support voice processing at OC-12.
- Referring now to FIG. 2b, an embodiment supporting data rates up to OC-3 is shown, referred to herein as an OC-3
Tile 200 b. A data bus 205 b is connected to interfaces 210 b existent on a first novel Media Engine Type II 215 b and on a second novel Media Engine Type II 220 b. The first novel Media Engine Type II 215 b and second novel Media Engine Type II 220 b are connected through a second set of communication buses 225 b, 227 b to a novel Packet Engine 230 b which, in turn, is connected through interfaces 260 b, 265 b to outputs 240 b, 245 b and through a PCI target interface 250 b to a Host Processor 255 b. - As previously discussed, it is preferred that the
data bus 205 b be a time-division multiplex (TDM) bus and that the interfaces 210 b existent on the first novel Media Engine Type II 215 b and second novel Media Engine Type II 220 b comply with the H.100 hardware specification. It is again appreciated that interfaces abiding by different hardware specifications could be used to receive signals from the data bus 205 b. - Each of the two novel Media Engines Type II 215 b, 220 b can support a plurality of channels for processing media, such as voice. The specific number of channels supported is dependent upon the features required, such as the extent of echo cancellation, and the type of codec implemented. For codecs having relatively low processing power requirements, such as G.711, and where the extent of echo cancellation required is 128 milliseconds, each Media Engine Type II can support the processing of approximately 2016 channels of voice. With two Media Engines Type II providing the processing power, this configuration is capable of supporting data rates of OC-3. Where the Media Engines Type II 215 b, 220 b are implementing a codec requiring higher processing power, such as G.729A, the number of supported channels decreases. As an example, the number of supported channels decreases from 2016 per Media Engine Type II when supporting G.711 to approximately 672 to 1024 channels when supporting G.729A. To match OC-3, an additional Media Engine Type II can be connected to the
Packet Engine 230 b via the common communication buses 225 b, 227 b. - Each Media Engine Type II 215 b, 220 b is in communication with the
Packet Engine 230 b through communication buses 225 b, 227 b, preferably comprising a PCI communication bus 225 b and a UTOPIA II/POS-II communication bus 227 b. As previously mentioned, where data traffic volumes exceed a certain threshold, the PCI communication bus 225 b must be supplemented with a second communication bus 227 b. Preferably, the second communication bus 227 b is a UTOPIA II/POS-II bus and serves as the data path between Media Engines Type II 215 b, 220 b and the Packet Engine 230 b. A POS (Packet over SONET) bus represents a high-speed means for transmitting data through a direct connection, allowing the passing of data in its native format without the addition of any significant level of overhead in the form of signaling and control information. UTOPIA (Universal Test and Operations Interface for ATM) refers to an electrical interface between the transmission convergence and physical medium dependent sublayers of the physical layer and acts as the interface for devices connecting to an ATM network. - The physical interface is configured to operate in POS-II mode, which allows for variable size data frame transfers. Each packet is transferred using POS-II control signals to explicitly define the start and end of a packet. As shown in FIG. 3, each
packet 300 contains a header 305 with a plurality of information fields and user data 310. Preferably, each header 305 contains information fields including packet type 315 (e.g., RTP, raw encoded voice, AAL2), packet length 320 (the total length of the packet including information fields), and channel identification 325 (identifying the physical channel, namely the TDM slot, for which the packet is intended or from which the packet came). When dealing with encoded data transfers between a Media Engine Type II 215 b, 220 b and Packet Engine 230 b, it is further preferred to include coder/decoder type 330, sequence number 335, and voice activity detection decision 340 in the header 305. - The
Packet Engine 230 b is in communication with the Host Processor 255 b through a PCI target interface 250 b. The Packet Engine 230 b preferably includes a PCI-to-PCI bridge [not shown] between the PCI interface 226 b to the PCI communication bus 225 b and the PCI target interface 250 b. The PCI-to-PCI bridge serves as a link for communicating messages between the Host Processor 255 b and the two Media Engines Type II 215 b, 220 b. - The
novel Packet Engine 230 b receives processed data from each of the two Media Engines Type II 215 b, 220 b via the communication buses 225 b, 227 b. While theoretically able to connect to a plurality of Media Engines Type II, it is preferred that, for this embodiment, the Packet Engine 230 b be in communication with no more than three Media Engines Type II 215 b, 220 b [only two are shown in FIG. 2b]. As with the previously described embodiment, Packet Engine 230 b provides cell and packet encapsulation for data channels, up to 2048 channels when implementing a G.711 codec, quality of service functions for traffic management, tagging for differentiated services and multi-protocol label switching, and the ability to bridge cell and packet networks. The Packet Engine 230 b is in communication with an ATM physical device 240 b and a GMII physical device 245 b through a UTOPIA II/POS II compatible interface 260 b and a GMII compatible interface 265 b, respectively. In addition to the GMII interface 265 b in the physical layer, referred to herein as the PHY GMII interface, the Packet Engine 230 b also preferably has another GMII interface [not shown] in the MAC layer of the network, referred to herein as the MAC GMII interface. MAC is a media specific access control protocol defining the lower half of the data link layer that defines topology dependent access control protocols for industry standard local area network specifications. - As will be further discussed, the
Packet Engine 230 b is designed to enable ATM-IP internetworking. Telecommunication service providers have built independent networks operating on an ATM or IP protocol basis. Enabling ATM-IP internetworking permits service providers to support the delivery of substantially all digital services across a single networking infrastructure, thereby reducing the complexities introduced by having multiple technologies/protocols operative throughout a service provider's entire network. The Packet Engine 230 b is therefore designed to enable a common network infrastructure by providing for the internetworking between ATM modes and IP modes. - More specifically, the
novel Packet Engine 230 b supports the internetworking of ATM AALs (ATM Adaptation Layers) with specific IP protocols. Divided into a convergence sublayer and a segmentation/reassembly sublayer, the AAL accomplishes conversion from the higher layer, native data format and service specifications into the ATM layer. From the data originating source, the process includes segmentation of the original and larger set of data into the size and format of an ATM cell, which comprises 48 octets of data payload and 5 octets of overhead. On the receiving side, the AAL accomplishes reassembly of the data. AAL-1 functions in support of Class A traffic, which is connection-oriented Constant Bit Rate (CBR), time-dependent traffic, such as uncompressed, digitized voice and video, and which is stream-oriented and relatively intolerant of delay. AAL-2 functions in support of Class B traffic, which is connection-oriented Variable Bit Rate (VBR) isochronous traffic requiring relatively precise timing between source and sink, such as compressed voice and video. AAL-5 functions in support of Class C traffic, which is Variable Bit Rate (VBR) delay-tolerant connection-oriented data traffic requiring relatively minimal sequencing or error detection support, such as signaling and control data. - These ATM AALs are internetworked with protocols operative in an IP network, such as RTP, UDP, TCP and IP. Internet Protocol (IP) describes software that tracks the Internet's addresses for different nodes, routes outgoing messages, and recognizes incoming messages, while allowing a data packet to traverse multiple networks from source to destination. Realtime Transport Protocol (RTP) is a standard for streaming realtime multimedia over IP in packets and supports the transport of real-time data, such as interactive voice and video, over packet-switched networks.
Transmission Control Protocol (TCP) is a transport layer, connection oriented, end-to-end protocol that provides relatively reliable, sequenced, and unduplicated delivery of bytes to a remote or a local user. User Datagram Protocol (UDP) provides for the exchange of datagrams without acknowledgements or guaranteed delivery and is a transport layer, connectionless mode protocol. In the preferred embodiment represented in FIG. 2, it is preferred that ATM AAL-1 be internetworked with RTP, UDP, and IP protocols, AAL-2 be internetworked with UDP and IP protocols, and AAL-5 be internetworked with UDP and IP protocols or TCP and IP protocols.
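The AAL segmentation into 53-octet cells described above can be sketched as follows. This is a simplified illustration only: the function name and zero-padding behavior are assumptions, and real AALs insert their own convergence-sublayer fields and trailers, which are omitted here.

```python
def segment_into_cells(payload: bytes, header: bytes = b"\x00" * 5) -> list[bytes]:
    """Split a byte stream into 53-octet ATM cells: 5 octets of
    overhead (header) followed by 48 octets of data payload, padding
    the final cell with zeros."""
    assert len(header) == 5
    cells = []
    for i in range(0, len(payload), 48):
        chunk = payload[i:i + 48].ljust(48, b"\x00")  # pad the last cell
        cells.append(header + chunk)
    return cells

cells = segment_into_cells(bytes(100))
# 100 octets of payload require 3 cells (48 + 48 + 4 padded to 48)
```

On the receiving side, reassembly is the inverse: the 5-octet headers are stripped and the 48-octet payloads concatenated, with the convergence sublayer recovering the original data length.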
- Multiple OC-3 tiles, as presented in FIG. 2b, can be interconnected to form a tile supporting higher data rates. As shown in FIG. 4, four OC-3
tiles 405 can be interconnected, or “daisy chained”, together to form an OC-12 tile 400. Daisy chaining is a method of connecting devices in a series such that signals are passed through the chain from one device to the next. By enabling daisy chaining, the present invention provides for currently unavailable levels of scalability in data volume support and hardware implementation. A Host Processor 455 is connected via communication buses 425, preferably PCI communication buses, to the PCI interface 435 on each of the OC-3 tiles 405. Each OC-3 tile 405 has a TDM interface 460 that operates via a TDM communication bus 465 to receive TDM signals via a TDM interface [not shown]. Each OC-3 tile 405 is further in communication with an ATM physical device 490 through a communication bus 495 connected to the OC-3 tile 405 through a UTOPIA II/POS II interface 470. Data received by an OC-3 tile 405 and not processed, because, for example, the data packet is directed toward a specific packet engine address that was not found in that specific OC-3 tile 405, is sent to the next OC-3 tile 405 in the series via the PHY GMII interface 410 and received by the next OC-3 tile via the MAC GMII interface 413. Enabling daisy chaining eliminates the need for an external aggregator to interface the GMII interfaces on each of the OC-3 tiles in order to enable integration. The final OC-3 tile 405 is in communication with a GMII physical device 417 via the PHY GMII interface 410. - Operating on the above-described hardware architecture embodiments is a plurality of novel, integrated software systems designed to enable media processing, signaling, and packet processing. Referring now to FIG. 5, a logical division of the
software system 500 is shown. The software system 500 is divided into three subsystems, a Media Processing Subsystem 505, a Packetization Subsystem 540, and a Signaling/Management Subsystem 570. Each subsystem comprises a plurality of modules 520 designed to perform different tasks in order to effectuate the processing and transmission of media. It is preferred that the modules 520 be designed to encompass a single core task that is substantially non-divisible. For example, exemplary modules include echo cancellation, codec implementation, scheduling, IP-based packetization, and ATM-based packetization, among others. The nature and functionality of the modules 520 deployed in the present invention will be further described below. - The logical system of FIG. 5 can be physically deployed in a number of ways, depending on processing needs, due, in part, to the novel software architecture, to be described below. As shown in FIG. 6, one physical embodiment of the software system described in FIG. 5 is to be on a
single chip 600, where the media processing block 610, packetization block 620, and management block 630 are all operative on the same chip. If processing needs increase, thereby requiring that more chip power be dedicated to media processing, the software system can be physically implemented such that the media processing block 710 and packetization block 720 operate on a DSP 715 that is in communication via a data bus 770 with the management block 730 that operates on a separate host processor 735, as depicted in FIG. 7. Similarly, if processing needs further increase, the media processing block 810 and packetization block 820 can be implemented on separate DSPs in communication via data buses 870 with each other and with the management block 830 that operates on a separate host processor 835, as depicted in FIG. 8. Within each block, the modules can be physically separated onto different processors to enable a high degree of system scalability. - In an embodiment, four OC-3 tiles are combined onto a single integrated circuit (IC) card wherein each OC-3 tile is configured to perform media processing and packetization tasks. The IC card has four OC-3 tiles in communication via data buses. As previously described, the OC-3 tiles each have three Media Engine II processors in communication via interchip communication buses with a Packet Engine processor. The Packet Engine processor has a MAC and PHY interface by which communications external to the OC-3 tiles are performed. The PHY interface of the first OC-3 tile is in communication with the MAC interface of the second OC-3 tile. Similarly, the PHY interface of the second OC-3 tile is in communication with the MAC interface of the third OC-3 tile and the PHY interface of the third OC-3 tile is in communication with the MAC interface of the fourth OC-3 tile. The MAC interface of the first OC-3 tile is in communication with the PHY interface of a host processor. 
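The daisy-chain forwarding behavior described above, in which a tile either consumes a packet addressed to one of its packet engines or passes it out its PHY GMII interface to the next tile's MAC GMII interface, can be sketched as follows. The class and method names, and the string addresses, are illustrative assumptions, not the actual hardware interface.

```python
class OC3Tile:
    """Minimal model of one OC-3 tile in a daisy chain."""

    def __init__(self, engine_addresses, next_tile=None):
        self.engine_addresses = set(engine_addresses)  # packet engines on this tile
        self.next_tile = next_tile   # PHY GMII -> next tile's MAC GMII
        self.received = []

    def accept(self, packet):
        """Consume the packet if it targets a local packet engine;
        otherwise forward it down the chain."""
        if packet["dest"] in self.engine_addresses:
            self.received.append(packet)
        elif self.next_tile is not None:
            self.next_tile.accept(packet)

# Four tiles daisy chained, as in FIG. 4 (addresses are illustrative).
t4 = OC3Tile({"d"})
t3 = OC3Tile({"c"}, next_tile=t4)
t2 = OC3Tile({"b"}, next_tile=t3)
t1 = OC3Tile({"a"}, next_tile=t2)

t1.accept({"dest": "c"})  # traverses tiles 1 and 2, lands on tile 3
```

The sketch shows why no external aggregator is needed: unmatched traffic simply propagates through the series until a tile claims it or the chain ends.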
Operationally, each Media Engine II processor implements the Media Processing Subsystem of the present invention, shown in FIG. 5 as 505. Each Packet Engine processor implements the Packetization Subsystem of the present invention, shown in FIG. 5 as 540. The host processor implements the Management Subsystem, shown in FIG. 5 as 570.
- The primary components of the top-level hardware system architecture will now be described in further detail, including Media Engine Type I, Media Engine Type II, and Packet Engine. Additionally, the software architecture, along with specific features, will be further described in detail.
- Media Engines
- Both Media Engine I and Media Engine II are types of DPLPs and therefore comprise a layered architecture wherein each layer encodes and decodes up to N channels of voice, fax, modem, or other data depending on the layer configuration. Each layer implements a set of pipelined processing units specially designed through substantially optimal hardware and software partitioning to perform specific media processing functions. The processing units are special-purpose digital signal processors that are each optimized to perform a particular signal processing function or a class of functions. By creating processing units that are capable of performing a well-defined class of functions, such as echo cancellation or codec implementation, and placing them in a pipeline structure, the present invention provides a media processing system and method with substantially greater performance than conventional approaches.
- Referring to FIG. 9, a diagram of Media Engine I 900 is shown. Media Engine I 900 comprises a plurality of
Media Layers 905 each in communication with a central direct memory access (DMA) controller 910 via communication data buses 920. Using a DMA approach enables data to be transferred between a device and system memory directly, bypassing the system processing unit. Each Media Layer 905 further comprises an interface to the DMA 925 interconnected with the communication data buses 920. In turn, the DMA interface 925 is in communication with each of a plurality of pipelined processing units (PUs) 930 via communication data buses 920 and a plurality of program and data memories 940, via communication data buses 920, that are situated between the DMA interface 925 and each of the PUs 930. The program and data memories 940 are also in communication with each of the PUs 930 via data buses 920. Preferably, each PU 930 can access at least one program memory and at least one data memory unit 940. Further, it is also preferred to have at least one first-in, first-out (FIFO) task queue [not shown] to receive scheduled tasks and queue them for operation by the PUs 930. - While the layered architecture of the present invention is not limited to a specific number of Media Layers, certain practical limitations may restrict the number of Media Layers that can be stacked into a single Media Engine I. As the number of Media Layers increases, the memory and device input/output bandwidth may increase to such an extent that the memory requirements, pin count, density, and power consumption are adversely affected and become incompatible with application or economic requirements. Those practical limitations, however, do not represent restrictions on the scope and substance of the present invention.
-
Media Layers 905 are in communication with an interface to the central processing unit 950 (CPU IF) through communication buses 920. The CPU IF 950 transmits and receives control signals and data from an external scheduler 955, the DMA controller 910, a PCI interface (PCI IF) 960, a SRAM interface (SRAM IF) 975, and an interface to an external memory, such as an SDRAM interface (SDRAM IF) 970, through communication buses 920. The PCI IF 960 is preferably used for control signals. The SDRAM IF 970 connects to a synchronized dynamic random access memory module whereby the memory access cycles are synchronized with the CPU clock in order to eliminate wait time associated with memory fetching between random access memory (RAM) and the CPU. In a preferred embodiment, the SDRAM IF 970 that connects the processor with the SDRAM supports 133 MHz synchronous DRAM and asynchronous memory. It supports one bank of SDRAM (64 Mbit/256 Mbit to 256 MB maximum) and 4 asynchronous devices (8/16/32 bit) with a data path of 32 bits and fixed-length as well as undefined-length block transfers, and accommodates back-to-back transfers. Eight transactions may be queued for operation. The SDRAM [not shown] contains the states of the PUs 930. One of ordinary skill in the art would appreciate that, although not preferred, other external memory configurations and types could be selected in place of the SDRAM and, therefore, that another type of memory interface could be used in place of the SDRAM IF 970. - The SDRAM IF 970 is further in communication with the
PCI IF 960, DMA controller 910, the CPU IF 950, and, preferably, the SRAM interface (SRAM IF) 975 through communication buses 920. The SRAM [not shown] is a static random access memory, a form of random access memory that retains data without constant refreshing, offering relatively fast memory access. The SRAM IF 975 is also in communication with a TDM interface (TDM IF) 980, the CPU IF 950, the DMA controller 910, and the PCI IF 960 via data buses 920. - In an embodiment, the TDM IF 980 for the trunk side is preferably H.100/H.110 compatible and the
TDM bus 981 operates at 8.192 MHz. Enabling the Media Engine I 900 to provide 8 data signals, therefore delivering a capacity of up to 512 full duplex channels, the TDM IF 980 has the following preferred features: it is an H.100/H.110 compatible slave; the frame size can be set to 16 or 20 samples, and the scheduler can program the TDM IF 980 to store a specific buffer or frame size; and it provides programmable staggering points for the maximum number of channels. Preferably, the TDM IF interrupts the scheduler after every N samples of the 8,000 Hz clock, with the number N being programmable with possible values of 2, 4, 6, and 8. In a voice application, the TDM IF 980 preferably does not transfer the pulse code modulation (PCM) data to memory on a sample-by-sample basis, but rather buffers 16 or 20 samples of a channel, depending on the frame size which the encoders and decoders are using, and then transfers the voice data for that channel to memory. - The
PCI IF 960 is also in communication with the DMA controller 910 via communication buses 920. External connections comprise connections between the TDM IF 980 and a TDM bus 981, between the SRAM IF 975 and a SRAM bus 976, between the SDRAM IF 970 and a SDRAM bus 971, preferably operating at 32 bit@133 MHz, and between the PCI IF 960 and a PCI 2.1 Bus 961, also preferably operating at 32 bit@133 MHz. - External to Media Engine I, the
scheduler 955 maps the channels to the Media Layers 905 for processing. When the scheduler 955 is processing a new channel, it assigns the channel to one of the layers, depending upon the processing resources available per layer 905. Each layer 905 handles the processing of a plurality of channels such that the processing is performed in parallel and is divided into fixed frames, or portions of data. The scheduler 955 communicates with each Media Layer 905 through the transmission of data, in the form of tasks, to the FIFO task queues, wherein each task is a request to the Media Layer 905 to process a plurality of data portions for a particular channel. It is therefore preferred for the scheduler 955 to initiate the processing of data from a channel by putting a task in a task queue, rather than programming each PU 930 individually. More specifically, it is preferred to have the scheduler 955 initiate the processing of data from a channel by putting a task in the task queue of a particular PU 930 and having the Media Layer's 905 pipeline architecture manage the data flow to subsequent PUs 930. - The
scheduler 955 should manage the rate at which each of the channels is processed. In an embodiment where the Media Layer 905 is required to accept the processing of data from M channels and each of the channels uses a frame size of T msec, it is preferred that the scheduler 955 process one frame of each of the M channels within each T msec interval. Further, in a preferred embodiment, the scheduling is based upon periodic interrupts, in the form of units of samples, from the TDM IF 980. As an example, if the interrupt period is 2 samples, then it is preferred that the TDM IF 980 interrupt the scheduler every time it gathers two new samples of all channels. The scheduler preferably maintains a ‘tick-count’, which is incremented on every interrupt and reset to 0 when time equal to a frame size has passed. The mapping of channels to time slots is preferably not fixed. For example, in voice applications, whenever a call starts on a channel, the scheduler dynamically assigns a layer to a provisioned time slot channel. It is further preferred that the data transfer from a TDM buffer to the memory be aligned with the time slot in which this data is processed, thereby staggering the data transfer for different channels from TDM to memory, and vice-versa, in a manner that is equivalent to the staggering of the processing of different channels. Consequently, it is further preferred that the TDM IF 980 maintain a tick count variable wherein there is some synchronization between the tick counts of the TDM and the scheduler 955. In the exemplary embodiment described above, the tick count variable is set to zero every 2 ms or 2.5 ms depending on the buffer size. - Referring to FIG. 10, a block diagram of
Media Engine II 1000 is shown. Media Engine II 1000 comprises a plurality of Media Layers 1005, each in communication with a processing layer controller 1007, referred to herein as a Media Layer Controller 1007, and a central direct memory access (DMA) controller 1010 via communication data buses and an interface 1015. Each Media Layer 1005 is in communication with a CPU interface 1006 which, in turn, is in communication with a CPU 1004. Within each Media Layer 1005, a plurality of pipelined processing units (PUs) 1030 are in communication with a plurality of program memories 1035 and data memories 1040, via communication data buses. Preferably, each PU 1030 can access at least one program memory 1035 and one data memory 1040. Each of the PUs 1030, program memories 1035, and data memories 1040 is in communication with an external memory 1047 via the Media Layer Controller 1007 and DMA 1010. In a preferred embodiment, each Media Layer 1005 comprises four PUs 1030, each of which is in communication with a single program memory 1035 and data memory 1040, and each of the PUs 1030 is in communication with the other PUs 1030 in the Media Layer 1005.
program memory 1005 a, preferably 512×64, operates in conjunction with a controller 1010 a and data memory 1015 a to deliver data and instructions to a data register file 1017 a, preferably 16×32, and an address register file 1020 a, preferably 4×12. The data register file 1017 a and address register file 1020 a are in communication with functional units such as an adder/MAC 1025 a, logical unit 1027 a, and barrel shifter 1030 a, and with units such as a request arbitration logic unit 1033 a and DMA channel bank 1035 a. - Referring back to FIG. 10, the
MLC 1007 arbitrates data and program code transfer requests to and from the program memories 1035 and data memories 1040 in a round robin fashion. On the basis of this arbitration, the MLC 1007 fills the data pathways that define how units directly access memory, namely the DMA channels [not shown]. The MLC 1007 is capable of performing instruction decoding to route an instruction according to its dataflow and of keeping track of the request states for all PUs 1030, such as the state of a read-in request, a write-back request, and an instruction forwarding. The MLC 1007 is further capable of conducting interface-related functions, such as programming DMA channels, starting signal generation, maintaining page states for PUs 1030 in each Media Layer 1005, decoding of scheduler instructions, and managing the movement of data from and into the task queues of each PU 1030. By performing the aforementioned functions, the Media Layer Controller 1007 substantially eliminates the need for associating complex state machines with the PUs 1030 present in each Media Layer 1005. - The
DMA controller 1010 is a multi-channel DMA unit for handling the data transfers between the local memory buffers of the PUs and external memories, such as the SDRAM. Preferably, DMA channels are programmed dynamically. More specifically, PUs 1030 generate independent requests, each having an associated priority level, and send them to the MLC 1007 for reading or writing. Based upon the priority request delivered by a particular PU 1030, the MLC 1007 programs the DMA channel accordingly. Preferably, there is also an arbitration process, such as a single level of round robin arbitration, between the channels within the DMA to access the external memory. The DMA Controller 1010 provides hardware support for round robin request arbitration across the PUs 1030 and Media Layers 1005. - In an exemplary operation, it is preferred to conduct transfers between local PU memories and external memories by utilizing the address of the local memory, the address of the external memory, the size of the transfer, the direction of the transfer, namely whether the DMA channel is transferring data to the local memory from the external memory or vice-versa, and how many transfers are required for each PU. In this preferred embodiment, a DMA channel is generated and receives this information from two 32-bit registers residing in the DMA. A third register exchanges control information between the DMA and each PU and contains the current status of the DMA transfer. In a preferred embodiment, arbitration is performed among the following requests: 1 structure read, 4 data read, and 4 data write requests from each Media Layer, approximately 90 data requests in total, and 4 program code fetch requests from each Media Layer, approximately 40 program code fetch requests in total. The
DMA Controller 1010 is preferably further capable of arbitrating priority for program code fetch requests, conducting linked list traversal and DMA channel information generation, and performing DMA channel prefetch and done signal generation. - The
MLC 1007 and DMA Controller 1010 are in communication with a CPU IF 1006 through communication buses. The PCI IF 1060 is in communication with an external memory interface (such as a SDRAM IF) 1070 and with the CPU IF 1006 via communication buses. The external memory interface 1070 is further in communication with the MLC 1007, DMA Controller 1010, and a TDM IF 1080 through communication buses. The SDRAM IF 1070 is in communication with a packet processor interface, such as a UTOPIA II/POS compatible interface (U2/POS IF) 1090, via communication data buses. The U2/POS IF 1090 is also preferably in communication with the CPU IF 1006. Although the preferred embodiments of the PCI IF and SDRAM IF are similar to Media Engine I, it is preferred that the TDM IF 1080 have all 32 serial data signals implemented, thereby supporting at least 2048 full duplex channels. External connections comprise connections between the TDM IF 1080 and a TDM bus 1081, between the external memory 1070 and a memory bus 1071, preferably operating at 64 bit@133 MHz, between the PCI IF 1060 and a PCI 2.1 Bus 1061, also preferably operating at 32 bit@133 MHz, and between the U2/POS IF 1090 and a UTOPIA II/POS connection 1091, preferably operative at 622 megabits per second. In a preferred embodiment, the TDM IF 1080 for the trunk side is preferably H.100/H.110 compatible and the TDM bus 1081 operates at 8.192 MHz, as previously discussed in relation to the Media Engine I. - For both Media Engine I and Media Engine II, within each media layer, the present invention utilizes a plurality of pipelined PUs specially designed for conducting a defined set of processing tasks. In that regard, the PUs are not general purpose processors and cannot be used to conduct any processing task. A survey and analysis of specific processing tasks yielded certain functional unit commonalities that, when combined, yield a specialized PU capable of optimally processing the universe of those specialized processing tasks. 
The instruction set architecture of each PU yields compact code. Increased code density results in a decrease in required memory and, consequently, a decrease in required area, power, and memory traffic.
- The pipeline architecture also improves performance. Pipelining is an implementation technique whereby multiple instructions are overlapped in execution. In a computer pipeline, each step in the pipeline completes a part of an instruction. Like an assembly line, different steps complete different parts of different instructions in parallel. Each of these steps is called a pipe stage or a pipe segment. Each stage is connected to the next to form a pipe. Within a processor, instructions enter the pipe at one end, progress through the stages, and exit at the other end. The throughput of an instruction pipeline is determined by how often an instruction exits the pipeline.
- More specifically, one type of PU (referred to herein as EC PU) has been specially designed to perform, in a pipeline architecture, a plurality of media processing functions, such as echo cancellation (EC), voice activity detection (VAD), and tone signaling (TS) functions. Echo cancellation removes from a signal echoes that may arise as a result of the reflection and/or retransmission of modified input signals back to the originator of the input signals. Commonly, echoes occur when signals that were emitted from a loudspeaker are then received and retransmitted through a microphone (acoustic echo) or when reflections of a far end signal are generated in the course of transmission along hybrid wires (line echo). Although undesirable, echo is tolerable in a telephone system, provided that the time delay in the echo path is relatively short. However, longer echo delays can be distracting or confusing to a far end speaker. Voice activity detection determines whether a meaningful signal or noise is present at the input. Tone signaling comprises the processing of supervisory, address, and alerting signals over a circuit or network by means of tones. Supervisory signals monitor the status of a line or circuit to determine if it is busy, idle, or requesting service. Alerting signals indicate the arrival of an incoming call. Addressing signals comprise routing and destination information.
- The LEC, VAD, and TS functions can be efficiently executed using a PU having several single-cycle multiply and accumulate (MAC) units operating with an Address Generation Unit and an Instruction Decoder. Each MAC unit includes a compressor, sum and carry registers, an adder, and a saturation and rounding logic unit. In a preferred embodiment, shown in FIG. 11, this
PU 1100 comprises a load store architecture with a single Address Generation Unit (AGU) 1105, supporting zero-overhead looping and branching with delay slots, and an Instruction Decoder 1106. The plurality of MAC units 1110 operate in parallel on two 16-bit operands and perform the following function: - Acc+=a*b
- Guard bits are appended to the sum and carry registers to facilitate repeated MAC operations. A scale unit prevents accumulator overflow. Each MAC unit 1110 may be programmed to perform round operations automatically. Additionally, it is preferred to have an addition/subtraction unit [not shown] implemented as a conditional sum adder with both input operands being 20-bit values and the output operand being a 16-bit value. - Operationally, the EC PU performs tasks in a pipeline fashion. A first pipeline stage comprises an instruction fetch wherein instructions are fetched into an instruction register from program memory. A second pipeline stage comprises an instruction decode and operand fetch wherein an instruction is decoded and stored in a decode register. The hardware loop machine is initialized in this cycle. Operands from the data register files are stored in operand registers. The AGU operates during this cycle. The address is placed on the data memory address bus. In the case of a store operation, data is also placed on the data memory data bus. For post-increment or post-decrement instructions, the address is incremented or decremented after being placed on the address bus. The result is written back to the address register file. The third pipeline stage, the Execute stage, comprises the operation on the fetched operands by the Addition/Subtraction Unit and MAC units. The status register is updated and the computed result or data loaded from memory is stored in the data/address register files. The states and history information required for the EC PU operations are fetched through a multi-channel DMA interface, as previously shown in each Media Layer. The EC PU configures the DMA controller registers directly. The EC PU loads the DMA chain pointer with the memory location of the head of the chain link.
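The Acc += a*b operation with guard bits and saturation described above can be sketched in integer arithmetic as follows. The bit widths follow the text (16-bit operands, a 32-bit read-out with guard-bit headroom during accumulation), but the helper names are assumptions and the model ignores rounding:

```python
def saturate(value: int, bits: int = 32) -> int:
    """Clamp a value to the signed range of the given width, as the
    saturation logic does to prevent accumulator overflow."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))

def mac(acc: int, a: int, b: int) -> int:
    """Single multiply-accumulate step: Acc += a*b on 16-bit operands.
    The accumulator keeps extra headroom (guard bits) so that repeated
    MACs do not overflow before read-out."""
    return acc + a * b

acc = 0
for a, b in [(30000, 30000)] * 4:   # repeated MACs exceed 32 bits
    acc = mac(acc, a, b)            # guard bits hold the excess
result = saturate(acc)              # read-out saturates to 0x7FFFFFFF
```

The design point this illustrates is that guard bits defer saturation: four products of 900,000,000 sum past the 32-bit signed limit, yet no intermediate result is lost, and only the final read-out is clamped.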
- By enabling different data streams to move through the pipelined stages concurrently, the EC PU reduces wait time for processing incoming media, such as voice. Referring to FIG. 12, in
time slot 1 1205, an instruction fetch task (IF) is performed for processing data from channel 1 1250. In time slot 2 1206, the IF task is performed for processing data from channel 2 1255 while, concurrently, an instruction decode and operand fetch (IDOF) is performed for processing data from channel 1 1250. In time slot 3 1207, an IF task is performed for processing data from channel 3 1260 while, concurrently, an instruction decode and operand fetch (IDOF) is performed for processing data from channel 2 1255 and an Execute (EX) task is performed for processing data from channel 1 1250. One of ordinary skill in the art would appreciate that, because channels are dynamically generated, the channel numbering may not reflect the actual location and assignment of a task. Channel numbering here is used simply to indicate the concept of pipelining across multiple channels and not to represent actual task locations. - A second type of PU (referred to herein as CODEC PU) has been specially designed to perform, in a pipeline architecture, a plurality of media processing functions, such as encoding and decoding signals in accordance with certain standards and protocols, including standards promoted by the International Telecommunication Union (ITU), such as voice standards, including G.711, G.723.1, G.726, G.728, G.729A/B/E, and data modem standards, including V.17, V.34, and V.90, among others (referred to herein as Codecs), and performing comfort noise generation (CNG) and discontinuous transmission (DTX) functions. The various Codecs are used to encode and decode voice signals with differing degrees of complexity and resulting quality. CNG is the generation of background noise that gives users a sense that the connection is live and not broken. A DTX function is implemented when the frame being received comprises silence, rather than a voice transmission.
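The three-stage overlap of FIG. 12 (IF, IDOF, and EX staggered across channels) can be sketched as a schedule. As with the figure itself, this is purely illustrative of the pipelining concept and does not reflect actual task locations:

```python
STAGES = ["IF", "IDOF", "EX"]  # the EC PU's three pipeline stages

def pipeline_schedule(num_channels: int):
    """Return {time_slot: [(channel, stage), ...]} for a fully
    overlapped pipeline in which channel c enters the pipe at slot c."""
    schedule = {}
    for ch in range(1, num_channels + 1):
        for stage_idx, stage in enumerate(STAGES):
            slot = ch + stage_idx
            schedule.setdefault(slot, []).append((ch, stage))
    return schedule

sched = pipeline_schedule(3)
# In slot 3 all three stages are busy: EX(channel 1), IDOF(channel 2), IF(channel 3)
```

Once the pipe is full, one channel's frame completes a stage every slot, which is what keeps per-channel wait time low as the channel count grows.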
- The Codecs, CNG, and DTX functions can be efficiently executed using a PU having an Arithmetic and Logic Unit (ALU), MAC unit, Barrel Shifter, and Normalization Unit. In a preferred embodiment, shown in FIG. 13, the
CODEC PU 1300 comprises a load store architecture with a single Address Generation Unit (AGU) 1305, supporting zero-overhead looping and zero-overhead branching with delay slots, and an Instruction Decoder 1306. - In an exemplary embodiment, each
MAC unit 1310 includes a compressor, sum and carry registers, an adder, and a saturation and rounding logic unit. The MAC unit 1310 is implemented as a compressor with feedback into the compression tree for accumulation. One preferred embodiment of a MAC 1310 has a latency of approximately 2 cycles with a throughput of 1 cycle. The MAC 1310 operates on two 17-bit operands, signed or unsigned. The intermediate results are kept in sum and carry registers. Guard bits are appended to the sum and carry registers for repeated MAC operations. The saturation logic converts the Sum and Carry results to 32-bit values. The rounding logic rounds a 32-bit number to a 16-bit number. Division logic is also implemented in the MAC unit 1310. - In an exemplary embodiment, the
ALU 1320 includes a 32-bit adder and a 32-bit logic circuit capable of performing a plurality of operations, including add, add with carry, subtract, subtract with borrow, negate, AND, OR, XOR, and NOT. One of the inputs to the ALU 1320 has an XOR array, which operates on 32-bit operands. Comprising an absolute unit, a logic unit, and an addition/subtraction unit, the ALU 1320 drives this array with its absolute unit. Depending on the output of the absolute unit, the input operand is XORed with either one or zero to perform negation on the input operands. - In an exemplary embodiment, the
Barrel Shifter 1330 is placed in series with the ALU 1320 and acts as a pre-shifter for operands requiring a shift operation followed by any ALU operations. One type of preferred Barrel Shifter can perform a maximum of 9-bit left or 26-bit right arithmetic shifts on 16-bit or 32-bit operands. The output of the Barrel Shifter is a 32-bit value, which is accessible to both inputs of the ALU 1320. - In an exemplary embodiment, the
Normalization unit 1340 counts the redundant sign bits in the number. It operates on 2's complement 16-bit numbers. Negative numbers are inverted to compute the redundant sign bits. The number to be normalized is fed into the XOR array. The other input comes from the sign bit of the number. Where the media being processed is voice, it is preferred to have an interface to the EC PU. The EC PU uses VAD to determine whether a frame being received comprises silence or speech. The VAD decision is preferably communicated to the CODEC PU so that it may determine whether to implement a Codec or DTX function. - Operationally, the CODEC PU performs tasks in a pipeline fashion. A first pipeline stage comprises an instruction fetch wherein instructions are fetched into an instruction register from program memory. At the same time, the next program counter value is computed and stored in the program counter. In addition, loop and branch decisions are taken in the same cycle. A second pipeline stage comprises an instruction decode and operand fetch wherein an instruction is decoded and stored in a decode register. The instruction decode, register read and branch decisions happen in the instruction decode stage. In the third pipeline stage, the Execute1 stage, the Barrel Shifter and the MAC compressor tree complete their computation. Addresses to data memory are also applied in this stage. In the fourth pipeline stage, the Execute 2 stage, the ALU, normalization unit, and the MAC adder complete their computation. Register write-back and address registers are updated at the end of the Execute-2 stage. The states and history information required for the CODEC PU operations are fetched through a multi-channel DMA interface, as previously shown in each Media Layer.
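The normalization unit's count of redundant sign bits in a 16-bit two's-complement number can be sketched as follows. Negative numbers are inverted first, matching the XOR-array description above; the function name is an assumption:

```python
def redundant_sign_bits(x: int, width: int = 16) -> int:
    """Count leading bits identical to the sign bit, excluding the
    sign bit itself - i.e., the left shift needed to normalize x."""
    if x < 0:
        x = ~x  # invert negatives, as the XOR array does
    count = 0
    for bit in range(width - 2, -1, -1):  # scan bits below the sign bit
        if (x >> bit) & 1:
            break
        count += 1
    return count

# 0x0001 has 14 redundant sign bits; 0x4000 has none; -1 has 15
```

This count is exactly the shift amount a codec needs to normalize an intermediate result into the barrel shifter's working range.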
- By enabling different data streams to move through the pipelined stages concurrently, the CODEC PU reduces wait time for processing incoming media, such as voice. Referring to FIG. 13a, in
time slot 1 1305 a, an instruction fetch task (IF) is performed for processing data from channel 1 1350 a. In time slot 2 1306 a, the IF task is performed for processing data from channel 2 1355 a while, concurrently, an instruction decode and operand fetch (IDOF) is performed for processing data from channel 1 1350 a. In time slot 3 1307 a, an IF task is performed for processing data from channel 3 1360 a while, concurrently, an IDOF is performed for processing data from channel 2 1355 a and an Execute 1 (EX1) task is performed for processing data from channel 1 1350 a. In time slot 4 1308 a, an IF task is performed for processing data from channel 4 1370 a while, concurrently, an IDOF is performed for processing data from channel 3 1360 a, an EX1 task is performed for processing data from channel 2 1355 a, and an Execute 2 (EX2) task is performed for processing data from channel 1 1350 a. One of ordinary skill in the art would appreciate that, because channels are dynamically generated, the channel numbering may not reflect the actual location and assignment of a task. Channel numbering here is used simply to illustrate the concept of pipelining across multiple channels, not to represent actual task locations. - The pipeline architecture of the present invention is not limited to instruction processing within PUs, but also exists at a PU-to-PU architecture level. As shown in FIG. 13b, multiple PUs may operate on a data set N in a pipeline fashion to complete the processing of a plurality of tasks, where each task comprises a plurality of steps. A
first PU 1305 b may be capable of performing echo cancellation functions, labeled task A. A second PU 1310 b may be capable of performing tone signaling functions, labeled task B. A third PU 1315 b may be capable of performing a first set of encoding functions, labeled task C. A fourth PU 1320 b may be capable of performing a second set of encoding functions, labeled task D. In time slot 1 1350 b, the first PU 1305 b performs task A1 1380 b on data set N. In time slot 2 1355 b, the first PU 1305 b performs task A2 1381 b on data set N and the second PU 1310 b performs task B1 1387 b on data set N. In time slot 3 1360 b, the first PU 1305 b performs task A3 1382 b on data set N, the second PU 1310 b performs task B2 1388 b on data set N, and the third PU 1315 b performs task C1 1394 b on data set N. In time slot 4 1365 b, the first PU 1305 b performs task A4 1383 b on data set N, the second PU 1310 b performs task B3 1389 b on data set N, the third PU 1315 b performs task C2 1395 b on data set N, and the fourth PU 1320 b performs task D1 1330 on data set N. In time slot 5 1370 b, the first PU 1305 b performs task A5 1384 b on data set N, the second PU 1310 b performs task B4 1390 b on data set N, the third PU 1315 b performs task C3 1396 b on data set N, and the fourth PU 1320 b performs task D2 1331 on data set N. In time slot 6 1375 b, the first PU 1305 b performs task A6 1385 b on data set N, the second PU 1310 b performs task B5 1391 b on data set N, the third PU 1315 b performs task C4 1397 b on data set N, and the fourth PU 1320 b performs task D3 1332 on data set N. One of ordinary skill in the art would appreciate how the pipeline processing would further progress. - In this exemplary embodiment, the combination of specialized PUs with a pipeline architecture enables the processing of a greater number of channels on a single media layer. 
Where each channel implements a G.711 codec and 128 ms of echo tail cancellation with DTMF detection/generation, voice activity detection (VAD), comfort noise generation (CNG), and call discrimination, the media engine layer operates at 1.95 MHz per channel. The resulting power consumption is at or about 6 mW per channel using 0.13μ standard cell technology.
- Packet Engine
- The Packet Engine of the present invention is a communications processor that, in a preferred embodiment, supports the plurality of interfaces and protocols used in media gateway processing systems between circuit-switched networks, packet-based IP networks, and cell-based ATM networks. The Packet Engine comprises a unique architecture capable of providing a plurality of functions for enabling media processing, including, but not limited to, cell and packet encapsulation; quality of service functions for traffic management, tagging for the delivery of other services, and multi-protocol label switching; and the ability to bridge cell and packet networks.
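As one concrete instance of the packet-encapsulation role described above, the sketch below packs the fixed 12-byte RTP header defined in RFC 3550. The helper name and field values are illustrative assumptions; the actual Packet Engine encapsulation logic is not specified in the text.

```python
# Illustrative packing of a 12-byte RTP header (RFC 3550 layout) —
# a small example of the encapsulation work a packet engine performs.
# Helper name and field values are hypothetical.
import struct

def pack_rtp_header(pt, seq, timestamp, ssrc, marker=0):
    b0 = (2 << 6)                     # version 2, no padding/extension/CSRC
    b1 = (marker << 7) | (pt & 0x7F)  # marker bit plus payload type
    return struct.pack("!BBHII", b0, b1, seq, timestamp, ssrc)

hdr = pack_rtp_header(pt=0, seq=1, timestamp=160, ssrc=0x1234)  # PT 0 = G.711 mu-law
print(len(hdr), hdr.hex())
```

The 12-byte result would then be prepended to the encoded media payload before UDP/IP encapsulation.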
- Referring now to FIG. 14, an exemplary architecture of the
Packet Engine 1400 is provided. In the embodiment depicted, the Packet Engine 1400 is configured to handle data rates up to and around OC-12. It is appreciated by one of ordinary skill in the art that certain modifications can be made to the fundamental architecture to increase the data handling rates beyond OC-12. The Packet Engine 1400 comprises a plurality of processors 1405, a host processor 1430, an ATM engine 1440, an in-bound DMA channel 1450, an out-bound DMA channel 1455, a plurality of network interfaces 1460, a plurality of registers 1470, memory 1480, an interface to external memory 1490, and a means to receive control and signaling information 1495. - The
processors 1405 comprise an internal cache 1407, central processing unit interface 1409, and data memory 1411. In a preferred embodiment, the processors 1405 comprise 32-bit reduced instruction set computing (RISC) processors with a 16 Kb instruction cache and a 12 Kb local memory. The central processing unit interface 1409 permits the processor 1405 to communicate with other memories internal to, and external to, the Packet Engine 1400. The processors 1405 are preferably capable of handling both in-bound and out-bound communication traffic. In a preferred implementation, generally half of the processors handle in-bound traffic while the other half handle out-bound traffic. The memory 1411 in the processor 1405 is preferably divided into a plurality of banks such that distinct elements of the Packet Engine 1400 can access the memory 1411 independently and without contention, thereby increasing overall throughput. In a preferred embodiment, the memory is divided into three banks, such that the in-bound DMA channel can write to memory bank one while the processor is processing data from memory bank two and the out-bound DMA channel is transferring processed packets from memory bank three. - The
ATM engine 1440 comprises two primary subcomponents, referred to herein as the ATMRx Engine and the ATMTx Engine. The ATMRx Engine processes an incoming ATM cell header and transfers the cell for processing under the corresponding AAL protocol, namely AAL1, AAL2, or AAL5, in the internal memory or to another cell manager, if external to the system. The ATMTx Engine processes outgoing ATM cells and requests the outbound DMA channel to transfer data to a particular interface, such as the UTOPIAII/POSII interface. Preferably, it has separate blocks of local memory for data exchange. The ATM engine 1440 operates in combination with data memory 1483 to map an AAL channel, namely AAL2, to a corresponding channel on the TDM bus (where the Packet Engine 1400 is connected to a Media Engine) or to a corresponding IP channel identifier where internetworking between IP and ATM systems is required. The internal memory 1480 utilizes an independent block to maintain a plurality of tables for comparing and/or relating channel identifiers with virtual path identifiers (VPI), virtual channel identifiers (VCI), and compatibility identifiers (CID). A VPI is an eight-bit field in the ATM cell header which indicates the virtual path over which the cell should be routed. A VCI is the address or label of a virtual channel, comprised of a unique numerical tag defined by a 16-bit field in the ATM cell header, that identifies the virtual channel over which a stream of cells is to travel during the course of a session between devices. The plurality of tables are preferably updated by the host processor 1430 and are shared by the ATMRx and ATMTx engines. - The
host processor 1430 is preferably a RISC processor with an instruction cache 1431. The host processor 1430 communicates with other hardware blocks through a CPU interface 1432, which is capable of managing communications with Media Engines over a bus, such as a PCI bus, and with a host, such as a signaling host, through a PCI-PCI bridge. The host processor 1430 is capable of being interrupted by other processors 1405 through their transmission of interrupts, which are handled by an interrupt handler 1433 in the CPU interface. It is further preferred that the host processor 1430 be capable of performing the following functions: 1) boot-up processing, including loading code from a flash memory to an external memory and starting execution, initializing interfaces and internal registers, acting as a PCI host and appropriately configuring them, and setting up inter-processor communications between a signaling host, the packet engine itself, and media engines; 2) DMA configuration; 3) certain network management functions; 4) handling exceptions, such as the resolution of unknown addresses, fragmented packets, or packets with invalid headers; 5) providing intermediate storage of tables during system shutdown; 6) IP stack implementation; and 7) providing a message-based interface for users external to the packet engine and for communicating with the packet engine through the control and signaling means, among others. - In an embodiment, two DMA channels are provided for data exchange between different memory blocks via data buses. Referring to FIG. 14, the in-bound
DMA channel 1450 is utilized to handle incoming traffic to the Packet Engine 1400 data processing elements and the out-bound DMA channel 1455 is utilized to handle outgoing traffic to the plurality of network interfaces 1460. The in-bound DMA channel 1450 handles all of the data coming into the Packet Engine 1400. - To receive and transmit data to ATM and IP networks, the
Packet Engine 1400 has a plurality of network interfaces 1460 that permit the Packet Engine to compatibly communicate over networks. Referring to FIG. 15, in a preferred embodiment, the network interfaces comprise a GMII PHY interface 1562, a GMII MAC interface 1564, and two UTOPIAII/POSII interfaces 1566 in communication with 622 Mbps ATM/SONET connections 1568 to receive and transmit data. For IP-based traffic, the Packet Engine [not shown] supports MAC and emulates PHY layers of the Ethernet interface as specified in IEEE 802.3. The gigabit Ethernet MAC 1570 comprises FIFOs 1503 and a control state machine 1525. The transmit and receive FIFOs 1503 are provided for data exchange between the gigabit Ethernet MAC 1570 and bus channel interface 1505. The bus channel interface 1505 is in communication with the outbound DMA channel 1515 and in-bound DMA channel 1520 through the bus channel. When IP data is being received from the GMII MAC interface 1564, the MAC 1570 preferably sends a request to the DMA 1520 for data movement. Upon receiving the request, the DMA 1520 preferably checks the task queue [not shown] in the MAC interface 1564 and transfers the queued packets. In a preferred embodiment, the task queue in the MAC interface is a set of 64-bit registers containing a data structure comprising: length of data, source address, and destination address. Where the DMA 1520 is maintaining the write pointers for the plurality of destinations [not shown], the destination address will not be used. The DMA 1520 will move the data over the bus channel to memories located within the processors and will write the number of tasks at a predefined memory location. After writing all tasks, the DMA 1520 will write the total number of tasks transferred to the memory page. The processor will process the received data and will write a task queue for an outbound channel of the DMA.
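The task-queue entry just described (length of data, source address, destination address) can be sketched as a packed 64-bit descriptor. The field widths used below (16-bit length, 24-bit source, 24-bit destination) are illustrative assumptions; the text specifies only the three fields, not their sizes.

```python
# Hypothetical packing of one task-queue entry into a 64-bit word.
# Assumed widths: 16-bit length, 24-bit source, 24-bit destination
# (the text names the fields but does not give widths).

def pack_task(length, src, dst):
    assert length < (1 << 16) and src < (1 << 24) and dst < (1 << 24)
    return (length << 48) | (src << 24) | dst

def unpack_task(word):
    return {"length": word >> 48,
            "src": (word >> 24) & 0xFFFFFF,
            "dst": word & 0xFFFFFF}

entry = pack_task(1500, 0x01A000, 0x02B000)
print(hex(entry), unpack_task(entry))
```

A DMA engine reading such a register would recover all three fields from a single 64-bit read.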
The outbound DMA channel 1515 will check the number of frames present in the memory locations and, after reading the task queue, will move the data either to a POSII interface of the Media Engine Type I or II or to an external memory location where IP to ATM bridging is being performed. - For ATM only or ATM and IP traffic in combination, the Packet Engine supports two configurable UTOPIAII/
POSII interfaces 1566, which provide an interface between the PHY and upper layers for IP/ATM traffic. The UTOPIAII/POSII 1580 comprises FIFOs 1504 and a control state machine 1526. The transmit and receive FIFOs 1504 are provided for data exchange between the UTOPIAII/POSII 1580 and bus channel interface 1506. The bus channel interface 1506 is in communication with the outbound DMA channel 1515 and in-bound DMA channel 1520 through the bus channel. The UTOPIA II/POS II interfaces 1566 may be configured in either UTOPIA level II or POS level II modes. When data is received on the UTOPIAII/POSII interface 1566, the data will push existing tasks in the task queue forward and request the DMA 1520 to move the data. The DMA 1520 will read the task queue from the UTOPIAII/POSII interface 1566, which contains a data structure comprising: length of data, source address, and type of interface. Depending upon the type of interface, e.g. either POS or UTOPIA, the in-bound DMA channel 1520 will send the data either to the plurality of processors [not shown] or to the ATMRx engine [not shown]. After data is written into the ATMRx memory, it is processed by the ATM engine and passed to the corresponding AAL layer. On the transmit side, data is moved to the internal memory of the ATMTx engine [not shown] by the respective AAL layer. The ATMTx engine inserts the desired ATM header at the beginning of the cell and will request the outbound DMA channel 1515 to move the data to the UTOPIAII/POSII interface 1566 having a task queue with the following data structure: length of data and source address. - Referring to FIG. 16, to facilitate control and signaling functions, the
Packet Engine 1600 has a plurality of PCI interfaces. A signaling host 1610, through an initiator 1612, sends messages to be received by the Packet Engine 1600 to a PCI target 1605 via a communication bus 1617. The PCI target further communicates these messages through a PCI to PCI bridge 1620 to a PCI initiator 1606. The PCI initiator 1606 sends messages through a communication bus 1618 to a plurality of Media Engines 1650, each having a memory 1660 with a memory queue 1665. - Software Architecture
- As previously discussed, operating on the above-described hardware architecture embodiments is a plurality of novel, integrated software systems designed to enable media processing, signaling, and packet processing. The novel software architecture enables the logical system, presented in FIG. 5, to be physically deployed in a number of ways, depending on processing needs.
- Communication between any two modules, or components, in the software system is facilitated by application program interfaces (APIs) that remain substantially constant and consistent irrespective of whether the software components reside on a hardware element or across multiple hardware elements. This permits the mapping of components onto different processing elements, thereby modifying physical interfaces, without the concurrent modification of the individual components.
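The constant-API idea just described can be illustrated with a toy sketch: a component issues the same send call whether its peer is co-resident (direct function mapping) or on another hardware element (a queue). All class and method names below are hypothetical illustrations, not the actual software system.

```python
# Toy sketch: the same component-facing API backed by two transports,
# a direct function call (co-resident components) or a message queue
# (components on separate hardware elements). Names are hypothetical.
from queue import Queue

class EchoComponent:
    def handle(self, msg):
        return f"handled:{msg}"

class DirectInterface:
    """Peer resides on the same processor: plain function mapping."""
    def __init__(self, target):
        self.target = target
    def send(self, msg):
        return self.target.handle(msg)

class QueueInterface:
    """Peer resides on another hardware element: delivery via a queue."""
    def __init__(self):
        self.q = Queue()
    def send(self, msg):
        self.q.put(msg)

# The caller is unaware of which transport backs the interface:
for iface in (DirectInterface(EchoComponent()), QueueInterface()):
    iface.send("frame-1")
```

Because both interfaces expose the same `send` signature, a component can be remapped to a different processing element without modification, which is the property the text describes.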
- In an exemplary embodiment, shown in FIG. 17, a
first component 1705 operates in conjunction with a second component 1710 and a third component 1715 through a first interface 1720 and a second interface 1725, respectively. Because all three components reside on a single physical processor 1700, the first interface 1720 and second interface 1725 perform interfacing tasks through function mapping conducted via the APIs of each of the three components. Where the components reside on separate hardware elements, the first interface 1720 a and second interface 1725 a implement interfacing tasks through queues, while presenting the same APIs to the components. - Referring now to FIG. 18, a logical division of the software system 1800 is shown. The software system 1800 is divided into three subsystems, a
Media Processing Subsystem 1805, a Packetization Subsystem 1840, and a Signaling/Management Subsystem (hereinafter referred to as the Signaling Subsystem) 1870. The Media Processing Subsystem 1805 sends encoded data to the Packetization Subsystem 1840 for encapsulation and transmission over the network and receives network data from the Packetization Subsystem 1840 to be decoded and played out. The Signaling Subsystem 1870 communicates with the Packetization Subsystem 1840 to obtain status information, such as the number of packets transferred, to monitor the quality of service, and to control the mode of particular channels, among other functions. The Signaling Subsystem 1870 also communicates with the Packetization Subsystem 1840 to control the establishment and destruction of packetization sessions for the origination and termination of calls. Each subsystem comprises a plurality of components 1820 designed to perform different tasks in order to effectuate the processing and transmission of media. Each of the components 1820 conducts communications with any other module, subsystem, or system through APIs that remain substantially constant and consistent irrespective of whether the components reside on a hardware element or across multiple hardware elements, as previously discussed. - In an exemplary embodiment, shown in FIG. 19, the
Media Processing Subsystem 1905 comprises a system API component 1907, media API component 1909, real-time media kernel 1910, and voice processing components, including a line echo cancellation component 1911, components dedicated to performing voice activity detection 1913, comfort noise generation 1915, and discontinuous transmission management 1917, a component 1919 dedicated to handling tone signaling functions, such as dual tone (DTMF/MF), call progress, call waiting, and caller identification, and components for media encoding and decoding functions for voice 1927, fax 1929, and other data 1931. - The
system API component 1907 should be capable of providing system-wide management and enabling the cohesive interaction of individual components, including establishing communications between external applications and individual components, managing run-time component addition and removal, downloading code from central servers, and accessing the MIBs of components upon request from other components. The media API component 1909 interacts with the real-time media kernel 1910 and individual voice processing components. The real-time media kernel 1910 allocates media processing resources, monitors resource utilization on each media-processing element, and performs load balancing to substantially maximize density and efficiency. - The voice processing components can be distributed across multiple processing elements. The line
echo cancellation component 1911 deploys adaptive filter algorithms to remove from a signal echoes that may arise as a result of the reflection and/or retransmission of modified input signals back to the originator of the input signals. In one preferred embodiment, the line echo cancellation component 1911 has been programmed to implement the following filtration approach: an adaptive finite impulse response (FIR) filter of length N is converged using a convergence process, such as a least mean squares (LMS) approach. The adaptive filter generates a filtered output by obtaining individual samples of the far-end signal on a receive path, convolving the samples with the calculated filter coefficients, and then subtracting, at the appropriate time, the resulting echo estimate from the received signal on the transmit channel. With convergence complete, the filter is then converted to an infinite impulse response (IIR) filter using a generalization of the ARMA-Levinson approach. In the course of operation, data is received from an input source and used to adapt the zeroes of the IIR filter using the LMS approach, keeping the poles fixed. The adaptation process generates a set of converged filter coefficients that are then continually applied to the input signal to create a modified signal used to filter the data. The error between the modified signal and the actual signal received is monitored and used to further adapt the zeroes of the IIR filter. If the measured error is greater than a pre-determined threshold, convergence is re-initiated by reverting back to the FIR convergence step. - The voice
activity detection component 1913 receives incoming data and determines whether voice or another type of signal, e.g. noise, is present in the received data, based upon an analysis of certain data parameters. The comfort noise generation component 1915 operates to send a Silence Insertion Descriptor (SID) containing information that enables a decoder to generate noise corresponding to the background noise received from the transmission. An overlay of audible but non-obtrusive noise has been found to be valuable in helping users discern whether a connection is live or dead. The SID frame is typically small, i.e. approximately 15 bits under the G.729B codec specification. Preferably, updated SID frames are sent to the decoder whenever there has been sufficient change in the background noise. - The
tone signaling component 1919, including recognition of DTMF/MF, call progress, call waiting, and caller identification, operates to intercept tones meant to signal a particular activity or event, such as the conducting of two-stage dialing (in the case of DTMF tones), the retrieval of voice-mail, and the reception of an incoming call (in the case of call waiting), and to communicate the nature of that activity or event in an intelligent manner to a receiving device, thereby avoiding the encoding of that tone signal as another element in a voice stream. In one embodiment, the tone-signaling component 1919 is capable of recognizing a plurality of tones and, therefore, when a tone is received, sends a plurality of RTP packets that identify the tone, together with other indicators, such as the length of the tone. By carrying the occurrence of an identified tone, the RTP packets convey the event associated with the tone to a receiving unit. In a second embodiment, the tone-signaling component 1919 is capable of generating a dynamic RTP profile wherein the RTP profile carries information detailing the nature of the tone, such as its frequency, volume, and duration. By carrying the nature of the tone, the RTP packets convey the tone to the receiving unit and permit the receiving unit to interpret the tone and, consequently, the event or activity associated with it. - Components for the media encoding and decoding functions for
voice 1927, fax 1929, and other data 1931, referred to as codecs, are devised in accordance with International Telecommunications Union (ITU) standard specifications, such as G.711 for the encoding and decoding of voice, fax, and other data. An exemplary codec for voice, data, and fax communications is ITU standard G.711, often referred to as pulse code modulation. G.711 is a waveform codec with a sampling rate of 8,000 Hz. Under uniform quantization, signal levels would typically require at least 12 bits per sample, resulting in a bit rate of 96 kbps. Under non-uniform quantization, as is commonly used, signal levels require approximately 8 bits per sample, leading to a 64 kbps rate. Other voice codecs include ITU standards G.723.1, G.726, and G.729 A/B/E, all of which would be known and appreciated by one of ordinary skill in the art. Other ITU standards supported by the fax media processing component 1929 preferably include T.38 and standards falling within V.xx, such as V.17, V.90, and V.34. Exemplary codecs for fax include ITU standards T.4 and T.30. T.4 addresses the formatting of fax images and their transmission from sender to receiver by specifying how the fax machine scans documents, the coding of scanned lines, the modulation scheme used, and the transmission scheme used. - Referring to FIG. 20, in an exemplary embodiment, the
Packetization Subsystem 2040 comprises a system API component 2043, packetization API component 2045, POSIX API 2047, real-time operating system (RTOS) 2049, components dedicated to performing such quality of service functions as buffering and traffic management 2050, a component for enabling IP communications 2051, a component for enabling ATM communications 2053, a component for resource-reservation protocol (RSVP) 2055, and a component for multi-protocol label switching (MPLS) 2057. The Packetization Subsystem 2040 facilitates the encapsulation of encoded voice/data into packets for transmission over ATM and IP networks, manages certain quality of service elements, including packet delay, packet loss, and jitter management, and implements traffic shaping to control network traffic. The packetization API component 2045 provides external applications facilitated access to the Packetization Subsystem 2040 by communicating with the Media Processing Subsystem [not shown] and Signaling Subsystem [not shown]. - The
POSIX API 2047 layer isolates the operating system (OS) from the components and provides the components with a consistent OS API, thereby ensuring that components above this layer do not have to be modified if the software is ported to another OS platform. The RTOS 2049 acts as the OS facilitating the implementation of software code into hardware instructions. - The
IP communications component 2051 supports packetization for TCP/IP, UDP/IP, and RTP/RTCP protocols. The ATM communications component 2053 supports packetization for AAL1, AAL2, and AAL5 protocols. It is preferred that the RTP/UDP/IP stack be implemented on the RISC processors of the Packet Engine. A portion of the ATM stack is also preferably implemented on the RISC processors, with the more computationally intensive parts of the ATM stack implemented on the ATM engine. - The component for
RSVP 2055 specifies resource-reservation techniques for IP networks. The RSVP protocol enables resources to be reserved for a certain session (or a plurality of sessions) prior to any attempt to exchange media between the participants. Two levels of service are generally enabled: a guaranteed level, which emulates the quality achieved in conventional circuit-switched networks, and a controlled-load level, which is substantially equal to the level of service achieved in a network under best-effort, no-load conditions. In operation, a sending unit issues a PATH message to a receiving unit via a plurality of routers. The PATH message contains a traffic specification (Tspec) that provides details about the data that the sender expects to send, including bandwidth requirement and packet size. Each RSVP-enabled router along the transmission path establishes a path state that includes the previous source address of the PATH message (the prior router). The receiving unit responds with a reservation request (RESV) that includes a flow specification having the Tspec and information regarding the type of reservation service requested, such as controlled-load or guaranteed service. The RESV message travels back, in reverse fashion, to the sending unit along the same router pathway. At each router, the requested resources are allocated, provided such resources are available and the receiver has authority to make the request. The RESV eventually reaches the sending unit with a confirmation that the requisite resources have been reserved. - The component for
MPLS 2057 operates to mark traffic at the entrance to a network for the purpose of determining the next router in the path from source to destination. More specifically, the MPLS 2057 component attaches to the packet, in front of the IP header, a label containing all of the information a router needs to forward the packet. The value of the label is used to look up the next hop in the path and serves as the basis for forwarding the packet to the next router. This lookup operates similarly to conventional IP routing, except that the MPLS process searches for an exact match rather than the longest match used in conventional IP routing. - One function that could be provided in either the Media Processing Subsystem or the Packetization Subsystem is jitter buffer management. As previously discussed, an embodiment of the present invention operates by estimating a packet delay histogram that may be used to determine the required buffer size and minimum delay. The preferred method of determining the buffer size and minimum delay comprises the selection of an area of the histogram, the calculation of the mean delay based upon the selected area, the calculation of a plurality of variances based upon the mean delay, and the use of the variances to determine buffer size and minimum delay.
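The estimation steps just listed can be sketched end to end: discard the histogram tail beyond a cut-off (270 ms in the FIG. 1f example), compute the mean delay over the remaining samples, and then compute two one-sided spreads around that mean. Computing the spreads as RMS deviations, so that they stay in milliseconds, is an assumption of this sketch, as are all helper names.

```python
# Illustrative end-to-end sketch of the jitter estimation described
# above. The one-sided spreads are computed here as RMS deviations
# (an assumption that keeps them in milliseconds); the cut-off value
# and helper names are hypothetical.

def estimate_buffer_params(delays_ms, tail_cutoff_ms=270):
    kept = [d for d in delays_ms if d <= tail_cutoff_ms]   # discard the tail
    mean = sum(kept) / len(kept)
    early = [d for d in kept if d < mean]                  # before the mean delay
    late = [d for d in kept if d > mean]                   # after the mean delay
    var1 = (sum((d - mean) ** 2 for d in early) / len(early)) ** 0.5
    var2 = (sum((d - mean) ** 2 for d in late) / len(late)) ** 0.5
    start = mean - var1                 # buffer begins accepting at this delay
    period = var1 + var2                # and continues accepting for this long
    return mean, var1, var2, start, period

params = estimate_buffer_params([100, 120, 150, 150, 200, 240, 900])
print([round(p, 1) for p in params])
```

Note that the 900 ms outlier is removed by the tail cut-off before the mean is computed, which is the skew-avoidance step the text describes.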
- Referring back to FIG. 1f, the graph represents
histogram 101 f of a packet stream received by a media gateway, more specifically the Media Processing Subsystem or Packetization Subsystem. The x-axis 102 f represents the delay experienced by packets and the y-axis 103 f represents the number of packet samples received. The vertical bars 104 f show the number of packets received in a defined span of time. A curve 105 f connects the central points of the tops of the bars 104 f of the histogram 101 f. The curve 105 f depicts the distribution of the arrival time of packets. - As previously discussed, to avoid skewing the peak, or mean delay, calculation, the tail is eliminated at a defined
point 106 f, which in this example is 270 ms on the x-axis 102 f. Therefore, the histogram area to the right of point 106 f is discarded. The mean of the curve 107 f may be calculated by using the formula:
- M=(1/N)Σxi
- The preferred embodiment of the invention utilizes at least two separately calculated variances to better estimate the buffer size and delay based upon the estimated histogram. To calculate the plurality of variances, the histogram is conceptually divided into two portions, a portion encompassing the packets arriving after the mean delay and a portion encompassing packets that arrived prior to the mean delay. Where i packets have been received and the mean delay is associated with packet m, then the two histogram portions are defined by D0 to Dm−1 and the second defined by Dm+1 to Di, or the final packet. The variance of D0 to Dm−1, Var1, may be calculated using the formula:
-
- where j extends from m+1 to i and the total number of samples includes those sample from m+1 to i. Although the two separately calculated variances are calculated using one sample set of packets arriving before the mean delay and one sample set of packets arriving after the mean delay, one would appreciate that the sample set of packets can be calculated using sample sets that overlap or that, when taken together, comprise a subset of packets received.
- Typically, the two variances are not equal because the histogram is asymmetrical. As shown in FIG. 1f,
Var 1 115 f is less than Var 2 117 f, reflective of the asymmetrical nature of the histogram and better approximating the actual distribution of packets received. This approach therefore ascertains the size and placement of the buffer more accurately while optimizing computational resources. - Optionally, Var1 can be calculated from Var2, or vice versa, using pre-defined equations. As an example, Var1 could be a multiple or factor of Var2, i.e. Var1*C=Var2, where C is a constant that is determined experimentally. Alternatively, Var1 could be a fixed value depending on whether Var2 exceeds or does not exceed a certain threshold value.
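The optional shortcuts just described can be sketched directly; the constant C, the threshold, and the fixed values below are illustrative placeholders, since the text says C is determined experimentally.

```python
# Sketch of the optional shortcuts above for deriving Var1 from Var2:
# either a constant ratio (Var1*C = Var2) or a fixed value chosen by
# comparing Var2 against a threshold. C, the threshold, and the fixed
# values are hypothetical.

def var1_from_ratio(var2, c=1.75):
    return var2 / c                    # solves Var1*C = Var2 for Var1

def var1_from_threshold(var2, threshold=100.0, low=40.0, high=70.0):
    return high if var2 > threshold else low

print(var1_from_ratio(105.0), var1_from_threshold(105.0))
```

With C = 1.75, a Var2 of 105 ms yields a Var1 of 60 ms, matching the ratio of the worked example given below the buffer formulas.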
- After the peak and variances are calculated, the buffer size and timing can be determined. The buffer starts accepting packets at delay d, which is determined by subtracting Var1 115f from the mean 107f:
- d = M − Var1
- and continues accepting for a period (T) which is the sum of the two variances:
- T = Var1 + Var2
- For example, where Var1 is 60 ms, Var2 is 105 ms, and the mean is 150 ms, the buffer starts accepting packets at 90 ms and continues accepting for a period T of 165 ms, i.e. up to 255 ms. The variances used to determine the buffer parameters can also be scaled variances derived by multiplying Var1 and/or Var2 by a multiplier (k), where the multiplier is any number, but preferably in the range of 2-8, and more preferably around 2, 4, or 8. Utilizing this approach, the Media Processing Subsystem or Packetization Subsystem is better able to manage jitter in packets received by the Media Gateway system.
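A minimal sketch of the buffer-parameter arithmetic above, assuming the function name and the way the multiplier k is applied uniformly to both variances (the patent also allows scaling only one of them):

```python
def buffer_params(mean, var1, var2, k=1):
    """Return (start delay d, accept window T) per the text:
    d = M - k*Var1 and T = k*(Var1 + Var2), with k the optional multiplier."""
    d = mean - k * var1
    T = k * (var1 + var2)
    return d, T

d, T = buffer_params(mean=150, var1=60, var2=105)
print(d, T, d + T)  # 90 165 255 -- matches the worked example in the text
```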
- Referring to FIG. 21, in an exemplary embodiment, the
Signaling Subsystem 2170 comprises a user application API component 2173, a system API component 2175, a POSIX API 2177, a real-time operating system (RTOS) 2179, a signaling API 2181, components dedicated to performing such signaling functions as signaling stacks for ATM networks 2183 and signaling stacks for IP networks 2185, and a network management component 2187. The signaling API 2181 provides facilitated access to the signaling stacks for ATM networks 2183 and the signaling stacks for IP networks 2185. The signaling API 2181 comprises a master gateway and N sub-gateways; a single master gateway can have N sub-gateways associated with it. The master gateway performs the demultiplexing of incoming calls arriving from an ATM or IP network and routes the calls to a sub-gateway that has resources available. The sub-gateways maintain the state machines for all active terminations and can be replicated to handle many terminations. Using this design, the master gateway and sub-gateways can reside on a single processor or across multiple processors, thereby enabling the simultaneous processing of signaling for a large number of terminations and providing substantial scalability. - The user
application API component 2173 provides a means for external applications to interface with the entire software system, comprising each of the Media Processing Subsystem, Packetization Subsystem, and Signaling Subsystem. The network management component 2187 supports local and remote configuration and network management through support of the simple network management protocol (SNMP). The configuration portion of the network management component 2187 is capable of communicating with any of the other components to conduct configuration and network management tasks and can route remote requests for tasks, such as the addition or removal of specific components. - The signaling stacks for
ATM networks 2183 include support for the User Network Interface (UNI) for the communication of data using the AAL1, AAL2, and AAL5 protocols. The User Network Interface comprises specifications for the procedures and protocols between the gateway system, comprising the software system and hardware system, and an ATM network. The signaling stacks for IP networks 2185 include support for a plurality of accepted standards, including the media gateway control protocol (MGCP), H.323, the session initiation protocol (SIP), H.248, and network-based call signaling (NCS). MGCP specifies a protocol converter, the components of which may be distributed across multiple distinct devices. MGCP enables external control and management of data communications equipment, such as media gateways, operating at the edge of multi-service packet networks. The H.323 standards define a set of call control, channel setup, and codec specifications for transmitting real-time voice and video over networks that do not necessarily provide a guaranteed level of service, such as packet networks. SIP is an application-layer protocol for the establishment, modification, and termination of conferencing and telephony sessions over an IP-based network, and it has the capability of negotiating the features and capabilities of the session at the time the session is established. H.248 provides recommendations underlying the implementation of MGCP. - To further enable ease of scalability and implementation, the present software method and system does not require specific knowledge of the processing hardware being utilized. Referring to FIG. 22, in a typical embodiment, a
host application 2205 interacts with a DSP 2210 via an interrupt capability 2220 and shared memory 2230. As shown in FIG. 23, the same functionality can be achieved in simulation by running a virtual DSP program 2310 as a separate, independent thread on the same processor 2315 as the application code 2320. This simulation run is enabled by a task queue mutex 2330 and a condition variable 2340. The task queue mutex 2330 protects the data shared between the virtual DSP program 2310 and a resource manager [not shown]. The condition variable 2340 allows the application to synchronize with the virtual DSP 2310 in a manner similar to the function of the interrupt 2220 in FIG. 22. - The present methods and systems provide for an improved jitter buffer management method and system by basing playout buffer adjustments on computed minimum delays and buffer sizes with reference to a plurality of variances derived from an estimated histogram. While various embodiments of the present invention have been shown and described, it would be apparent to those skilled in the art that many modifications are possible without departing from the inventive concept disclosed herein. For example, it would be apparent that the plurality of variances can be calculated by determining a first variance from an estimated histogram and then deriving subsequent variances through any pre-defined equation incorporating the first variance.
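The task-queue-mutex and condition-variable arrangement of FIG. 23 can be sketched with Python threading as a hedged illustration; the queue contents, the doubling "processing" step, and all names are assumptions standing in for real media processing:

```python
import threading

# Shared state guarded by the "task queue mutex" (2330 in FIG. 23)
task_queue = []
results = []
mutex = threading.Lock()
cond = threading.Condition(mutex)  # the condition variable (2340)
done = False

def virtual_dsp():
    """Runs as an independent thread, standing in for the DSP of FIG. 22."""
    with cond:
        while True:
            while not task_queue and not done:
                cond.wait()              # sleep until the application signals work
            if done and not task_queue:
                return
            frame = task_queue.pop(0)
            results.append(frame * 2)    # placeholder for media processing
            cond.notify_all()            # plays the role of the interrupt (2220)

dsp = threading.Thread(target=virtual_dsp)
dsp.start()

with cond:                               # application submits work under the mutex
    task_queue.extend([1, 2, 3])
    cond.notify_all()

with cond:                               # application synchronizes on completion
    while len(results) < 3:
        cond.wait()
    done = True
    cond.notify_all()
dsp.join()
print(results)  # [2, 4, 6]
```

The condition variable lets the application block until the virtual DSP has produced results, much as the host application in FIG. 22 would block until the hardware interrupt fires.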
Claims (21)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/084,559 US20030112758A1 (en) | 2001-12-03 | 2002-02-25 | Methods and systems for managing variable delays in packet transmission |
US12/350,682 US7835280B2 (en) | 2001-12-03 | 2009-01-08 | Methods and systems for managing variable delays in packet transmission |
US12/901,479 US20110141889A1 (en) | 2001-12-03 | 2010-10-08 | Methods and Systems for Managing Variable Delays in Packet Transmission |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/004,753 US20030105799A1 (en) | 2001-12-03 | 2001-12-03 | Distributed processing architecture with scalable processing layers |
US10/084,559 US20030112758A1 (en) | 2001-12-03 | 2002-02-25 | Methods and systems for managing variable delays in packet transmission |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/004,753 Continuation-In-Part US20030105799A1 (en) | 2001-12-03 | 2001-12-03 | Distributed processing architecture with scalable processing layers |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/350,682 Continuation US7835280B2 (en) | 2001-12-03 | 2009-01-08 | Methods and systems for managing variable delays in packet transmission |
Publications (1)
Publication Number | Publication Date |
---|---|
US20030112758A1 true US20030112758A1 (en) | 2003-06-19 |
Family
ID=46280357
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/084,559 Abandoned US20030112758A1 (en) | 2001-12-03 | 2002-02-25 | Methods and systems for managing variable delays in packet transmission |
US12/350,682 Expired - Fee Related US7835280B2 (en) | 2001-12-03 | 2009-01-08 | Methods and systems for managing variable delays in packet transmission |
US12/901,479 Abandoned US20110141889A1 (en) | 2001-12-03 | 2010-10-08 | Methods and Systems for Managing Variable Delays in Packet Transmission |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/350,682 Expired - Fee Related US7835280B2 (en) | 2001-12-03 | 2009-01-08 | Methods and systems for managing variable delays in packet transmission |
US12/901,479 Abandoned US20110141889A1 (en) | 2001-12-03 | 2010-10-08 | Methods and Systems for Managing Variable Delays in Packet Transmission |
Country Status (1)
Country | Link |
---|---|
US (3) | US20030112758A1 (en) |
Cited By (79)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010053147A1 (en) * | 2000-08-04 | 2001-12-20 | Nec Corporation | Synchronous data transmission system |
US20040076191A1 (en) * | 2000-12-22 | 2004-04-22 | Jim Sundqvist | Method and a communiction apparatus in a communication system |
US20040085963A1 (en) * | 2002-05-24 | 2004-05-06 | Zarlink Semiconductor Limited | Method of organizing data packets |
US20040131067A1 (en) * | 2002-09-24 | 2004-07-08 | Brian Cheng | Adaptive predictive playout scheme for packet voice applications |
US20040160948A1 (en) * | 2003-02-19 | 2004-08-19 | Mitsubishi Denki Kabushiki Kaisha | IP network communication apparatus |
US20040184488A1 (en) * | 2003-03-20 | 2004-09-23 | Wolfgang Bauer | Method and a jitter buffer regulating circuit for regulating a jitter buffer |
US20040190508A1 (en) * | 2003-03-28 | 2004-09-30 | Philip Houghton | Optimization of decoder instance memory consumed by the jitter control module |
US20040213238A1 (en) * | 2002-01-23 | 2004-10-28 | Terasync Ltd. | System and method for synchronizing between communication terminals of asynchronous packets networks |
US20050025151A1 (en) * | 2003-02-11 | 2005-02-03 | Alcatel | Early-processing request for an active router |
US20050041692A1 (en) * | 2003-08-22 | 2005-02-24 | Thomas Kallstenius | Remote synchronization in packet-switched networks |
WO2005046133A1 (en) * | 2003-11-11 | 2005-05-19 | Telefonaktiebolaget Lm Ericsson (Publ) | Adapting playout buffer based on audio burst length |
US20050169245A1 (en) * | 2002-03-04 | 2005-08-04 | Lars Hindersson | Arrangement and a method for handling an audio signal |
US20050232309A1 (en) * | 2004-04-17 | 2005-10-20 | Innomedia Pte Ltd. | In band signal detection and presentation for IP phone |
US20060088000A1 (en) * | 2004-10-27 | 2006-04-27 | Hans Hannu | Terminal having plural playback pointers for jitter buffer |
US20060126515A1 (en) * | 2004-12-15 | 2006-06-15 | Ward Robert G | Filtering wireless network packets |
US20060184261A1 (en) * | 2005-02-16 | 2006-08-17 | Adaptec, Inc. | Method and system for reducing audio latency |
US7126957B1 (en) * | 2002-03-07 | 2006-10-24 | Utstarcom, Inc. | Media flow method for transferring real-time data between asynchronous and synchronous networks |
US20060248404A1 (en) * | 2005-04-29 | 2006-11-02 | Microsoft Corporation | System and Method for Providing a Window Management Mode |
US20060280163A1 (en) * | 2005-06-09 | 2006-12-14 | Yongdong Zhao | System for detecting packetization delay of packets in a network |
US20070061626A1 (en) * | 2005-09-14 | 2007-03-15 | Microsoft Corporation | Statistical analysis of sampled profile data in the identification of significant software test performance regressions |
US20070064679A1 (en) * | 2005-09-20 | 2007-03-22 | Intel Corporation | Jitter buffer management in a packet-based network |
US20070071022A1 (en) * | 2005-09-29 | 2007-03-29 | Eric Pan | Mechanism for imposing a consistent delay on information sets received from a variable rate information stream |
US20070081562A1 (en) * | 2005-10-11 | 2007-04-12 | Hui Ma | Method and device for stream synchronization of real-time multimedia transport over packet network |
US20070136508A1 (en) * | 2005-12-13 | 2007-06-14 | Reiner Rieke | System Support Storage and Computer System |
US20070150631A1 (en) * | 2005-12-22 | 2007-06-28 | Intuitive Surgical Inc. | Multi-priority messaging |
US20070147250A1 (en) * | 2005-12-22 | 2007-06-28 | Druke Michael B | Synchronous data communication |
US20070171825A1 (en) * | 2006-01-20 | 2007-07-26 | Anagran, Inc. | System, method, and computer program product for IP flow routing |
US20070171826A1 (en) * | 2006-01-20 | 2007-07-26 | Anagran, Inc. | System, method, and computer program product for controlling output port utilization |
US20070185849A1 (en) * | 2002-11-26 | 2007-08-09 | Bapiraju Vinnakota | Data structure traversal instructions for packet processing |
US20070195744A1 (en) * | 2006-02-18 | 2007-08-23 | Trainin Solomon | TECHNIQUES FOR 40 MEGAHERTZ (MHz) CHANNEL SWITCHING |
US7307998B1 (en) * | 2002-08-27 | 2007-12-11 | 3Com Corporation | Computer system and network interface supporting dynamically optimized receive buffer queues |
US20080069131A1 (en) * | 2006-09-14 | 2008-03-20 | Fujitsu Limited | Broadcast distributing system, broadcast distributing method, and network apparatus |
US20080147917A1 (en) * | 2006-12-19 | 2008-06-19 | Lees Jeremy J | Method and apparatus for maintaining synchronization of audio in a computing system |
US20080147918A1 (en) * | 2006-12-19 | 2008-06-19 | Hanebutte Ulf R | Method and apparatus for maintaining synchronization of audio in a computing system |
US20080306736A1 (en) * | 2007-06-06 | 2008-12-11 | Sumit Sanyal | Method and system for a subband acoustic echo canceller with integrated voice activity detection |
US20090129584A1 (en) * | 2006-01-13 | 2009-05-21 | Oki Electric Industry Co., Ltd. | Echo Canceller |
US7581019B1 (en) * | 2002-06-05 | 2009-08-25 | Israel Amir | Active client buffer management method, system, and apparatus |
US20090245249A1 (en) * | 2005-08-29 | 2009-10-01 | Nec Corporation | Multicast node apparatus, multicast transfer method and program |
US20090268755A1 (en) * | 2008-04-23 | 2009-10-29 | Oki Electric Industry Co., Ltd. | Codec converter, gateway device, and codec converting method |
US20090310729A1 (en) * | 2008-06-17 | 2009-12-17 | Integrated Device Technology, Inc. | Circuit for correcting an output clock frequency in a receiving device |
US20100250781A1 (en) * | 2009-03-26 | 2010-09-30 | Sony Corporation | Receiving apparatus and time correction method for receiving apparatus |
US20100318352A1 (en) * | 2008-02-19 | 2010-12-16 | Herve Taddei | Method and means for encoding background noise information |
US20110007638A1 (en) * | 2009-07-09 | 2011-01-13 | Motorola, Inc. | Artificial delay inflation and jitter reduction to improve tcp throughputs |
US20110051744A1 (en) * | 2009-08-27 | 2011-03-03 | Texas Instruments Incorporated | External memory data management with data regrouping and channel look ahead |
US20110158263A1 (en) * | 2008-08-28 | 2011-06-30 | Kabushiki Kaisha Toshiba | Transit time fixation device |
US8054752B2 (en) | 2005-12-22 | 2011-11-08 | Intuitive Surgical Operations, Inc. | Synchronous data communication |
US20120123774A1 (en) * | 2010-09-30 | 2012-05-17 | Electronics And Telecommunications Research Institute | Apparatus, electronic apparatus and method for adjusting jitter buffer |
US20120185620A1 (en) * | 2011-01-17 | 2012-07-19 | Chia-Yun Cheng | Buffering apparatus for buffering multi-partition video/image bitstream and related method thereof |
US20120250678A1 (en) * | 2009-12-24 | 2012-10-04 | Telecom Italia S.P.A. | Method of scheduling transmission in a communication network, corresponding communication node and computer program product |
US8320446B2 (en) | 2004-11-24 | 2012-11-27 | Qformx, Inc. | System for transmission of synchronous video with compression through channels with varying transmission delay |
US20130208614A1 (en) * | 2002-10-02 | 2013-08-15 | At&T Corp. | Method of providing voice over ip at predefined qos levels |
US20130322282A1 (en) * | 2008-12-05 | 2013-12-05 | AT & T Intellectual Property I, LP. | Method for Measuring Processing Delays of Voice-Over IP Devices |
WO2014039843A1 (en) * | 2012-09-07 | 2014-03-13 | Apple Inc. | Adaptive jitter buffer management for networks with varying conditions |
US8874634B2 (en) | 2012-03-01 | 2014-10-28 | Motorola Mobility Llc | Managing adaptive streaming of data via a communication connection |
WO2015160617A1 (en) * | 2014-04-16 | 2015-10-22 | Dolby Laboratories Licensing Corporation | Jitter buffer control based on monitoring of delay jitter and conversational dynamics |
US20150319212A1 (en) * | 2014-05-02 | 2015-11-05 | Imagination Technologies Limited | Media Controller |
CN105991477A (en) * | 2015-02-11 | 2016-10-05 | 腾讯科技(深圳)有限公司 | Adjusting method of voice jitter buffer area and apparatus thereof |
US9538177B2 (en) | 2011-10-31 | 2017-01-03 | Mediatek Inc. | Apparatus and method for buffering context arrays referenced for performing entropy decoding upon multi-tile encoded picture and related entropy decoder |
WO2017055091A1 (en) * | 2015-10-01 | 2017-04-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for removing jitter in audio data transmission |
KR20170082901A (en) * | 2016-01-07 | 2017-07-17 | 삼성전자주식회사 | Playout delay adjustment method and Electronic apparatus thereof |
US9749886B1 (en) * | 2015-02-16 | 2017-08-29 | Amazon Technologies, Inc. | System for determining metrics of voice communications |
CN107592430A (en) * | 2016-07-07 | 2018-01-16 | 腾讯科技(深圳)有限公司 | The method and terminal device of a kind of echo cancellor |
US10374786B1 (en) | 2012-10-05 | 2019-08-06 | Integrated Device Technology, Inc. | Methods of estimating frequency skew in networks using timestamped packets |
US10775871B2 (en) | 2016-11-10 | 2020-09-15 | Apple Inc. | Methods and apparatus for providing individualized power control for peripheral sub-systems |
US10789110B2 (en) | 2018-09-28 | 2020-09-29 | Apple Inc. | Methods and apparatus for correcting out-of-order data transactions between processors |
US10789198B2 (en) | 2018-01-09 | 2020-09-29 | Apple Inc. | Methods and apparatus for reduced-latency data transmission with an inter-processor communication link between independently operable processors |
US10838450B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Methods and apparatus for synchronization of time between independently operable processors |
US10841880B2 (en) | 2016-01-27 | 2020-11-17 | Apple Inc. | Apparatus and methods for wake-limiting with an inter-device communication link |
US10845868B2 (en) | 2014-10-08 | 2020-11-24 | Apple Inc. | Methods and apparatus for running and booting an inter-processor communication link between independently operable processors |
US10846237B2 (en) | 2016-02-29 | 2020-11-24 | Apple Inc. | Methods and apparatus for locking at least a portion of a shared memory resource |
US10853272B2 (en) | 2016-03-31 | 2020-12-01 | Apple Inc. | Memory access protection apparatus and methods for memory mapped access between independently operable processors |
US11006296B2 (en) * | 2018-11-19 | 2021-05-11 | Pacesetter, Inc. | Implantable medical device and method for measuring communication quality |
US11068326B2 (en) * | 2017-08-07 | 2021-07-20 | Apple Inc. | Methods and apparatus for transmitting time sensitive data over a tunneled bus interface |
GB2593696A (en) * | 2020-03-30 | 2021-10-06 | British Telecomm | Low latency content delivery |
US11176064B2 (en) | 2018-05-18 | 2021-11-16 | Apple Inc. | Methods and apparatus for reduced overhead data transfer with a shared ring buffer |
US11381514B2 (en) | 2018-05-07 | 2022-07-05 | Apple Inc. | Methods and apparatus for early delivery of data link layer packets |
US20220263423A9 (en) * | 2012-12-20 | 2022-08-18 | Dolby Laboratories Licensing Corporation | Controlling a jitter buffer |
US11809258B2 (en) | 2016-11-10 | 2023-11-07 | Apple Inc. | Methods and apparatus for providing peripheral sub-system stability |
US12041300B2 (en) | 2020-03-30 | 2024-07-16 | British Telecommunications Public Limited Company | Low latency content delivery |
Families Citing this family (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6760772B2 (en) | 2000-12-15 | 2004-07-06 | Qualcomm, Inc. | Generating and implementing a communication protocol and interface for high data rate signal transfer |
US8812706B1 (en) | 2001-09-06 | 2014-08-19 | Qualcomm Incorporated | Method and apparatus for compensating for mismatched delays in signals of a mobile display interface (MDDI) system |
JP4777882B2 (en) | 2003-06-02 | 2011-09-21 | クゥアルコム・インコーポレイテッド | Generation and execution of signaling protocols and interfaces for higher data rates |
RU2006107561A (en) | 2003-08-13 | 2007-09-20 | Квэлкомм Инкорпорейтед (US) | SIGNAL INTERFACE FOR HIGH DATA TRANSMISSION SPEEDS |
RU2369033C2 (en) | 2003-09-10 | 2009-09-27 | Квэлкомм Инкорпорейтед | High-speed data transmission interface |
RU2371872C2 (en) | 2003-10-15 | 2009-10-27 | Квэлкомм Инкорпорейтед | Interface with high data transmission rate |
CN101827074B (en) | 2003-10-29 | 2013-07-31 | 高通股份有限公司 | High data rate interface |
EP2242231A1 (en) | 2003-11-12 | 2010-10-20 | Qualcomm Incorporated | High data rate interface with improved link control |
EP1690404A1 (en) | 2003-11-25 | 2006-08-16 | QUALCOMM Incorporated | High data rate interface with improved link synchronization |
US8670457B2 (en) | 2003-12-08 | 2014-03-11 | Qualcomm Incorporated | High data rate interface with improved link synchronization |
US7474739B2 (en) * | 2003-12-15 | 2009-01-06 | International Business Machines Corporation | Providing speaker identifying information within embedded digital information |
EP2309695A1 (en) | 2004-03-10 | 2011-04-13 | Qualcomm Incorporated | High data rate interface apparatus and method |
MXPA06010647A (en) | 2004-03-17 | 2007-01-17 | Qualcomm Inc | High data rate interface apparatus and method. |
WO2005096594A1 (en) | 2004-03-24 | 2005-10-13 | Qualcomm Incorporated | High data rate interface apparatus and method |
US8650304B2 (en) | 2004-06-04 | 2014-02-11 | Qualcomm Incorporated | Determining a pre skew and post skew calibration data rate in a mobile display digital interface (MDDI) communication system |
CN1993948A (en) | 2004-06-04 | 2007-07-04 | 高通股份有限公司 | High data rate interface apparatus and method |
US8873584B2 (en) | 2004-11-24 | 2014-10-28 | Qualcomm Incorporated | Digital data interface device |
US8699330B2 (en) | 2004-11-24 | 2014-04-15 | Qualcomm Incorporated | Systems and methods for digital data transmission rate control |
US8539119B2 (en) | 2004-11-24 | 2013-09-17 | Qualcomm Incorporated | Methods and apparatus for exchanging messages having a digital data interface device message format |
US8667363B2 (en) | 2004-11-24 | 2014-03-04 | Qualcomm Incorporated | Systems and methods for implementing cyclic redundancy checks |
US8692838B2 (en) | 2004-11-24 | 2014-04-08 | Qualcomm Incorporated | Methods and systems for updating a buffer |
US7461236B1 (en) * | 2005-03-25 | 2008-12-02 | Tilera Corporation | Transferring data in a parallel processing environment |
US8054826B2 (en) * | 2005-07-29 | 2011-11-08 | Alcatel Lucent | Controlling service quality of voice over Internet Protocol on a downlink channel in high-speed wireless data networks |
US8692839B2 (en) | 2005-11-23 | 2014-04-08 | Qualcomm Incorporated | Methods and systems for updating a buffer |
US8730069B2 (en) | 2005-11-23 | 2014-05-20 | Qualcomm Incorporated | Double data rate serial encoder |
US8625749B2 (en) * | 2006-03-23 | 2014-01-07 | Cisco Technology, Inc. | Content sensitive do-not-disturb (DND) option for a communication system |
US20080151765A1 (en) * | 2006-12-20 | 2008-06-26 | Sanal Chandran Cheruvathery | Enhanced Jitter Buffer |
US8611337B2 (en) * | 2009-03-31 | 2013-12-17 | Adobe Systems Incorporated | Adaptive subscriber buffering policy with persistent delay detection for live audio streams |
US9692615B2 (en) * | 2009-12-09 | 2017-06-27 | Dialogic Corporation | Facsimile passthrough silence suppression |
US8345570B2 (en) * | 2009-12-10 | 2013-01-01 | Alcatel Lucent | Network impairment metrics for timing over packet |
KR20140068059A (en) * | 2011-09-12 | 2014-06-05 | 에스씨에이 아이피엘에이 홀딩스 인크. | Methods and apparatuses for communicating content data to a communications terminal from a local data store |
US9236039B2 (en) * | 2013-03-04 | 2016-01-12 | Empire Technology Development Llc | Virtual instrument playing scheme |
JP2015039131A (en) * | 2013-08-19 | 2015-02-26 | 株式会社東芝 | Measurement device and method |
US9323584B2 (en) | 2013-09-06 | 2016-04-26 | Seagate Technology Llc | Load adaptive data recovery pipeline |
US9280422B2 (en) | 2013-09-06 | 2016-03-08 | Seagate Technology Llc | Dynamic distribution of code words among multiple decoders |
FR3011084A1 (en) * | 2013-09-25 | 2015-03-27 | St Microelectronics Grenoble 2 | METHOD FOR DETERMINING THE CHARGING STATE OF A BATTERY OF AN ELECTRONIC DEVICE |
EP2882120B1 (en) | 2013-12-06 | 2016-03-09 | ADVA Optical Networking SE | A method and apparatus for mitigation of packet delay variation |
CN105099795A (en) | 2014-04-15 | 2015-11-25 | 杜比实验室特许公司 | Jitter buffer level estimation |
CN104579770A (en) * | 2014-12-30 | 2015-04-29 | 华为技术有限公司 | Method and device for managing data transmission channels |
US10439951B2 (en) | 2016-03-17 | 2019-10-08 | Dolby Laboratories Licensing Corporation | Jitter buffer apparatus and method |
WO2017161088A2 (en) | 2016-03-17 | 2017-09-21 | Dolby Laboratories Licensing Corporation | Jitter buffer apparatus and method |
CN106899843B (en) * | 2016-03-24 | 2019-03-26 | 中国移动通信集团设计院有限公司 | A kind of video service quality appraisal procedure and device |
US10616123B2 (en) * | 2017-07-07 | 2020-04-07 | Qualcomm Incorporated | Apparatus and method for adaptive de-jitter buffer |
US10426424B2 (en) | 2017-11-21 | 2019-10-01 | General Electric Company | System and method for generating and performing imaging protocol simulations |
US20240064217A1 (en) * | 2022-08-19 | 2024-02-22 | Mediatek Inc. | Timing Control Management Method and Timing Control Management System Capable of Adjusting Reordering Timer |
Citations (86)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4914692A (en) * | 1987-12-29 | 1990-04-03 | At&T Bell Laboratories | Automatic speech recognition using echo cancellation |
US5142677A (en) * | 1989-05-04 | 1992-08-25 | Texas Instruments Incorporated | Context switching devices, systems and methods |
US5189500A (en) * | 1989-09-22 | 1993-02-23 | Mitsubishi Denki Kabushiki Kaisha | Multi-layer type semiconductor device with semiconductor element layers stacked in opposite directions and manufacturing method thereof |
US5200564A (en) * | 1990-06-29 | 1993-04-06 | Casio Computer Co., Ltd. | Digital information processing apparatus with multiple CPUs |
US5341507A (en) * | 1990-07-17 | 1994-08-23 | Mitsubishi Denki Kabushiki Kaisha | Data drive type information processor having both simple and high function instruction processing units |
US5363404A (en) * | 1993-07-13 | 1994-11-08 | Motorola Inc. | Apparatus and method for conveying information in a communication network |
US5492857A (en) * | 1993-07-12 | 1996-02-20 | Peregrine Semiconductor Corporation | High-frequency wireless communication system on a single ultrathin silicon on sapphire chip |
US5594784A (en) * | 1993-04-27 | 1997-01-14 | Southwestern Bell Technology Resources, Inc. | Apparatus and method for transparent telephony utilizing speech-based signaling for initiating and handling calls |
US5724356A (en) * | 1995-04-28 | 1998-03-03 | Multi-Tech Systems, Inc. | Advanced bridge/router local area network modem node |
US5860019A (en) * | 1995-07-10 | 1999-01-12 | Sharp Kabushiki Kaisha | Data driven information processor having pipeline processing units connected in series including processing portions connected in parallel |
US5872991A (en) * | 1995-10-18 | 1999-02-16 | Sharp, Kabushiki, Kaisha | Data driven information processor for processing data packet including common identification information and plurality of pieces of data |
US5915123A (en) * | 1997-10-31 | 1999-06-22 | Silicon Spice | Method and apparatus for controlling configuration memory contexts of processing elements in a network of multiple context processing elements |
US5923761A (en) * | 1996-05-24 | 1999-07-13 | Lsi Logic Corporation | Single chip solution for multimedia GSM mobile station systems |
US5941958A (en) * | 1996-06-20 | 1999-08-24 | Daewood Telecom Ltd. | Duplicated data communications network adaptor including a pair of control boards and interface boards |
US5956518A (en) * | 1996-04-11 | 1999-09-21 | Massachusetts Institute Of Technology | Intermediate-grain reconfigurable processing device |
US5956517A (en) * | 1995-04-12 | 1999-09-21 | Sharp Kabushiki Kaisha | Data driven information processor |
US5991308A (en) * | 1995-08-25 | 1999-11-23 | Terayon Communication Systems, Inc. | Lower overhead method for data transmission using ATM and SCDMA over hybrid fiber coax cable plant |
US6047372A (en) * | 1996-12-02 | 2000-04-04 | Compaq Computer Corp. | Apparatus for routing one operand to an arithmetic logic unit from a fixed register slot and another operand from any register slot |
US6067595A (en) * | 1997-09-23 | 2000-05-23 | Icore Technologies, Inc. | Method and apparatus for enabling high-performance intelligent I/O subsystems using multi-port memories |
US6075788A (en) * | 1997-06-02 | 2000-06-13 | Lsi Logic Corporation | Sonet physical layer device having ATM and PPP interfaces |
US6108760A (en) * | 1997-10-31 | 2000-08-22 | Silicon Spice | Method and apparatus for position independent reconfiguration in a network of multiple context processing elements |
US6122719A (en) * | 1997-10-31 | 2000-09-19 | Silicon Spice | Method and apparatus for retiming in a network of multiple context processing elements |
US6226735B1 (en) * | 1998-05-08 | 2001-05-01 | Broadcom | Method and apparatus for configuring arbitrary sized data paths comprising multiple context processing elements |
US6226266B1 (en) * | 1996-12-13 | 2001-05-01 | Cisco Technology, Inc. | End-to-end delay estimation in high speed communication networks |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5154446A (en) * | 1990-07-27 | 1992-10-13 | Darlene Blake | Shoulder belt adjustment device for seat belt systems |
JPH08249306A (en) | 1995-03-09 | 1996-09-27 | Sharp Corp | Data driven type information processor |
US5999525A (en) | 1996-11-18 | 1999-12-07 | Mci Communications Corporation | Method for video telephony over a hybrid network |
US6154446A (en) | 1998-07-08 | 2000-11-28 | Broadcom Corporation | Network switching architecture utilizing cell based and packet based per class-of-service head-of-line blocking prevention |
US6331977B1 (en) | 1998-08-28 | 2001-12-18 | Sharp Electronics Corporation | System on chip (SOC) four-way switch crossbar system and method |
US6661422B1 (en) | 1998-11-09 | 2003-12-09 | Broadcom Corporation | Video and graphics system with MPEG specific data transfer commands |
US6826187B1 (en) | 1999-05-07 | 2004-11-30 | Cisco Technology, Inc. | Interfacing between a physical layer and a bus |
US6483043B1 (en) | 2000-05-19 | 2002-11-19 | Eaglestone Partners I, Llc | Chip assembly with integrated power distribution between a wafer interposer and an integrated circuit chip |
US6816750B1 (en) | 2000-06-09 | 2004-11-09 | Cirrus Logic, Inc. | System-on-a-chip |
US6669782B1 (en) | 2000-11-15 | 2003-12-30 | Randhir P. S. Thakur | Method and apparatus to control the formation of layers useful in integrated circuits |
JP2002226954A (en) * | 2000-11-30 | 2002-08-14 | Nisshin Steel Co Ltd | Fe-Cr SOFT MAGNETIC MATERIAL AND PRODUCTION METHOD THEREFOR |
US6813673B2 (en) | 2001-04-30 | 2004-11-02 | Advanced Micro Devices, Inc. | Bus arbitrator supporting multiple isochronous streams in a split transactional unidirectional bus architecture and method of operation |
JP2003060053A (en) | 2001-08-10 | 2003-02-28 | Fujitsu Ltd | Semiconductor chip, semiconductor integrated circuit device comprising it and method for selecting semiconductor chip |
US7957271B2 (en) * | 2005-03-09 | 2011-06-07 | International Business Machines Corporation | Using mobile traffic history to minimize transmission time |
US7540065B2 (en) * | 2006-01-03 | 2009-06-02 | The Scott Fetzer Company | Vacuum cleaner handgrip |
- 2002-02-25 US US10/084,559 patent/US20030112758A1/en not_active Abandoned
- 2009-01-08 US US12/350,682 patent/US7835280B2/en not_active Expired - Fee Related
- 2010-10-08 US US12/901,479 patent/US20110141889A1/en not_active Abandoned
Patent Citations (99)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020126649A1 (en) * | 1986-09-16 | 2002-09-12 | Yoshito Sakurai | Distributed type switching system |
US20010028658A1 (en) * | 1986-09-16 | 2001-10-11 | Yoshito Sakurai | Distributed type switching system |
US4914692A (en) * | 1987-12-29 | 1990-04-03 | At&T Bell Laboratories | Automatic speech recognition using echo cancellation |
US5142677A (en) * | 1989-05-04 | 1992-08-25 | Texas Instruments Incorporated | Context switching devices, systems and methods |
US6134578A (en) * | 1989-05-04 | 2000-10-17 | Texas Instruments Incorporated | Data processing device and method of operation with context switching |
US5189500A (en) * | 1989-09-22 | 1993-02-23 | Mitsubishi Denki Kabushiki Kaisha | Multi-layer type semiconductor device with semiconductor element layers stacked in opposite directions and manufacturing method thereof |
US5200564A (en) * | 1990-06-29 | 1993-04-06 | Casio Computer Co., Ltd. | Digital information processing apparatus with multiple CPUs |
US5341507A (en) * | 1990-07-17 | 1994-08-23 | Mitsubishi Denki Kabushiki Kaisha | Data drive type information processor having both simple and high function instruction processing units |
US5594784A (en) * | 1993-04-27 | 1997-01-14 | Southwestern Bell Technology Resources, Inc. | Apparatus and method for transparent telephony utilizing speech-based signaling for initiating and handling calls |
US5861336A (en) * | 1993-07-12 | 1999-01-19 | Peregrine Semiconductor Corporation | High-frequency wireless communication system on a single ultrathin silicon on sapphire chip |
US5883396A (en) * | 1993-07-12 | 1999-03-16 | Peregrine Semiconductor Corporation | High-frequency wireless communication system on a single ultrathin silicon on sapphire chip |
US5663570A (en) * | 1993-07-12 | 1997-09-02 | Peregrine Semiconductor Corporation | High-frequency wireless communication system on a single ultrathin silicon on sapphire chip |
US5492857A (en) * | 1993-07-12 | 1996-02-20 | Peregrine Semiconductor Corporation | High-frequency wireless communication system on a single ultrathin silicon on sapphire chip |
US6057555A (en) * | 1993-07-12 | 2000-05-02 | Peregrine Semiconductor Corporation | High-frequency wireless communication system on a single ultrathin silicon on sapphire chip |
US5363404A (en) * | 1993-07-13 | 1994-11-08 | Motorola Inc. | Apparatus and method for conveying information in a communication network |
US5956517A (en) * | 1995-04-12 | 1999-09-21 | Sharp Kabushiki Kaisha | Data driven information processor |
US5724356A (en) * | 1995-04-28 | 1998-03-03 | Multi-Tech Systems, Inc. | Advanced bridge/router local area network modem node |
US5860019A (en) * | 1995-07-10 | 1999-01-12 | Sharp Kabushiki Kaisha | Data driven information processor having pipeline processing units connected in series including processing portions connected in parallel |
US5991308A (en) * | 1995-08-25 | 1999-11-23 | Terayon Communication Systems, Inc. | Lower overhead method for data transmission using ATM and SCDMA over hybrid fiber coax cable plant |
US5872991A (en) * | 1995-10-18 | 1999-02-16 | Sharp, Kabushiki, Kaisha | Data driven information processor for processing data packet including common identification information and plurality of pieces of data |
US5956518A (en) * | 1996-04-11 | 1999-09-21 | Massachusetts Institute Of Technology | Intermediate-grain reconfigurable processing device |
US5923761A (en) * | 1996-05-24 | 1999-07-13 | Lsi Logic Corporation | Single chip solution for multimedia GSM mobile station systems |
US5941958A (en) * | 1996-06-20 | 1999-08-24 | Daewoo Telecom Ltd. | Duplicated data communications network adaptor including a pair of control boards and interface boards
US6574217B1 (en) * | 1996-11-27 | 2003-06-03 | Alcatel Usa Sourcing, L.P. | Telecommunications switch for providing telephony traffic integrated with video information services |
US6047372A (en) * | 1996-12-02 | 2000-04-04 | Compaq Computer Corp. | Apparatus for routing one operand to an arithmetic logic unit from a fixed register slot and another operand from any register slot |
US6226266B1 (en) * | 1996-12-13 | 2001-05-01 | Cisco Technology, Inc. | End-to-end delay estimation in high speed communication networks |
US6304551B1 (en) * | 1997-03-21 | 2001-10-16 | Nec Usa, Inc. | Real-time estimation and dynamic renegotiation of UPC values for arbitrary traffic sources in ATM networks |
US6075788A (en) * | 1997-06-02 | 2000-06-13 | Lsi Logic Corporation | Sonet physical layer device having ATM and PPP interfaces |
US6839352B1 (en) * | 1997-06-02 | 2005-01-04 | Lsi Logic Corporation | SONET physical layer device having ATM and PPP interfaces |
US6067595A (en) * | 1997-09-23 | 2000-05-23 | Icore Technologies, Inc. | Method and apparatus for enabling high-performance intelligent I/O subsystems using multi-port memories |
US5915123A (en) * | 1997-10-31 | 1999-06-22 | Silicon Spice | Method and apparatus for controlling configuration memory contexts of processing elements in a network of multiple context processing elements |
US6108760A (en) * | 1997-10-31 | 2000-08-22 | Silicon Spice | Method and apparatus for position independent reconfiguration in a network of multiple context processing elements |
US6122719A (en) * | 1997-10-31 | 2000-09-19 | Silicon Spice | Method and apparatus for retiming in a network of multiple context processing elements |
US6349098B1 (en) * | 1998-04-17 | 2002-02-19 | Paxonet Communications, Inc. | Method and apparatus for forming a virtual circuit |
US6226735B1 (en) * | 1998-05-08 | 2001-05-01 | Broadcom | Method and apparatus for configuring arbitrary sized data paths comprising multiple context processing elements |
US6697345B1 (en) * | 1998-07-24 | 2004-02-24 | Hughes Electronics Corporation | Multi-transport mode radio communications having synchronous and asynchronous transport mode capability |
US20070150700A1 (en) * | 1998-09-14 | 2007-06-28 | The Board Of Trustees Of The Leland Stanford Junior University | System and method for performing efficient conditional vector operations for data parallel architectures involving both input and conditional vector values |
US6269435B1 (en) * | 1998-09-14 | 2001-07-31 | The Board Of Trustees Of The Leland Stanford Junior University | System and method for implementing conditional vector operations in which an input vector containing multiple operands to be used in conditional operations is divided into two or more output vectors based on a condition vector |
US20010027392A1 (en) * | 1998-09-29 | 2001-10-04 | William M. Wiese | System and method for processing data from and for multiple channels |
US6768774B1 (en) * | 1998-11-09 | 2004-07-27 | Broadcom Corporation | Video and graphics system with video scaling |
US6798420B1 (en) * | 1998-11-09 | 2004-09-28 | Broadcom Corporation | Video and graphics system with a single-port RAM |
US6597689B1 (en) * | 1998-12-30 | 2003-07-22 | Nortel Networks Limited | SVC signaling system and method |
US6751233B1 (en) * | 1999-01-08 | 2004-06-15 | Cisco Technology, Inc. | UTOPIA 2—UTOPIA 3 translator |
US6522688B1 (en) * | 1999-01-14 | 2003-02-18 | Eric Morgan Dowling | PCM codec and modem for 56K bi-directional transmission |
US6519259B1 (en) * | 1999-02-18 | 2003-02-11 | Avaya Technology Corp. | Methods and apparatus for improved transmission of voice information in packet-based communication systems |
US6628658B1 (en) * | 1999-02-23 | 2003-09-30 | Siemens Aktiengesellschaft | Time-critical control of data to a sequentially controlled interface with asynchronous data transmission |
US7110358B1 (en) * | 1999-05-14 | 2006-09-19 | Pmc-Sierra, Inc. | Method and apparatus for managing data traffic between a high capacity source and multiple destinations |
US6747977B1 (en) * | 1999-06-30 | 2004-06-08 | Nortel Networks Limited | Packet interface and method of packetizing information |
US7031341B2 (en) * | 1999-07-27 | 2006-04-18 | Wuhan Research Institute Of Post And Communications, Mii. | Interfacing apparatus and method for adapting Ethernet directly to physical channel |
US20070239967A1 (en) * | 1999-08-13 | 2007-10-11 | Mips Technologies, Inc. | High-performance RISC-DSP |
US6580727B1 (en) * | 1999-08-20 | 2003-06-17 | Texas Instruments Incorporated | Element management system for a digital subscriber line access multiplexer |
US6580793B1 (en) * | 1999-08-31 | 2003-06-17 | Lucent Technologies Inc. | Method and apparatus for echo cancellation with self-deactivation |
US6853385B1 (en) * | 1999-11-09 | 2005-02-08 | Broadcom Corporation | Video, audio and graphics decode, composite and display system |
US6573905B1 (en) * | 1999-11-09 | 2003-06-03 | Broadcom Corporation | Video and graphics system with parallel processing of graphics windows |
US6640239B1 (en) * | 1999-11-10 | 2003-10-28 | Garuda Network Corporation | Apparatus and method for intelligent scalable switching network |
US20020136620A1 (en) * | 1999-12-03 | 2002-09-26 | Jan Berends | Vehicle blocking device |
US20030004697A1 (en) * | 2000-01-24 | 2003-01-02 | Ferris Gavin Robert | Method of designing, modelling or fabricating a communications baseband stack |
US20020008256A1 (en) * | 2000-03-01 | 2002-01-24 | Ming-Kang Liu | Scaleable architecture for multiple-port, system-on-chip ADSL communications systems |
US6807167B1 (en) * | 2000-03-08 | 2004-10-19 | Lucent Technologies Inc. | Line card for supporting circuit and packet switching |
US6631135B1 (en) * | 2000-03-27 | 2003-10-07 | Nortel Networks Limited | Method and apparatus for negotiating quality-of-service parameters for a network connection |
US6751224B1 (en) * | 2000-03-30 | 2004-06-15 | Azanda Network Devices, Inc. | Integrated ATM/packet segmentation-and-reassembly engine for handling both packet and ATM input data and for outputting both ATM and packet data |
US6934937B1 (en) * | 2000-03-30 | 2005-08-23 | Broadcom Corporation | Multi-channel, multi-service debug on a pipelined CPU architecture |
US6810039B1 (en) * | 2000-03-30 | 2004-10-26 | Azanda Network Devices, Inc. | Processor-based architecture for facilitating integrated data transfer between both atm and packet traffic with a packet bus or packet link, including bidirectional atm-to-packet functionally for atm traffic |
US6795396B1 (en) * | 2000-05-02 | 2004-09-21 | Teledata Networks, Ltd. | ATM buffer system |
US20020031141A1 (en) * | 2000-05-25 | 2002-03-14 | Mcwilliams Patrick | Method of detecting back pressure in a communication system using an utopia-LVDS bridge |
US20020009089A1 (en) * | 2000-05-25 | 2002-01-24 | Mcwilliams Patrick | Method and apparatus for establishing frame synchronization in a communication system using an UTOPIA-LVDS bridge |
US20020031132A1 (en) * | 2000-05-25 | 2002-03-14 | Mcwilliams Patrick | UTOPIA-LVDS bridge |
US20040202173A1 (en) * | 2000-06-14 | 2004-10-14 | Yoon Chang Bae | Utopia level interface in ATM multiplexing/demultiplexing assembly |
US20020059426A1 (en) * | 2000-06-30 | 2002-05-16 | Mariner Networks, Inc. | Technique for assigning schedule resources to multiple ports in correct proportions |
US20020034162A1 (en) * | 2000-06-30 | 2002-03-21 | Brinkerhoff Kenneth W. | Technique for implementing fractional interval times for fine granularity bandwidth allocation |
US6707821B1 (en) * | 2000-07-11 | 2004-03-16 | Cisco Technology, Inc. | Time-sensitive-packet jitter and latency minimization on a shared data link |
US6892324B1 (en) * | 2000-07-19 | 2005-05-10 | Broadcom Corporation | Multi-channel, multi-service debug |
US6751723B1 (en) * | 2000-09-02 | 2004-06-15 | Actel Corporation | Field programmable gate array and microcontroller system-on-a-chip |
US6738358B2 (en) * | 2000-09-09 | 2004-05-18 | Intel Corporation | Network echo canceller for integrated telecommunications processing |
US20030046457A1 (en) * | 2000-10-02 | 2003-03-06 | Shakuntala Anjanaiah | Apparatus and method for an interface unit for data transfer between processing units in the asynchronous transfer mode |
US20040109468A1 (en) * | 2000-10-02 | 2004-06-10 | Shakuntala Anjanaiah | Apparatus and method for input clock signal detection in an asynchronous transfer mode interface unit |
US20030076839A1 (en) * | 2000-10-02 | 2003-04-24 | Martin Li | Apparatus and method for an interface unit for data transfer between a host processing unit and a multi-target digital signal processing unit in an asynchronous transfer mode |
US6631130B1 (en) * | 2000-11-21 | 2003-10-07 | Transwitch Corporation | Method and apparatus for switching ATM, TDM, and packet data through a single communications switch while maintaining TDM timing |
US6636515B1 (en) * | 2000-11-21 | 2003-10-21 | Transwitch Corporation | Method for switching ATM, TDM, and packet data through a single communications switch |
US20020112097A1 (en) * | 2000-11-29 | 2002-08-15 | Rajko Milovanovic | Media accelerator quality of service |
US6763018B1 (en) * | 2000-11-30 | 2004-07-13 | 3Com Corporation | Distributed protocol processing and packet forwarding using tunneling protocols |
US6754804B1 (en) * | 2000-12-29 | 2004-06-22 | Mips Technologies, Inc. | Coprocessor interface transferring multiple instructions simultaneously along with issue path designation and/or issue order designation for the instructions |
US20020101982A1 (en) * | 2001-01-30 | 2002-08-01 | Hammam Elabd | Line echo canceller scalable to multiple voice channels/ports |
US20020131421A1 (en) * | 2001-03-13 | 2002-09-19 | Adc Telecommunications Israel Ltd. | ATM linked list buffer system |
US7215672B2 (en) * | 2001-03-13 | 2007-05-08 | Koby Reshef | ATM linked list buffer system |
US6952238B2 (en) * | 2001-05-01 | 2005-10-04 | Koninklijke Philips Electronics N.V. | Method and apparatus for echo cancellation in digital ATV systems using an echo cancellation reference signal |
US6806915B2 (en) * | 2001-05-03 | 2004-10-19 | Koninklijke Philips Electronics N.V. | Method and apparatus for echo cancellation in digital communications using an echo cancellation reference signal |
US20030021339A1 (en) * | 2001-05-03 | 2003-01-30 | Koninklijke Philips Electronics N.V. | Method and apparatus for echo cancellation in digital communications using an echo cancellation reference signal |
US7100026B2 (en) * | 2001-05-30 | 2006-08-29 | The Massachusetts Institute Of Technology | System and method for performing efficient conditional vector operations for data parallel architectures involving both input and conditional vector values |
US6928080B2 (en) * | 2001-06-28 | 2005-08-09 | Intel Corporation | Transporting variable length ATM AAL CPS packets over a non-ATM-specific bus |
US20030002538A1 (en) * | 2001-06-28 | 2003-01-02 | Chen Allen Peilen | Transporting variable length ATM AAL CPS packets over a non-ATM-specific bus |
US6737743B2 (en) * | 2001-07-10 | 2004-05-18 | Kabushiki Kaisha Toshiba | Memory chip and semiconductor device using the memory chip and manufacturing method of those |
US6728209B2 (en) * | 2001-07-25 | 2004-04-27 | Overture Networks, Inc. | Measurement of packet delay variation |
US20030058885A1 (en) * | 2001-09-18 | 2003-03-27 | Sorenson Donald C. | Multi-carrier frequency-division multiplexing (FDM) architecture for high speed digital service in local networks |
US7218901B1 (en) * | 2001-09-18 | 2007-05-15 | Scientific-Atlanta, Inc. | Automatic frequency control of multiple channels |
US20030053484A1 (en) * | 2001-09-18 | 2003-03-20 | Sorenson Donald C. | Multi-carrier frequency-division multiplexing (FDM) architecture for high speed digital service |
US20030053493A1 (en) * | 2001-09-18 | 2003-03-20 | Joseph Graham Mobley | Allocation of bit streams for communication over-multi-carrier frequency-division multiplexing (FDM) |
US6959376B1 (en) * | 2001-10-11 | 2005-10-25 | Lsi Logic Corporation | Integrated circuit containing multiple digital signal processors |
US7051246B2 (en) * | 2003-01-15 | 2006-05-23 | Lucent Technologies Inc. | Method for estimating clock skew within a communications network |
Cited By (142)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010053147A1 (en) * | 2000-08-04 | 2001-12-20 | Nec Corporation | Synchronous data transmission system |
US7450601B2 (en) | 2000-12-22 | 2008-11-11 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and communication apparatus for controlling a jitter buffer |
US20040076191A1 (en) * | 2000-12-22 | 2004-04-22 | Jim Sundqvist | Method and a communication apparatus in a communication system
US7139273B2 (en) * | 2002-01-23 | 2006-11-21 | Terasync Ltd. | System and method for synchronizing between communication terminals of asynchronous packets networks |
US20040213238A1 (en) * | 2002-01-23 | 2004-10-28 | Terasync Ltd. | System and method for synchronizing between communication terminals of asynchronous packets networks |
US20050169245A1 (en) * | 2002-03-04 | 2005-08-04 | Lars Hindersson | Arrangement and a method for handling an audio signal |
US7126957B1 (en) * | 2002-03-07 | 2006-10-24 | Utstarcom, Inc. | Media flow method for transferring real-time data between asynchronous and synchronous networks |
US20040085963A1 (en) * | 2002-05-24 | 2004-05-06 | Zarlink Semiconductor Limited | Method of organizing data packets |
US8046486B2 (en) | 2002-06-05 | 2011-10-25 | Lee Capital Llc | Active client buffer management |
US20090285282A1 (en) * | 2002-06-05 | 2009-11-19 | Israel Amir | Active client buffer management method, system, and apparatus |
US7581019B1 (en) * | 2002-06-05 | 2009-08-25 | Israel Amir | Active client buffer management method, system, and apparatus |
US7307998B1 (en) * | 2002-08-27 | 2007-12-11 | 3Com Corporation | Computer system and network interface supporting dynamically optimized receive buffer queues |
US20040131067A1 (en) * | 2002-09-24 | 2004-07-08 | Brian Cheng | Adaptive predictive playout scheme for packet voice applications |
US8787196B2 (en) * | 2002-10-02 | 2014-07-22 | At&T Intellectual Property Ii, L.P. | Method of providing voice over IP at predefined QOS levels |
US20130208614A1 (en) * | 2002-10-02 | 2013-08-15 | At&T Corp. | Method of providing voice over ip at predefined qos levels |
US20070185849A1 (en) * | 2002-11-26 | 2007-08-09 | Bapiraju Vinnakota | Data structure traversal instructions for packet processing |
US20050025151A1 (en) * | 2003-02-11 | 2005-02-03 | Alcatel | Early-processing request for an active router |
US20040160948A1 (en) * | 2003-02-19 | 2004-08-19 | Mitsubishi Denki Kabushiki Kaisha | IP network communication apparatus |
US7554915B2 (en) * | 2003-03-20 | 2009-06-30 | Siemens Aktiengesellschaft | Method and a jitter buffer regulating circuit for regulating a jitter buffer |
US20040184488A1 (en) * | 2003-03-20 | 2004-09-23 | Wolfgang Bauer | Method and a jitter buffer regulating circuit for regulating a jitter buffer |
US7542465B2 (en) * | 2003-03-28 | 2009-06-02 | Broadcom Corporation | Optimization of decoder instance memory consumed by the jitter control module |
US20040190508A1 (en) * | 2003-03-28 | 2004-09-30 | Philip Houghton | Optimization of decoder instance memory consumed by the jitter control module |
US20050041692A1 (en) * | 2003-08-22 | 2005-02-24 | Thomas Kallstenius | Remote synchronization in packet-switched networks |
US7415044B2 (en) | 2003-08-22 | 2008-08-19 | Telefonaktiebolaget Lm Ericsson (Publ) | Remote synchronization in packet-switched networks |
WO2005046133A1 (en) * | 2003-11-11 | 2005-05-19 | Telefonaktiebolaget Lm Ericsson (Publ) | Adapting playout buffer based on audio burst length |
US7379466B2 (en) * | 2004-04-17 | 2008-05-27 | Innomedia Pte Ltd | In band signal detection and presentation for IP phone |
US20050232309A1 (en) * | 2004-04-17 | 2005-10-20 | Innomedia Pte Ltd. | In band signal detection and presentation for IP phone |
US20060088000A1 (en) * | 2004-10-27 | 2006-04-27 | Hans Hannu | Terminal having plural playback pointers for jitter buffer |
TWI400933B (en) * | 2004-10-27 | 2013-07-01 | Ericsson Telefon Ab L M | Terminal for receiving transmissions in a form of a media stream and method of operating the same |
WO2006046904A1 (en) * | 2004-10-27 | 2006-05-04 | Telefonaktiebolaget Lm Ericsson (Publ) | Terminal having plural playback pointers for jitter buffer |
US7970020B2 (en) | 2004-10-27 | 2011-06-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Terminal having plural playback pointers for jitter buffer |
US8320446B2 (en) | 2004-11-24 | 2012-11-27 | Qformx, Inc. | System for transmission of synchronous video with compression through channels with varying transmission delay |
US20060126515A1 (en) * | 2004-12-15 | 2006-06-15 | Ward Robert G | Filtering wireless network packets |
US7630318B2 (en) * | 2004-12-15 | 2009-12-08 | Agilent Technologies, Inc. | Filtering wireless network packets |
US20060184261A1 (en) * | 2005-02-16 | 2006-08-17 | Adaptec, Inc. | Method and system for reducing audio latency |
US7672742B2 (en) * | 2005-02-16 | 2010-03-02 | Adaptec, Inc. | Method and system for reducing audio latency |
US20060248404A1 (en) * | 2005-04-29 | 2006-11-02 | Microsoft Corporation | System and Method for Providing a Window Management Mode |
US7646726B2 (en) | 2005-06-09 | 2010-01-12 | At&T Intellectual Property 1, L.P. | System for detecting packetization delay of packets in a network |
US20060280163A1 (en) * | 2005-06-09 | 2006-12-14 | Yongdong Zhao | System for detecting packetization delay of packets in a network |
US20090245249A1 (en) * | 2005-08-29 | 2009-10-01 | Nec Corporation | Multicast node apparatus, multicast transfer method and program |
US7577875B2 (en) * | 2005-09-14 | 2009-08-18 | Microsoft Corporation | Statistical analysis of sampled profile data in the identification of significant software test performance regressions |
US20070061626A1 (en) * | 2005-09-14 | 2007-03-15 | Microsoft Corporation | Statistical analysis of sampled profile data in the identification of significant software test performance regressions |
US7746847B2 (en) * | 2005-09-20 | 2010-06-29 | Intel Corporation | Jitter buffer management in a packet-based network |
US20070064679A1 (en) * | 2005-09-20 | 2007-03-22 | Intel Corporation | Jitter buffer management in a packet-based network |
US7529189B2 (en) * | 2005-09-29 | 2009-05-05 | Via Technologies, Inc. | Mechanism for imposing a consistent delay on information sets received from a variable rate information stream |
US20070071022A1 (en) * | 2005-09-29 | 2007-03-29 | Eric Pan | Mechanism for imposing a consistent delay on information sets received from a variable rate information stream |
US7680153B2 (en) * | 2005-10-11 | 2010-03-16 | Hui Ma | Method and device for stream synchronization of real-time multimedia transport over packet network |
US20070081562A1 (en) * | 2005-10-11 | 2007-04-12 | Hui Ma | Method and device for stream synchronization of real-time multimedia transport over packet network |
US8275949B2 (en) * | 2005-12-13 | 2012-09-25 | International Business Machines Corporation | System support storage and computer system |
US20070136508A1 (en) * | 2005-12-13 | 2007-06-14 | Reiner Rieke | System Support Storage and Computer System |
US20070147250A1 (en) * | 2005-12-22 | 2007-06-28 | Druke Michael B | Synchronous data communication |
US8054752B2 (en) | 2005-12-22 | 2011-11-08 | Intuitive Surgical Operations, Inc. | Synchronous data communication |
KR101279827B1 (en) * | 2005-12-22 | 2013-07-30 | 인튜어티브 서지컬 인코포레이티드 | Synchronous data communication |
WO2007133292A3 (en) * | 2005-12-22 | 2008-06-26 | Intuitive Surgical Inc | Synchronous data communication |
US20070150631A1 (en) * | 2005-12-22 | 2007-06-28 | Intuitive Surgical Inc. | Multi-priority messaging |
US7757028B2 (en) * | 2005-12-22 | 2010-07-13 | Intuitive Surgical Operations, Inc. | Multi-priority messaging |
US7756036B2 (en) * | 2005-12-22 | 2010-07-13 | Intuitive Surgical Operations, Inc. | Synchronous data communication |
US8838270B2 (en) | 2005-12-22 | 2014-09-16 | Intuitive Surgical Operations, Inc. | Synchronous data communication |
US8090093B2 (en) * | 2006-01-13 | 2012-01-03 | Oki Electric Industry Co., Ltd. | Echo canceller |
US20090129584A1 (en) * | 2006-01-13 | 2009-05-21 | Oki Electric Industry Co., Ltd. | Echo Canceller |
US20070171826A1 (en) * | 2006-01-20 | 2007-07-26 | Anagran, Inc. | System, method, and computer program product for controlling output port utilization |
US8547843B2 (en) * | 2006-01-20 | 2013-10-01 | Saisei Networks Pte Ltd | System, method, and computer program product for controlling output port utilization |
US20070171825A1 (en) * | 2006-01-20 | 2007-07-26 | Anagran, Inc. | System, method, and computer program product for IP flow routing |
US20070195744A1 (en) * | 2006-02-18 | 2007-08-23 | Trainin Solomon | Techniques for 40 megahertz (MHz) channel switching |
US8451808B2 (en) * | 2006-02-18 | 2013-05-28 | Intel Corporation | Techniques for 40 megahertz (MHz) channel switching |
US8462749B2 (en) | 2006-02-18 | 2013-06-11 | Intel Corporation | Techniques for 40 megahertz (MHz) channel switching |
US9781634B2 (en) | 2006-02-18 | 2017-10-03 | Intel Corporation | Techniques for 40 megahertz (MHz) channel switching |
US7965727B2 (en) * | 2006-09-14 | 2011-06-21 | Fujitsu Limited | Broadcast distributing system and broadcast distributing method |
US20080069131A1 (en) * | 2006-09-14 | 2008-03-20 | Fujitsu Limited | Broadcast distributing system, broadcast distributing method, and network apparatus |
US20080147917A1 (en) * | 2006-12-19 | 2008-06-19 | Lees Jeremy J | Method and apparatus for maintaining synchronization of audio in a computing system |
US7774520B2 (en) * | 2006-12-19 | 2010-08-10 | Intel Corporation | Method and apparatus for maintaining synchronization of audio in a computing system |
US20080147918A1 (en) * | 2006-12-19 | 2008-06-19 | Hanebutte Ulf R | Method and apparatus for maintaining synchronization of audio in a computing system |
US7568057B2 (en) * | 2006-12-19 | 2009-07-28 | Intel Corporation | Method and apparatus for maintaining synchronization of audio in a computing system |
US8982744B2 (en) * | 2007-06-06 | 2015-03-17 | Broadcom Corporation | Method and system for a subband acoustic echo canceller with integrated voice activity detection |
US20080306736A1 (en) * | 2007-06-06 | 2008-12-11 | Sumit Sanyal | Method and system for a subband acoustic echo canceller with integrated voice activity detection |
US20100318352A1 (en) * | 2008-02-19 | 2010-12-16 | Herve Taddei | Method and means for encoding background noise information |
US20090268755A1 (en) * | 2008-04-23 | 2009-10-29 | Oki Electric Industry Co., Ltd. | Codec converter, gateway device, and codec converting method |
US8085809B2 (en) * | 2008-04-23 | 2011-12-27 | Oki Electric Industry Co., Ltd. | Codec converter, gateway device, and codec converting method |
WO2009155263A1 (en) * | 2008-06-17 | 2009-12-23 | Integrated Device Technology, Inc. | Circuit for correcting an output clock frequency in a receiving device |
US8135105B2 (en) | 2008-06-17 | 2012-03-13 | Integrated Device Technology, Inc. | Circuit for correcting an output clock frequency in a receiving device |
US20090310729A1 (en) * | 2008-06-17 | 2009-12-17 | Integrated Device Technology, Inc. | Circuit for correcting an output clock frequency in a receiving device |
US20110158263A1 (en) * | 2008-08-28 | 2011-06-30 | Kabushiki Kaisha Toshiba | Transit time fixation device |
US8599883B2 (en) * | 2008-08-28 | 2013-12-03 | Kabushiki Kaisha Toshiba | Transit time fixation device |
US9485352B2 (en) * | 2008-12-05 | 2016-11-01 | AT&T Intellectual Property I, L.P. | Method for measuring processing delays of voice-over IP devices |
US20160014269A1 (en) * | 2008-12-05 | 2016-01-14 | AT&T Intellectual Property I, L.P. | Method for Measuring Processing Delays of Voice-Over IP Devices |
US20130322282A1 (en) * | 2008-12-05 | 2013-12-05 | AT&T Intellectual Property I, L.P. | Method for Measuring Processing Delays of Voice-Over IP Devices |
US9160842B2 (en) * | 2008-12-05 | 2015-10-13 | AT&T Intellectual Property I, L.P. | Method for measuring processing delays of voice-over IP devices |
US8370676B2 (en) * | 2009-03-26 | 2013-02-05 | Sony Corporation | Receiving apparatus and time correction method for receiving apparatus |
US20100250781A1 (en) * | 2009-03-26 | 2010-09-30 | Sony Corporation | Receiving apparatus and time correction method for receiving apparatus |
US20110007638A1 (en) * | 2009-07-09 | 2011-01-13 | Motorola, Inc. | Artificial delay inflation and jitter reduction to improve tcp throughputs |
US8854992B2 (en) * | 2009-07-09 | 2014-10-07 | Motorola Mobility LLC | Artificial delay inflation and jitter reduction to improve TCP throughputs |
US20110051744A1 (en) * | 2009-08-27 | 2011-03-03 | Texas Instruments Incorporated | External memory data management with data regrouping and channel look ahead |
US8249099B2 (en) * | 2009-08-27 | 2012-08-21 | Texas Instruments Incorporated | External memory data management with data regrouping and channel look ahead |
US9036624B2 (en) * | 2009-12-24 | 2015-05-19 | Telecom Italia S.P.A. | Method of scheduling transmission in a communication network, corresponding communication node and computer program product |
US20120250678A1 (en) * | 2009-12-24 | 2012-10-04 | Telecom Italia S.P.A. | Method of scheduling transmission in a communication network, corresponding communication node and computer program product |
US20120123774A1 (en) * | 2010-09-30 | 2012-05-17 | Electronics And Telecommunications Research Institute | Apparatus, electronic apparatus and method for adjusting jitter buffer |
US8843379B2 (en) * | 2010-09-30 | 2014-09-23 | Electronics And Telecommunications Research Institute | Apparatus, electronic apparatus and method for adjusting jitter buffer |
US9497466B2 (en) * | 2011-01-17 | 2016-11-15 | Mediatek Inc. | Buffering apparatus for buffering multi-partition video/image bitstream and related method thereof |
US20120185620A1 (en) * | 2011-01-17 | 2012-07-19 | Chia-Yun Cheng | Buffering apparatus for buffering multi-partition video/image bitstream and related method thereof |
US9538177B2 (en) | 2011-10-31 | 2017-01-03 | Mediatek Inc. | Apparatus and method for buffering context arrays referenced for performing entropy decoding upon multi-tile encoded picture and related entropy decoder |
US10057316B2 (en) | 2012-03-01 | 2018-08-21 | Google Technology Holdings LLC | Managing adaptive streaming of data via a communication connection |
US8874634B2 (en) | 2012-03-01 | 2014-10-28 | Motorola Mobility Llc | Managing adaptive streaming of data via a communication connection |
US9420023B2 (en) | 2012-03-01 | 2016-08-16 | Google Technology Holdings LLC | Managing adaptive streaming of data via a communication connection |
US9787416B2 (en) | 2012-09-07 | 2017-10-10 | Apple Inc. | Adaptive jitter buffer management for networks with varying conditions |
TWI511500B (en) * | 2012-09-07 | 2015-12-01 | Apple Inc | Adaptive jitter buffer management for networks with varying conditions |
WO2014039843A1 (en) * | 2012-09-07 | 2014-03-13 | Apple Inc. | Adaptive jitter buffer management for networks with varying conditions |
US10374786B1 (en) | 2012-10-05 | 2019-08-06 | Integrated Device Technology, Inc. | Methods of estimating frequency skew in networks using timestamped packets |
US12063162B2 (en) * | 2012-12-20 | 2024-08-13 | Dolby Laboratories Licensing Corporation | Controlling a jitter buffer |
US20220263423A9 (en) * | 2012-12-20 | 2022-08-18 | Dolby Laboratories Licensing Corporation | Controlling a jitter buffer |
US11632318B2 (en) | 2014-04-16 | 2023-04-18 | Dolby Laboratories Licensing Corporation | Jitter buffer control based on monitoring of delay jitter and conversational dynamics |
WO2015160617A1 (en) * | 2014-04-16 | 2015-10-22 | Dolby Laboratories Licensing Corporation | Jitter buffer control based on monitoring of delay jitter and conversational dynamics |
US10742531B2 (en) | 2014-04-16 | 2020-08-11 | Dolby Laboratories Licensing Corporation | Jitter buffer control based on monitoring of delay jitter and conversational dynamics |
US20150319212A1 (en) * | 2014-05-02 | 2015-11-05 | Imagination Technologies Limited | Media Controller |
US9985660B2 (en) * | 2014-05-02 | 2018-05-29 | Imagination Technologies Limited | Media controller |
US10680657B2 (en) * | 2014-05-02 | 2020-06-09 | Imagination Technologies Limited | Media controller with jitter buffer |
US20200266839A1 (en) * | 2014-05-02 | 2020-08-20 | Imagination Technologies Limited | Media Controller with Buffer Interface |
US10845868B2 (en) | 2014-10-08 | 2020-11-24 | Apple Inc. | Methods and apparatus for running and booting an inter-processor communication link between independently operable processors |
CN105991477A (en) * | 2015-02-11 | 2016-10-05 | Tencent Technology (Shenzhen) Co., Ltd. | Method and apparatus for adjusting a voice jitter buffer |
US9749886B1 (en) * | 2015-02-16 | 2017-08-29 | Amazon Technologies, Inc. | System for determining metrics of voice communications |
WO2017055091A1 (en) * | 2015-10-01 | 2017-04-06 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for removing jitter in audio data transmission |
US10651976B2 (en) | 2015-10-01 | 2020-05-12 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for removing jitter in audio data transmission |
US10148391B2 (en) | 2015-10-01 | 2018-12-04 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for removing jitter in audio data transmission |
KR102419595B1 (en) | 2016-01-07 | 2022-07-11 | 삼성전자주식회사 | Playout delay adjustment method and Electronic apparatus thereof |
KR20170082901A (en) * | 2016-01-07 | 2017-07-17 | 삼성전자주식회사 | Playout delay adjustment method and Electronic apparatus thereof |
US10841880B2 (en) | 2016-01-27 | 2020-11-17 | Apple Inc. | Apparatus and methods for wake-limiting with an inter-device communication link |
US10846237B2 (en) | 2016-02-29 | 2020-11-24 | Apple Inc. | Methods and apparatus for locking at least a portion of a shared memory resource |
US10853272B2 (en) | 2016-03-31 | 2020-12-01 | Apple Inc. | Memory access protection apparatus and methods for memory mapped access between independently operable processors |
CN107592430A (en) * | 2016-07-07 | 2018-01-16 | Tencent Technology (Shenzhen) Co., Ltd. | Echo cancellation method and terminal device |
US10775871B2 (en) | 2016-11-10 | 2020-09-15 | Apple Inc. | Methods and apparatus for providing individualized power control for peripheral sub-systems |
US11809258B2 (en) | 2016-11-10 | 2023-11-07 | Apple Inc. | Methods and apparatus for providing peripheral sub-system stability |
US11068326B2 (en) * | 2017-08-07 | 2021-07-20 | Apple Inc. | Methods and apparatus for transmitting time sensitive data over a tunneled bus interface |
US10789198B2 (en) | 2018-01-09 | 2020-09-29 | Apple Inc. | Methods and apparatus for reduced-latency data transmission with an inter-processor communication link between independently operable processors |
US11381514B2 (en) | 2018-05-07 | 2022-07-05 | Apple Inc. | Methods and apparatus for early delivery of data link layer packets |
US11176064B2 (en) | 2018-05-18 | 2021-11-16 | Apple Inc. | Methods and apparatus for reduced overhead data transfer with a shared ring buffer |
US10838450B2 (en) | 2018-09-28 | 2020-11-17 | Apple Inc. | Methods and apparatus for synchronization of time between independently operable processors |
US11379278B2 (en) | 2018-09-28 | 2022-07-05 | Apple Inc. | Methods and apparatus for correcting out-of-order data transactions between processors |
US10789110B2 (en) | 2018-09-28 | 2020-09-29 | Apple Inc. | Methods and apparatus for correcting out-of-order data transactions between processors |
US11243560B2 (en) | 2018-09-28 | 2022-02-08 | Apple Inc. | Methods and apparatus for synchronization of time between independently operable processors |
US11006296B2 (en) * | 2018-11-19 | 2021-05-11 | Pacesetter, Inc. | Implantable medical device and method for measuring communication quality |
GB2593696B (en) * | 2020-03-30 | 2022-07-13 | British Telecomm | Low latency content delivery |
US12041300B2 (en) | 2020-03-30 | 2024-07-16 | British Telecommunications Public Limited Company | Low latency content delivery |
GB2593696A (en) * | 2020-03-30 | 2021-10-06 | British Telecomm | Low latency content delivery |
Also Published As
Publication number | Publication date |
---|---|
US7835280B2 (en) | 2010-11-16 |
US20110141889A1 (en) | 2011-06-16 |
US20090316580A1 (en) | 2009-12-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7835280B2 (en) | Methods and systems for managing variable delays in packet transmission | |
US7516320B2 (en) | Distributed processing architecture with scalable processing layers | |
US20080126812A1 (en) | Integrated Architecture for the Unified Processing of Visual Media | |
EP1137226B1 (en) | Improved packet scheduling of real time information over a packet network | |
EP0873037B1 (en) | Traffic shaper for ATM network using dual leaky bucket regulator | |
AU2008330261B2 (en) | Play-out delay estimation | |
US7089390B2 (en) | Apparatus and method to reduce memory footprints in processor architectures | |
US8170007B2 (en) | Packet telephony appliance | |
US7126957B1 (en) | Media flow method for transferring real-time data between asynchronous and synchronous networks | |
US6038232A (en) | MPEG-2 multiplexer for ATM network adaptation | |
US7035250B2 (en) | System for organizing voice channel data for network transmission and/or reception | |
US20030163675A1 (en) | Context switching system for a multi-thread execution pipeline loop and method of operation thereof | |
US6650650B1 (en) | Method and apparatus for transmitting voice data over network structures | |
US7668982B2 (en) | System and method for synchronous processing of media data on an asynchronous processor | |
US6985477B2 (en) | Method and apparatus for supporting multiservice digital signal processing applications | |
JP2001045005A (en) | Method for generating atm cell for low bit rate application | |
US7542465B2 (en) | Optimization of decoder instance memory consumed by the jitter control module | |
US6694373B1 (en) | Method and apparatus for hitless switchover of a voice connection in a voice processing module | |
JP2000032009A (en) | Minimizing device for cell delay change in communication system supporting connection of multiplex fixed bit rate | |
JP2001094611A (en) | Device for processing voice and fax data in remotely connected server | |
US6894998B1 (en) | Asynchronous packet processing using media stream functions | |
JP2001339398A (en) | Scheduling circuit | |
JPH10233809A (en) | Concentrator and converter | |
JP2004260723A (en) | Sound source packet copy method and device | |
Baker | Speech transport for packet telephony and voice over IP |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: AVAZ NETWORKS INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PANG, JON LAURENT PANG;USMAN, MOHAMMAD;KHAN, SHOAB AHMAD;AND OTHERS;REEL/FRAME:013586/0092 Effective date: 20021209 |
|
AS | Assignment |
Owner name: QUARTICS, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:CMA BUSINESS CREDIT SERVICES ON BEHALF OF AVAZ NETWORKS, INC.;REEL/FRAME:015758/0372 Effective date: 20030801 |
|
AS | Assignment |
Owner name: COMERICA BANK, CALIFORNIA Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:021773/0871 Effective date: 20081028 Owner name: HERCULES TECHNOLOGY GROWTH CAPITAL, INC., CALIFORN Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:021773/0871 Effective date: 20081028 |
|
AS | Assignment |
Owner name: THE SAFI QURESHEY FAMILY TRUST DATED MAY 21, 1984, Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:021924/0742 Effective date: 20081126 Owner name: FOUNDATION CAPITAL IV, L.P., CALIFORNIA Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:021924/0742 Effective date: 20081126 Owner name: FV INVESTORS III, L.P., CALIFORNIA Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:021924/0742 Effective date: 20081126 Owner name: FOCUS VENTURES III, L.P., CALIFORNIA Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:021924/0742 Effective date: 20081126 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO PAY ISSUE FEE |
|
AS | Assignment |
Owner name: GIRISH PATEL AND PRAGATI PATEL, TRUSTEE OF THE GIR Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:026923/0001 Effective date: 20101013 |
|
AS | Assignment |
Owner name: GREEN SEQUOIA LP, CALIFORNIA Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:028024/0001 Effective date: 20101013 Owner name: MEYYAPPAN-KANNAPPAN FAMILY TRUST, CALIFORNIA Free format text: SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:028024/0001 Effective date: 20101013 |
|
AS | Assignment |
Owner name: SEVEN HILLS GROUP USA, LLC, CALIFORNIA Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:028054/0791 Effective date: 20101013 Owner name: AUGUSTUS VENTURES LIMITED, ISLE OF MAN Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:028054/0791 Effective date: 20101013 Owner name: SIENA HOLDINGS LIMITED Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:028054/0791 Effective date: 20101013 Owner name: HERIOT HOLDINGS LIMITED, SWITZERLAND Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:028054/0791 Effective date: 20101013 Owner name: CASTLE HILL INVESTMENT HOLDINGS LIMITED Free format text: INTELLECTUAL PROPERTY SECURITY AGREEMENT;ASSIGNOR:QUARTICS, INC.;REEL/FRAME:028054/0791 Effective date: 20101013 |