EP3255841A1 - Packet processing method and apparatus

Packet processing method and apparatus

Info

Publication number
EP3255841A1
Authority
EP
European Patent Office
Prior art keywords
latency
packet
network device
storage unit
time
Prior art date
Legal status
Granted
Application number
EP15892862.2A
Other languages
German (de)
French (fr)
Other versions
EP3255841A4 (en)
EP3255841B1 (en)
Inventor
Cong CHEN
Zhu CHENG
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of EP3255841A1
Publication of EP3255841A4
Application granted
Publication of EP3255841B1
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/56 Queue scheduling implementing delay-aware scheduling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 5/00 Methods or arrangements for data conversion without changing the order or content of the data handled
    • G06F 5/06 Methods or arrangements for data conversion without changing the order or content of the data handled for changing the speed of data flow, i.e. speed regularising or timing, e.g. delay lines, FIFO buffers; over- or underrun control therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 Data switching networks
    • H04L 12/54 Store-and-forward switching systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0852 Delays
    • H04L 43/0858 One way delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04J MULTIPLEX COMMUNICATION
    • H04J 3/00 Time-division multiplex systems
    • H04J 3/02 Details
    • H04J 3/06 Synchronising arrangements
    • H04J 3/0635 Clock or time synchronisation in a network
    • H04J 3/0638 Clock or time synchronisation among nodes; Internode synchronisation
    • H04J 3/0658 Clock or time synchronisation among packet nodes
    • H04J 3/0661 Clock or time synchronisation among packet nodes using timestamps
    • H04J 3/0667 Bidirectional timestamps, e.g. NTP or PTP for compensation of clock drift and for compensation of propagation delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/22 Traffic shaping
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements
    • H04L 49/901 Buffering arrangements using storage descriptor, e.g. read or write pointers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 88/00 Devices specially adapted for wireless communication networks, e.g. terminals, base stations or access point devices
    • H04W 88/08 Access point devices
    • H04W 88/085 Access point devices with remote components

Definitions

  • the present invention relates to the field of communications technologies, and in particular, to a packet processing method and apparatus.
  • a packet may need to pass through a forwarding device when being transmitted in a network.
  • a latency may be generated when the packet passes through the forwarding device.
  • a latency of the packet in a transmission path may include the latency generated when the packet passes through the forwarding device.
  • Latencies generated when different packets pass through the forwarding device may be unequal. Therefore, latencies of the different packets in a transmission path may be unequal.
  • the foregoing case may be caused because processing performed by the forwarding device on the different packets is different.
  • time intervals needed for the table lookup operations corresponding to the different packets may be unequal.
  • a phenomenon that latencies of different packets in a transmission path are unequal may be referred to as latency variation. Latency variation is unacceptable for some services.
  • CPRI (Common Public Radio Interface)
  • SDH (Synchronous Digital Hierarchy)
  • PDH (Plesiochronous Digital Hierarchy)
  • BBU (Baseband Unit)
  • RRU (Remote Radio Unit)
  • in the prior art, a latency generated when a packet passes through a forwarding device cannot be made equal to a specified value.
  • in the technical solutions provided herein, a latency generated when a packet passes through a forwarding device can be made equal to a specified value.
  • a packet processing method includes:
  • the setting, by the first network device, a write pointer according to the determined first latency specifically includes:
  • the determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory specifically includes:
  • a clock frequency at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous; and a clock phase at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • the receiving, by a first network device, a packet at a first time includes:
  • a packet processing method includes:
  • the setting, by the first network device, a write pointer according to the first latency specifically includes:
  • the determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory specifically includes:
  • a precision clock synchronization protocol or the Network Time Protocol is used to perform time synchronization between the first network device and the second network device.
  • the receiving, by a first network device, a packet includes:
  • a packet processing apparatus includes:
  • the setting unit is specifically configured to:
  • the setting unit is specifically configured to:
  • a clock frequency at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous; and a clock phase at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • the receiving unit is specifically configured to:
  • a packet processing apparatus includes:
  • the setting unit is specifically configured to:
  • the setting unit is specifically configured to:
  • a precision clock synchronization protocol or the Network Time Protocol is used to perform time synchronization between the apparatus and the second network device.
  • the receiving unit is specifically configured to:
  • a first network device processes the packet and determines a first latency of the processed packet in a FIFO memory, where: the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the second latency includes a third latency, and the third latency includes a time interval for processing the packet. That is, the time interval for processing the packet is taken into account when the first latency is determined.
  • determining the first latency in this way enables the latency generated when the packet passes through the first network device to be equal to the target latency. Therefore, in the foregoing technical solutions, the latency generated when a packet passes through a network device can be made equal to a specified value.
  • FIG. 1 is a schematic flowchart of a packet processing method according to an embodiment of the present invention. The method includes the following steps.
  • a first network device receives a packet at a first time.
  • the first network device may be a PTN (Packet Transport Network) device, an OTN (Optical Transport Network) device, a router, or a switch.
  • the first time in this embodiment of the present invention is a time at which the first network device receives the packet.
  • a service carried by the packet may be a CPRI service, an SDH service, or a PDH service.
  • the first network device may record the first time at which the packet is received.
  • the first network device may record the first time in a packet header of the packet.
  • the first network device may determine, by reading the packet header of the packet, the first time at which the packet is received.
  • the first network device may also record the first time in a storage medium of the first network device.
  • the first network device may determine, by reading the first time from the storage medium, a time at which the packet is received.
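  • The following is a minimal illustrative sketch (not part of the claims; the metadata layout and the field name first_time_ns are assumptions) of how the first time can be recorded when the packet is received, whether it is carried along with the packet or kept in a storage medium of the first network device.

```c
/* Hedged sketch: the packet_md layout is an assumption made for illustration;
 * the patent does not define a metadata or header format. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

typedef struct {
    uint64_t first_time_ns;   /* first time: when the packet was received (hypothetical field) */
    uint16_t length;          /* packet length in bytes */
    uint8_t  data[2048];      /* packet contents */
} packet_md;

/* Called when the receiver circuit delivers a packet (corresponds to S101). */
static void record_first_time(packet_md *pkt) {
    struct timespec now;
    clock_gettime(CLOCK_REALTIME, &now);  /* read the device clock */
    pkt->first_time_ns = (uint64_t)now.tv_sec * 1000000000ULL + (uint64_t)now.tv_nsec;
}

int main(void) {
    packet_md pkt = { .length = 0 };
    record_first_time(&pkt);
    printf("first time recorded: %llu ns\n", (unsigned long long)pkt.first_time_ns);
    return 0;
}
```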
  • S101 may be performed by a receiver circuit in the first network device.
  • the receiver circuit may be configured to implement an Ethernet interface.
  • the first network device processes the packet to obtain a processed packet, and writes the processed packet into a buffer memory.
  • the processing performed by the first network device on the packet may be coding, decoding, encryption, or decryption.
  • the processing may be determining, by searching a Media Access Control (MAC) protocol table, an egress interface for forwarding the packet.
  • the processing may be determining, by searching a routing table, an egress interface for forwarding the packet.
  • the buffer memory is a memory for storing the processed packet.
  • the buffer memory may be a component of the first network device, and is a memory located inside the first network device. Alternatively, the buffer memory may be a memory located outside the first network device.
  • the buffer memory may be coupled to the receiver circuit.
  • S102 may be performed by an instruction execution circuit in the first network device.
  • the instruction execution circuit may perform the processing on the packet according to an instruction.
  • the instruction execution circuit may be implemented by using a network processor (NP) or an application-specific integrated circuit (ASIC).
  • the first network device reads the processed packet from the buffer memory at a second time.
  • S103 may be performed by the instruction execution circuit in the first network device.
  • the first network device determines, at a time after the second time, a first latency of the processed packet in a first in first out memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the first network device through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the first in first out memory includes multiple contiguous storage units.
  • the multiple contiguous storage units in the FIFO (first in first out) memory may be configured to store a packet queue.
  • Each storage unit is configured to store one packet or null data.
  • the packet queue includes at least one packet.
  • among the multiple packets, a packet that is written into the FIFO memory at an earlier time is located in front of a packet that is written into the FIFO memory at a later time.
  • target latencies of all of the multiple packets in the first network device are equal.
  • a value of the target latency may be statically configured by an engineer on the first network device.
  • the target latency may be equal to a fixed value.
  • the engineer may configure the target latency for the first network device by using Telnet.
  • the engineer may determine the target latency for the first network device by means of an experiment.
  • the following operations are performed on a packet used for the experiment as it passes through the first network device: receiving, through an ingress port, the packet used for the experiment; processing, by the first network device, the packet used for the experiment, thereby obtaining a processed packet used for the experiment; writing the processed packet used for the experiment into the buffer memory; reading the processed packet used for the experiment from the buffer memory; writing the processed packet used for the experiment into the FIFO memory; reading the processed packet used for the experiment from the FIFO memory; and forwarding the processed packet used for the experiment through an egress port. It should be noted that, in the foregoing operations, the step of determining the first latency is not performed.
  • the first network device may be capable of processing multiple services.
  • the multiple services correspond to multiple packets.
  • the packet used for the experiment may include the multiple packets.
  • the first network device can process a service 1, a service 2, and a service 3.
  • the multiple packets are a packet 1, a packet 2, and a packet 3.
  • the packet 1, the packet 2, and the packet 3 correspond to the service 1, the service 2, and the service 3, respectively.
  • Latencies generated when the packet 1, the packet 2, and the packet 3 pass through the first network device are 3 ms, 4 ms, and 5 ms, respectively.
  • a difference between latencies corresponding to different packets is caused because time intervals occupied by the first network device for processing packets of different services are different.
  • time intervals occupied by the first network device for processing the packet 1, the packet 2, and the packet 3 are 0.5 ms, 1 ms, and 2 ms, respectively.
  • a period from a time at which the first network device receives the packet 1 through the ingress port to a time at which the first network device writes the processed packet 1 into the buffer memory is 0.5 ms.
  • a period from a time at which the first network device receives the packet 2 through the ingress port to a time at which the first network device writes the processed packet 2 into the buffer memory is 1 ms.
  • a period from a time at which the first network device receives the packet 3 through the ingress port to a time at which the first network device writes the processed packet 3 into the buffer memory is 2 ms.
  • the engineer may determine the target latency as the maximum value of the latencies generated when the packets used for the experiment pass through the first network device, that is, 5 ms. Certainly, the engineer may also set the target latency to a value greater than this maximum value, for example, 6 ms or 7 ms.
  • the first network device may control the time intervals, that is, the first latencies, of the different packets in the FIFO memory so that the latencies generated when the different packets pass through the first network device are all equal to the target latency, for example, 6 ms, as illustrated in the sketch below.
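  • As a worked illustration of the example above (a minimal sketch, not the claimed implementation; the 0.1 ms fourth latency is an assumed figure, while the 6 ms target and the 0.5 ms, 1 ms, and 2 ms processing intervals come from the example), the first latency of each packet is the target latency minus the sum of its third latency and the fixed fourth latency, so every packet spends the same 6 ms in the first network device.

```c
/* Minimal sketch: compute the per-packet dwell time (first latency) in the FIFO
 * memory so that each packet's total latency equals the configured target.
 * The 0.1 ms fourth (fixed) latency is an assumed value for illustration. */
#include <stdio.h>

int main(void) {
    const double target_latency_ms = 6.0;               /* statically configured target latency   */
    const double fourth_latency_ms = 0.1;                /* fixed fourth latency (assumed value)    */
    const double third_latency_ms[] = {0.5, 1.0, 2.0};   /* processing intervals of packets 1, 2, 3 */

    for (int i = 0; i < 3; i++) {
        double second_latency = third_latency_ms[i] + fourth_latency_ms; /* second = third + fourth */
        double first_latency  = target_latency_ms - second_latency;      /* dwell time in the FIFO  */
        printf("packet %d: first latency %.1f ms, total latency %.1f ms\n",
               i + 1, first_latency, first_latency + second_latency);
    }
    return 0;
}
```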
  • the target latency of the packet in the first network device includes three parts: the first latency, the third latency, and the fourth latency.
  • the first network device may enable, by determining the first latency of the processed packet in the FIFO memory, a value of the target latency of the packet in the first network device to be the value that is statically configured.
  • the first network device may determine, by setting the read pointer and/or the write pointer that are/is of the FIFO memory, the first latency of the packet in the FIFO memory.
  • the third latency is equal to a difference between the second time and the first time.
  • the fourth latency is a fixed latency, and may depend on a hardware structure of the first network device.
  • the buffer memory may connect to the FIFO memory by using a transmission medium.
  • the FIFO memory may connect to the egress port by using a transmission medium.
  • the transmission medium between the buffer memory and the FIFO memory is determined, that is, a physical attribute of the transmission medium between the buffer memory and the FIFO memory is determined. Therefore, a time interval for transmitting a signal over the transmission medium between the buffer memory and the FIFO memory is a fixed value. Likewise, a time interval for transmitting a signal over the transmission medium between the FIFO memory and the egress port is also a fixed value.
  • the first network device sets a read pointer and/or a write pointer according to the determined first latency.
  • the setting the read pointer may be specifically setting a value of the read pointer.
  • the setting the write pointer may be specifically setting a value of the write pointer.
  • the read pointer of the FIFO memory is configured to perform a read operation on a storage unit in the FIFO memory.
  • the write pointer of the FIFO memory is configured to perform a write operation on a storage unit in the FIFO memory.
  • the first network device may determine, according to the first latency, a storage unit to which the read pointer points, so as to set the value of the read pointer to an address of the storage unit.
  • the first network device may determine, according to the first latency, a storage unit to which the write pointer points, so as to set the value of the write pointer to an address of the storage unit.
  • the first network device may determine, according to the first latency, a storage unit to which the read pointer and the write pointer point, so as to set values of the read pointer and the write pointer to an address of the storage unit.
  • the first network device writes, according to the set write pointer, the processed packet into a storage unit in the first in first out memory, or reads, according to the set read pointer, the processed packet from a storage unit in the first in first out memory.
  • after a read operation is performed, the value of the read pointer is increased by 1.
  • the read pointer whose value is increased by 1 points to the next storage unit from which a packet is to be read.
  • after a write operation is performed, the value of the write pointer is increased by 1.
  • the write pointer whose value is increased by 1 points to the next storage unit into which a packet is to be written.
  • the read operation corresponding to the read pointer and the write operation corresponding to the write pointer may be performed synchronously, or may be performed asynchronously.
  • the first network device performs a write operation on the storage unit according to the set write pointer, so as to write the processed packet into the storage unit.
  • the first network device performs a read operation on the storage unit according to the set read pointer, so as to read the processed packet from the storage unit.
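  • The following sketch illustrates one possible way (an assumption for illustration, not the claimed implementation) in which setting the write pointer ahead of the read pointer by a fixed number of storage units realizes the first latency: with synchronous read and write clocks and one read and one write per cycle, a packet written into the FIFO memory is read out exactly that many cycles later.

```c
/* Hedged sketch: FIFO depth, one read and one write per clock cycle, and the
 * 5-cycle first latency are assumptions; the pointer behaviour mirrors the
 * description above (each operation increases the pointer value by 1). */
#include <stdint.h>
#include <stdio.h>

#define FIFO_DEPTH 1024                     /* number of contiguous storage units (assumed) */

typedef struct {
    uint32_t unit[FIFO_DEPTH];              /* each unit holds one packet id, or 0 for null data */
    uint32_t rd;                            /* read pointer: address of the unit to read next    */
    uint32_t wr;                            /* write pointer: address of the unit to write next  */
} fifo_t;

/* Place the write pointer 'delay_cycles' units ahead of the read pointer so that
 * a packet written now is read exactly 'delay_cycles' cycles later. */
static void fifo_set_pointers(fifo_t *f, uint32_t delay_cycles) {
    f->rd = 0;
    f->wr = delay_cycles % FIFO_DEPTH;
}

static void fifo_write(fifo_t *f, uint32_t pkt) {
    f->unit[f->wr] = pkt;
    f->wr = (f->wr + 1) % FIFO_DEPTH;       /* value of the write pointer is increased by 1 */
}

static uint32_t fifo_read(fifo_t *f) {
    uint32_t pkt = f->unit[f->rd];
    f->unit[f->rd] = 0;                     /* leave null data behind */
    f->rd = (f->rd + 1) % FIFO_DEPTH;       /* value of the read pointer is increased by 1 */
    return pkt;
}

int main(void) {
    fifo_t f = {0};
    uint32_t first_latency_cycles = 5;      /* first latency expressed in clock cycles (assumed) */
    fifo_set_pointers(&f, first_latency_cycles);

    /* One write and one read per cycle: the packet written in cycle 0 is read in cycle 5. */
    for (uint32_t cycle = 0; cycle < 10; cycle++) {
        fifo_write(&f, cycle == 0 ? 42u : 0u);      /* write the packet (id 42) in cycle 0 only */
        uint32_t out = fifo_read(&f);
        if (out != 0)
            printf("packet %u leaves the FIFO in cycle %u\n", out, cycle);
    }
    return 0;
}
```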
  • the first network device forwards, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • the FIFO memory performs a read operation to read the processed packet from the storage unit to which the read pointer points.
  • S107 may be performed by a transmitter circuit in the first network device, and the FIFO memory is a component of the first network device.
  • the transmitter circuit is coupled to the FIFO memory.
  • FIG. 2 is a schematic diagram of a latency generated when the packet passes through the first network device, in the method shown in FIG. 1 , according to an embodiment.
  • the packet enters the first network device at the first time, through the ingress port.
  • the packet leaves the first network device at the third time through the egress port.
  • the latency generated when the packet passes through the first network device is equal to the target latency.
  • the target latency is a period from the first time to the third time.
  • the target latency includes the first latency, the third latency, and the fourth latency.
  • the third latency is equal to a period from the first time to the second time.
  • the first time is a time at which the first network device receives the packet through the ingress port.
  • the second time is a time at which the first network device reads the processed packet from the buffer memory.
  • the first network device processes the packet. For example, the first network device may process the packet by using a network processor (not shown in the figure).
  • the first latency is equal to a period from a time at which the processed packet is written into the FIFO memory to a time at which the processed packet is read from the FIFO memory.
  • the fourth latency is a fixed latency.
  • the fourth latency includes a first part and a second part.
  • the first part is a period from a time at which the processed packet is read from the buffer memory to a time at which the processed packet is written into the FIFO memory.
  • the second part is equal to a period from the time at which the processed packet is read from the FIFO memory to a time at which the processed packet is forwarded through the egress port.
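  • The decomposition in FIG. 2 can be summarized as target latency = third latency + first latency + fourth latency, so that first latency = target latency - third latency - fourth latency, where the third latency is measured as the second time minus the first time and the fourth latency is fixed. The sketch below walks through this arithmetic with illustrative numbers (the 0.1 ms fourth latency and its even split between its two parts are assumptions, not values from the patent).

```c
/* Minimal sketch of the FIG. 2 timeline: reconstruct the first latency from the
 * recorded first time, the second time, and the fixed fourth latency. */
#include <stdio.h>

typedef struct {
    double t_first;     /* first time: packet received through the ingress port       */
    double t_second;    /* second time: processed packet read from the buffer memory  */
    double t_fifo_in;   /* time the processed packet is written into the FIFO memory  */
    double t_fifo_out;  /* time the processed packet is read from the FIFO memory     */
    double t_third;     /* third time: processed packet forwarded through egress port */
} packet_times_ms;

int main(void) {
    const double target_ms = 6.0;                  /* configured target latency            */
    const double fourth_ms = 0.1;                  /* fixed fourth latency (assumed value)  */
    packet_times_ms p = { .t_first = 0.0, .t_second = 1.0 };

    double third_ms = p.t_second - p.t_first;              /* third latency, measured at runtime    */
    double first_ms = target_ms - (third_ms + fourth_ms);  /* first latency: FIFO dwell time        */

    p.t_fifo_in  = p.t_second + fourth_ms / 2;     /* first part of the fourth latency      */
    p.t_fifo_out = p.t_fifo_in + first_ms;         /* packet stays first_ms in the FIFO     */
    p.t_third    = p.t_fifo_out + fourth_ms / 2;   /* second part of the fourth latency     */

    printf("latency through the device: %.1f ms (target %.1f ms)\n",
           p.t_third - p.t_first, target_ms);
    return 0;
}
```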
  • FIG. 3 is a schematic flowchart of a packet processing method according to an embodiment of the present invention. Referring to FIG. 3 , the method includes S301 and S302.
  • the first network device sets a write pointer according to the determined first latency specifically includes:
  • the first network device determines, according to the first latency, a location of the storage unit in the FIFO memory specifically includes:
  • a clock frequency (clock frequency) at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • a clock phase (clock phase) at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • in this way, a mismatch between the rate at which data is written into the FIFO memory during the write operation and the rate at which data is read from the FIFO memory during the read operation can be avoided.
  • such a rate mismatch may cause data loss.
  • S101 may be specifically that the first network device receives, at the first time, the packet that is from an RRU.
  • S101 may be specifically that the first network device receives, at the first time, the packet that is from a BBU.
  • the first network device is a network device between the BBU and the RRU.
  • the first network device is configured to connect the BBU and the RRU.
  • multiple RRUs connect to one BBU by using the first network device.
  • in this way, each RRU does not need to be directly connected to the BBU by using a dedicated optical fiber, which helps reduce the number of optical fibers and reduce costs.
  • a latency generated when the packet passes through the first network device is equal to a target latency.
  • the target latency may be equal to a fixed value.
  • the first network device may be configured to forward a packet that is used to carry a CPRI service, an SDH service, or a PDH service.
  • a first network device determines, according to a target latency set by the first network device, a first latency of a processed packet in a FIFO memory, which enables a latency of the packet in the first network device to be equal to the target latency.
  • Latency variation may also be generated in a process in which multiple packets pass through multiple network devices.
  • latencies generated when the multiple packets separately pass through the multiple network devices may be determined as a same target latency. For details, refer to the following description.
  • FIG. 4 is a schematic flowchart of a packet processing method according to an embodiment of the present invention. The method includes the following steps.
  • a first network device receives a packet that is from a second network device, where the packet carries a first time, and the first time is a time at which the second network device receives the packet.
  • the first network device and the second network device may be a PTN device, an OTN device, a router, or a switch.
  • an intermediate network device may be disposed between the first network device and the second network device. That is, the first network device and the second network device may be indirectly connected.
  • the intermediate network device may be a repeater.
  • an intermediate network device may not be disposed between the first network device and the second network device. That is, the first network device and the second network device may be directly connected. Specifically, the first network device and the second network device may be connected by using only a transmission medium.
  • the transmission medium may be a cable or an optical cable.
  • the first time is a time at which the second network device receives the packet.
  • a service carried by the packet may be a CPRI service, an SDH service, or a PDH service.
  • the second network device may record the first time in a packet header of the packet.
  • the first network device may determine, by reading the packet header of the packet, the first time at which the second network device receives the packet.
  • the second network device may record the first time in the packet header of the packet by using a receiver circuit in the second network device.
  • S401 may be performed by a receiver circuit in the first network device.
  • the receiver circuit may be configured to implement an Ethernet interface.
  • the first network device processes the packet to obtain a processed packet, and writes the processed packet into a buffer memory.
  • the processing performed by the first network device on the packet may be coding, decoding, encryption, or decryption.
  • the processing may be determining, by searching a MAC protocol table, an egress interface for forwarding the packet.
  • the processing may be determining, by searching a routing table, an egress interface for forwarding the packet.
  • the buffer memory is a memory for storing the processed packet.
  • the buffer memory may be a component of the first network device.
  • the buffer memory may be coupled to the receiver circuit.
  • the buffer memory may be a memory located inside the first network device, or may be a memory located outside the first network device.
  • the first network device reads the processed packet from the buffer memory at a second time.
  • a time at which the first network device reads the processed packet from the buffer memory is the second time.
  • S403 may be performed by an instruction execution circuit.
  • the instruction execution circuit may perform the processing on the packet according to an instruction.
  • the instruction execution circuit may be implemented by using a network processor or an application-specific integrated circuit.
  • the first network device determines, at a time after the second time, a first latency of the processed packet in a first in first out memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the first network device through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the first in first out memory includes multiple contiguous storage units.
  • the multiple contiguous storage units in the FIFO memory are configured to store a packet queue, and each storage unit is configured to store one packet or null data.
  • the packet queue includes at least one packet.
  • among the multiple packets, a packet that is written into the FIFO memory at an earlier time is located in front of a packet that is written into the FIFO memory at a later time.
  • target latencies of all of the multiple packets are equal.
  • a value of the target latency is equal to a fixed value.
  • the value of the target latency is statically configured by an engineer on the first network device.
  • a method for configuring the target latency is similar to the method described in S104. For details, refer to the description in step S104, and details are not described herein again.
  • the target latency of the packet includes three parts: the first latency, the third latency, and the fourth latency.
  • the first network device enables, by determining the first latency of the processed packet in the FIFO memory of the first network device, a value of the target latency of the packet to be the value that is statically configured.
  • the first network device may determine, by setting a read pointer and/or a write pointer that are/is of the FIFO memory, the first latency of the packet in the FIFO memory of the first network device.
  • the third latency is equal to a difference between the second time and the first time.
  • the fourth latency is a fixed latency, and may depend on a hardware structure of the first network device.
  • the buffer memory may connect to the FIFO memory by using a transmission medium.
  • the FIFO memory may connect to the egress port by using a transmission medium.
  • the transmission medium between the buffer memory and the FIFO memory is determined. That is, a physical attribute of the transmission medium between the buffer memory and the FIFO memory is determined. Therefore, a time interval for transmitting a signal over the transmission medium between the buffer memory and the FIFO memory is a fixed value.
  • a time interval for transmitting a signal over the transmission medium that connects the FIFO memory and the egress port is also a fixed value.
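  • In this embodiment the first time is recorded by the second network device and carried in the packet, so the third latency also covers transmission from the second network device through the bearer network to the first network device. The sketch below (values are illustrative assumptions; it presumes the two devices are time-synchronized as described later) shows how the first latency could be derived from the carried first time.

```c
/* Hedged sketch: the numeric values are assumptions for illustration. The clocks
 * of the two devices are presumed synchronized (e.g. by PTP or NTP), so the
 * carried first time and the locally measured second time are comparable. */
#include <stdio.h>

int main(void) {
    double target_ms      = 6.0;   /* target latency configured on the first network device        */
    double fourth_ms      = 0.1;   /* fixed fourth latency (assumed value)                          */
    double first_time_ms  = 0.0;   /* first time carried in the packet: when the second network
                                      device received the packet                                    */
    double second_time_ms = 2.5;   /* second time: the first network device reads the processed
                                      packet from its buffer memory                                 */

    double third_ms = second_time_ms - first_time_ms;      /* includes the bearer network and processing */
    double first_ms = target_ms - (third_ms + fourth_ms);  /* dwell time to enforce in the FIFO memory   */

    printf("third latency = %.1f ms, first latency = %.1f ms, total = %.1f ms\n",
           third_ms, first_ms, third_ms + fourth_ms + first_ms);
    return 0;
}
```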
  • the first network device sets a read pointer and/or a write pointer according to the determined first latency.
  • the setting the read pointer may be specifically setting a value of the read pointer.
  • the setting the write pointer may be specifically setting a value of the write pointer.
  • the read pointer of the FIFO memory is configured to perform a read operation on a storage unit in the FIFO memory.
  • the write pointer of the FIFO memory is configured to perform a write operation on a storage unit in the FIFO memory.
  • the first network device may determine, according to the first latency, a storage unit to which the read pointer points, so as to set the value of the read pointer to an address of the storage unit.
  • the first network device may determine, according to the first latency, a storage unit to which the write pointer points, so as to set the value of the write pointer to an address of the storage unit.
  • the first network device may determine, according to the first latency, a storage unit to which the read pointer and the write pointer point, so as to set values of the read pointer and the write pointer to an address of the storage unit.
  • the first network device writes, according to the set write pointer, the processed packet into a storage unit in the first in first out memory, or reads, according to the set read pointer, the processed packet from a storage unit in the first in first out memory.
  • after a read operation is performed, the value of the read pointer is increased by 1.
  • the read pointer whose value is increased by 1 points to the next storage unit from which a packet is to be read.
  • after a write operation is performed, the value of the write pointer is increased by 1.
  • the write pointer whose value is increased by 1 points to the next storage unit into which a packet is to be written.
  • the read operation corresponding to the read pointer and the write operation corresponding to the write pointer may be performed synchronously, or may be performed asynchronously.
  • the first network device performs a write operation on the storage unit according to the set write pointer, so as to write the processed packet into the storage unit.
  • the first network device performs a read operation on the storage unit according to the set read pointer, so as to read the processed packet from the storage unit.
  • the first network device forwards, at the third time through the egress port, the processed packet that is read from the first in first out memory.
  • the FIFO memory performs a read operation to read the processed packet from the storage unit to which the read pointer points.
  • S407 may be performed by a transmitter circuit.
  • Both the transmitter circuit and the FIFO memory are components of the first network device.
  • the transmitter circuit is coupled to the FIFO memory.
  • FIG. 5 is a schematic diagram of a latency generated when the packet in the method shown in FIG. 4 passes through the second network device and the first network device.
  • the second network device 501 receives the packet through an ingress port of the second network device 501.
  • the packet passes through a bearer network 502 between the second network device 501 and the first network device 500, and is received by the ingress port of the first network device 500.
  • the target latency is equal to a period from the first time at which the packet is received by the second network device 501 through the ingress port to a third time at which the processed packet is forwarded by the first network device 500 through an egress port.
  • the target latency includes the first latency, the third latency, and the fourth latency.
  • the third latency is equal to a period from the first time to the second time.
  • the first time is a time at which the second network device 501 receives the packet through the ingress port of the second network device 501.
  • the second time is a time at which the first network device 500 reads the processed packet from the buffer memory.
  • the first network device processes the packet.
  • the first network device may process the packet by using a network processor (not shown in the figure).
  • the second network device or the bearer network 502 may also process the packet.
  • it should be noted that, in FIG. 5, the bearer network 502 is disposed between the second network device 501 and the first network device 500.
  • alternatively, the bearer network 502 may not be disposed between the second network device 501 and the first network device 500.
  • the second network device 501 and the first network device 500 are connected by using only a transmission medium.
  • the first latency is equal to a period from a time at which the processed packet is written into the FIFO memory to a time at which the processed packet is read from the FIFO memory.
  • the fourth latency is a fixed latency.
  • the fourth latency includes a first part and a second part.
  • the first part is a period from a time at which the processed packet is read from the buffer memory to a time at which the processed packet is written into the FIFO memory.
  • the second part is equal to a period from the time at which the processed packet is read from the FIFO memory to a time at which the processed packet is forwarded through the egress port.
  • FIG. 6 is a schematic flowchart of a packet processing method according to an embodiment of the present invention. Referring to FIG. 6 , the method includes S601 and S602.
  • the first network device sets a write pointer according to the first latency specifically includes:
  • the first network device determines, according to the first latency, a location of the storage unit in the FIFO memory specifically includes:
  • the Precision Time Protocol (PTP) or the Network Time Protocol (NTP) may be used to perform time synchronization between the first network device and the second network device.
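  • As background on how such time synchronization works (a standard NTP/PTP-style two-way exchange, sketched here with example values; this is general protocol knowledge rather than text of the patent), the clock offset between the two devices can be estimated from four timestamps so that the first time recorded by the second network device is comparable with times measured by the first network device.

```c
/* Hedged sketch: standard two-way timestamp exchange used by NTP/PTP.
 * Timestamp values are illustrative only. */
#include <stdio.h>

int main(void) {
    /* t1: request sent by the first network device (its own clock)
     * t2: request received by the second network device (second device's clock)
     * t3: response sent by the second network device (second device's clock)
     * t4: response received by the first network device (its own clock) */
    double t1 = 100.0, t2 = 103.0, t3 = 103.5, t4 = 101.5;   /* milliseconds */

    double offset = ((t2 - t1) + (t3 - t4)) / 2.0;   /* estimated clock offset     */
    double delay  = (t4 - t1) - (t3 - t2);           /* estimated round-trip delay */

    printf("offset = %.2f ms, round-trip delay = %.2f ms\n", offset, delay);
    return 0;
}
```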
  • S401 may be specifically that the first network device receives the packet that is from an RRU.
  • S401 may be specifically that the first network device receives the packet that is from a BBU.
  • the first network device is a network device between the BBU and the RRU
  • the second network device is the BBU or the RRU.
  • the first network device is configured to connect the BBU and the RRU.
  • multiple RRUs connect to the BBU by using the first network device.
  • in this way, each RRU does not need to be directly connected to the BBU by using a dedicated optical fiber, which helps reduce the number of optical fibers and reduce costs.
  • a latency generated when the packet passes through the second network device and the first network device is equal to a target latency, where the target latency may be equal to a fixed value.
  • the first network device may perform a similar operation on each packet, that is, the first network device may perform operations of S401 to S407 on each packet. Therefore, when being configured to connect the BBU and the RRU, the first network device may be configured to forward a packet that is used to carry a CPRI service, an SDH service, or a PDH service.
  • the foregoing solution can reduce latency variation.
  • a first network device determines, according to a target latency set by the first network device or a second network device, a first latency of a processed packet in a FIFO memory of the first network device, which enables a latency of the packet between the second network device and the first network device to be equal to a preset target latency, thereby avoiding latency variation that is otherwise caused when the packet is transmitted, stored, forwarded, and switched between the second network device and the first network device.
  • an embodiment of the present invention further provides a packet processing apparatus.
  • FIG. 7 is a schematic structural diagram of a packet processing apparatus according to an embodiment of the present invention.
  • a packet processing apparatus 700 may be configured to perform the method shown in FIG. 1 .
  • the packet processing apparatus 700 may be a PTN device, an OTN device, a router, or a switch.
  • the packet processing apparatus 700 includes: a receiving unit 701, a processing unit 702, a reading unit 703, a first latency determining unit 704, a setting unit 705, and a forwarding unit 706.
  • the receiving unit 701 is configured to receive a packet at a first time;
  • the receiving unit 701 may be configured to perform S101.
  • For a function and specific implementation of the receiving unit 701, reference may be made to the description of S101 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • the processing unit 702 is configured to process the packet received by the receiving unit 701 to obtain a processed packet, and write the processed packet into a buffer memory.
  • the processing unit 702 may be configured to perform S102.
  • For a function and specific implementation of the processing unit 702, reference may be made to the description of S102 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • the reading unit 703 is configured to read, from the buffer memory at a second time, the processed packet obtained by the processing unit 702.
  • the reading unit 703 may be configured to perform S103.
  • For a function and specific implementation of the reading unit 703, reference may be made to the description of S103 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • the first latency determining unit 704 is configured to determine, at a time after the second time, a first latency of the processed packet read by the reading unit 703 in a FIFO memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the forwarding unit 706 through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory includes multiple contiguous storage units.
  • the first latency determining unit 704 may be configured to perform S104.
  • For a function and specific implementation of the first latency determining unit 704, reference may be made to the description of S104 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • the setting unit 705 is configured to: set a read pointer and/or a write pointer according to the first latency determined by the first latency determining unit 704; and write, according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or read, according to the set read pointer, the processed packet from a storage unit in the FIFO memory.
  • the setting unit 705 may be configured to perform S105.
  • For a function and specific implementation of the setting unit 705, reference may be made to the description of S105 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • the forwarding unit 706 is configured to forward, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • the forwarding unit 706 may be configured to perform S106.
  • For a function and specific implementation of the forwarding unit 706, reference may be made to the description of S106 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • the setting unit 705 is specifically configured to:
  • the setting unit 705 is specifically configured to:
  • a clock frequency at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • a clock phase at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • the receiving unit 701 is specifically configured to:
  • the receiving unit 701 is specifically configured to:
  • an embodiment of the present invention further provides a packet processing apparatus.
  • FIG. 8 is a schematic structural diagram of a packet processing apparatus according to an embodiment of the present invention.
  • a packet processing apparatus 800 may be configured to perform the method shown in FIG. 4 .
  • the packet processing apparatus 800 may be a PTN device, an OTN device, a router, or a switch.
  • the packet processing apparatus 800 includes: a receiving unit 801, a processing unit 802, a reading unit 803, a first latency determining unit 804, a setting unit 805, and a forwarding unit 806.
  • the receiving unit 801 is configured to receive a packet that is from a second network device, where the packet carries a first time, and the first time is a time at which the second network device receives the packet.
  • the receiving unit 801 may be configured to perform S401.
  • For a function and specific implementation of the receiving unit 801, reference may be made to the description of S401 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • the processing unit 802 is configured to process the packet received by the receiving unit 801 to obtain a processed packet, and write the processed packet into a buffer memory.
  • the processing unit 802 may be configured to perform S402.
  • For a function and specific implementation of the processing unit 802, reference may be made to the description of S402 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • the reading unit 803 is configured to read, from the buffer memory at a second time, the processed packet obtained by the processing unit 802.
  • the reading unit 803 may be configured to perform S403.
  • For a function and specific implementation of the reading unit 803, reference may be made to the description of S403 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • the first latency determining unit 804 is configured to determine, at a time after the second time, a first latency of the processed packet read by the reading unit 803 in a first in first out (FIFO) memory.
  • the first latency is equal to a difference obtained by subtracting a second latency from a target latency;
  • the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the forwarding unit through an egress port.
  • the second latency is equal to a sum of a third latency and a fourth latency.
  • the third latency is equal to a period from the first time to the second time.
  • the fourth latency is a fixed latency, and the FIFO memory includes multiple contiguous storage units.
  • the first latency determining unit 804 may be configured to perform S404.
  • For a function and specific implementation of the first latency determining unit 804, reference may be made to the description of S404 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • the setting unit 805 is configured to: set a read pointer and/or a write pointer according to the first latency determined by the first latency determining unit 804; and write, according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or read, according to the set read pointer, the processed packet from a storage unit in the FIFO memory.
  • the setting unit 805 may be configured to perform S405.
  • For a function and specific implementation of the setting unit 805, reference may be made to the description of S405 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • the forwarding unit 806 is configured to forward, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • the forwarding unit 806 may be configured to perform S406.
  • For a function and specific implementation of the forwarding unit 806, reference may be made to the description of S406 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • the setting unit 805 is specifically configured to:
  • the setting unit 805 is specifically configured to:
  • a precision clock synchronization protocol or the Network Time Protocol is used to perform time synchronization between the apparatus and the second network device.
  • the receiving unit 801 is specifically configured to:
  • the receiving unit 801 is specifically configured to:
  • an embodiment of the present invention further provides a network device.
  • FIG. 9 is a schematic structural diagram of a network device according to an embodiment of the present invention.
  • a network device 900 may be a PTN device, an OTN device, a router, or a switch.
  • the network device 900 includes: a receiver circuit 901, a buffer memory 902, a FIFO memory 903, an instruction execution circuit 904, a transmitter circuit 905, and an instruction memory 906.
  • the instruction execution circuit 904 is coupled to the instruction memory 906.
  • the instruction memory 906 is configured to store a computer instruction.
  • the instruction execution circuit 904 implements a function by reading the computer instruction. For example, the instruction execution circuit 904 implements processing of a packet.
  • the instruction execution circuit 904 is separately coupled to the receiver circuit 901, the buffer memory 902, the FIFO memory 903, and the transmitter circuit 905. Specifically, the instruction execution circuit 904 may perform a read operation on the receiver circuit 901, so as to acquire data received by the receiver circuit 901. The instruction execution circuit 904 may perform a write operation on the transmitter circuit 905, so as to provide data to the transmitter circuit 905. The instruction execution circuit 904 may perform a read operation and a write operation on the buffer memory 902. The instruction execution circuit 904 may perform a read operation and a write operation on the FIFO memory 903. An output end of the receiver circuit 901 is coupled to an input end of the buffer memory 902. The buffer memory 902 may receive data sent by the receiver circuit 901.
  • An output end of the buffer memory 902 is coupled to an input end of the FIFO memory 903.
  • the FIFO memory 903 may receive data sent by the buffer memory 902.
  • An output end of the FIFO memory 903 is coupled to an input end of the transmitter circuit 905.
  • the transmitter circuit 905 may receive data sent by the FIFO memory 903.
  • the network device 900 may be configured to perform the method shown in FIG. 1 .
  • the receiver circuit 901 may be configured to perform S101.
  • the instruction execution circuit 904 may perform S102 by accessing a computer program in the instruction memory 906, and read a processed packet by accessing the buffer memory 902.
  • the instruction execution circuit 904 may perform S103 by accessing the computer program in the instruction memory 906.
  • the instruction execution circuit 904 may perform S104 by accessing the computer program in the instruction memory 906.
  • the instruction execution circuit 904 may perform S105 by accessing the computer program in the instruction memory 906, and perform a write operation and/or a read operation on the FIFO memory 903 by using a write pointer and/or a read pointer.
  • the instruction execution circuit 904 may perform S106 by accessing the computer program in the instruction memory 906.
  • the transmitter circuit 905 may be configured to perform S107. Specifically, the transmitter circuit 905 may be configured to implement an egress port involved in S107.
  • the network device 900 may be configured to perform the method shown in FIG. 4 .
  • the receiver circuit 901 may be configured to perform S401.
  • the instruction execution circuit 904 may perform S402 by accessing the computer program in the instruction memory 906, and read a processed packet by accessing the buffer memory 902.
  • the instruction execution circuit 904 may perform S403 by accessing the computer program in the instruction memory 906.
  • the instruction execution circuit 904 may perform S404 by accessing the computer program in the instruction memory 906.
  • the instruction execution circuit 904 may perform S405 by accessing the computer program in the instruction memory 906, and perform a write operation and/or a read operation on the FIFO memory 903 by using a write pointer and/or a read pointer.
  • the instruction execution circuit 904 may perform S406 by accessing the computer program in the instruction memory 906.
  • the transmitter circuit 905 may be configured to perform S407. Specifically, the transmitter circuit 905 may be configured to implement an egress port involved in S407.
  • FIG. 10 is a schematic structural diagram of a network device according to an embodiment of the present invention.
  • a network device 1000 may be a PTN device, an OTN device, a router, or a switch.
  • the network device 1000 includes: an ingress port 1001, an egress port 1002, a logic circuit 1003, and a memory 1004.
  • the logic circuit 1003 is coupled to the ingress port 1001, the egress port 1002, and the memory 1004 by using a bus.
  • the memory 1004 stores a computer program.
  • the logic circuit 1003 may implement a function by executing the computer program stored by the memory 1004. For example, the logic circuit 1003 implements processing of a packet.
  • the network device 1000 may be configured to perform the method shown in FIG. 1.
  • the network device 1000 may be configured to implement the first network device involved in the method shown in FIG. 1.
  • the ingress port 1001 may be configured to perform S101.
  • the logic circuit 1003 may perform S102 by accessing the computer program in the memory 1004.
  • the memory 1004 may be configured to implement the buffer memory involved in S102.
  • the logic circuit 1003 may perform S103 by accessing the computer program in the memory 1004.
  • the logic circuit 1003 may perform S104 by accessing the computer program in the memory 1004.
  • the memory 1004 may be configured to implement the FIFO memory involved in S104.
  • the logic circuit 1003 may perform S105 by accessing the computer program in the memory 1004.
  • the logic circuit 1003 may perform S106 by accessing the computer program in the memory 1004.
  • the egress port 1002 may be configured to perform S107. Specifically, the egress port 1002 may be configured to implement an egress port involved in S107.
  • the network device 1000 may be configured to perform the method shown in FIG. 4.
  • the network device 1000 may be configured to implement the first network device involved in the method shown in FIG. 4.
  • the ingress port 1001 may be configured to perform S401.
  • the logic circuit 1003 may perform S402 by accessing the computer program in the memory 1004.
  • the memory 1004 may be configured to implement the buffer memory involved in S402.
  • the logic circuit 1003 may perform S403 by accessing the computer program in the memory 1004.
  • the logic circuit 1003 may perform S404 by accessing the computer program in the memory 1004.
  • the memory 1004 may be configured to implement the FIFO memory involved in S404.
  • the logic circuit 1003 may perform S405 by accessing the computer program in the memory 1004.
  • the logic circuit 1003 may perform S406 by accessing the computer program in the memory 1004.
  • the egress port 1002 may be configured to perform S407. Specifically, the egress port 1002 may be configured to implement an egress port involved in S407.
  • the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may use a form of hardware only embodiments, software only embodiments, or embodiments with a combination of software and hardware. Moreover, the present invention may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code.
  • These computer program instructions may be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of any other programmable data processing device to generate a machine, so that the instructions executed by a computer or a processor of any other programmable data processing device generate an apparatus for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions may also be stored in a computer readable memory that can instruct the computer or any other programmable data processing device to work in a specific manner, so that the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus.
  • the instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or the another programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the another programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.

Abstract

Embodiments of the present invention provide a packet processing method and apparatus. After receiving a packet, a first network device processes the packet, and determines a first latency of the processed packet in a FIFO memory, where: the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the second latency includes a third latency, and the third latency includes a time interval for processing the packet. That is, the time interval for processing the packet is taken into consideration in determining the first latency. In addition, the determining of the first latency enables a latency generated when the packet passes through the first network device to be equal to the target latency. Therefore, in the foregoing technical solutions, a latency generated when a packet passes through a network device can be made equal to a specified value.

Description

    TECHNICAL FIELD
  • The present invention relates to the field of communications technologies, and in particular, to a packet processing method and apparatus.
  • BACKGROUND
  • A packet may need to pass through a forwarding device when being transmitted in a network. A latency may be generated when the packet passes through the forwarding device. A latency of the packet in a transmission path may include the latency generated when the packet passes through the forwarding device. Latencies generated when different packets pass through the forwarding device may be unequal. Therefore, latencies of the different packets in a transmission path may be unequal.
  • The foregoing case may be caused because processing performed by the forwarding device on the different packets is different. For example, when the forwarding device performs table lookup operations according to the different packets, time intervals needed for the table lookup operations corresponding to the different packets may be unequal.
  • A phenomenon that latencies of different packets in a transmission path are unequal may be referred to as latency variation. Latency variation is unacceptable for some services. For example, a CPRI (Common Public Radio Interface, common public radio interface) service, an SDH (Synchronous Digital Hierarchy, synchronous digital hierarchy) service, or a PDH (Plesiochronous Digital Hierarchy, plesiochronous digital hierarchy) service that is transmitted between a BBU (Baseband Unit, baseband unit) and an RRU (Remote Radio Unit, remote radio unit) imposes a strict requirement on a latency of a packet.
  • In the prior art, a latency generated when a packet passes through a forwarding device cannot be made equal to a specified value.
  • SUMMARY
  • According to a packet processing method and apparatus provided in embodiments, a latency generated when a packet passes through a forwarding device can be made equal to a specified value.
  • According to a first aspect, a packet processing method is provided, where the method includes:
    • receiving, by a first network device, a packet at a first time;
    • processing, by the first network device, the packet to obtain a processed packet, and writing the processed packet into a buffer memory;
    • reading, by the first network device, the processed packet from the buffer memory at a second time;
    • determining, by the first network device at a time after the second time, a first latency of the processed packet in a first in first out FIFO memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the first network device through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory includes multiple contiguous storage units;
    • setting, by the first network device, a read pointer and/or a write pointer according to the determined first latency;
    • writing, by the first network device according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or reading, according to the set read pointer, the processed packet from a storage unit in the FIFO memory; and
    • forwarding, by the first network device at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • With reference to the first aspect, in a first possible implementation manner of the first aspect, the setting, by the first network device, a write pointer according to the determined first latency specifically includes:
    • determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory; and
    • setting, by the first network device, the write pointer according to the determined location of the storage unit, where the set write pointer points to the storage unit.
  • With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, the determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory specifically includes:
    • determining, by the first network device, the location of the storage unit in the FIFO memory according to the following formula: P_add = ┌T1 / Tread┐ - 1,
    • where P_add indicates a quantity of storage units between a first storage unit and a second storage unit, where the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T 1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ┌●┐ indicates round-up.
  • With reference to the first aspect or either of the first to second possible implementation manners of the first aspect, in a third possible implementation manner of the first aspect, a clock frequency at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous; and
    a clock phase at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • With reference to the first aspect or the first to third possible implementation manners of the first aspect, in a fourth possible implementation manner of the first aspect, the receiving, by a first network device, a packet at a first time includes:
    • receiving, by the first network device at the first time, the packet that is from a remote radio unit RRU; or
    • receiving, by the first network device at the first time, the packet that is from a baseband unit BBU.
  • According to a second aspect, a packet processing method is provided, where the method includes:
    • receiving, by a first network device, a packet that is from a second network device, where the packet carries a first time, and the first time is a time at which the second network device receives the packet;
    • processing, by the first network device, the packet to obtain a processed packet, and writing the processed packet into a buffer memory;
    • reading, by the first network device, the processed packet from the buffer memory at a second time;
    • determining, by the first network device at a time after the second time, a first latency of the processed packet in a first in first out FIFO memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the first network device through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory includes multiple contiguous storage units;
    • setting, by the first network device, a read pointer and/or a write pointer according to the determined first latency;
    • writing, by the first network device according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or reading, according to the set read pointer, the processed packet from a storage unit in the FIFO memory; and
    • forwarding, by the first network device at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • With reference to the second aspect, in a first possible implementation manner of the second aspect, the setting, by the first network device, a write pointer according to the first latency specifically includes:
    • determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory; and
    • setting, by the first network device according to the determined location of the storage unit, the write pointer to point to the storage unit.
  • With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner of the second aspect, the determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory specifically includes:
    • determining, by the first network device, the location of the storage unit in the FIFO memory according to the following formula: P_add = ┌T1 / Tread┐ - 1,
    • where P_add indicates a quantity of storage units between a first storage unit and a second storage unit, where the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T 1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ┌●┐ indicates round-up.
  • With reference to the second aspect or either of the first to second possible implementation manners of the second aspect, in a third possible implementation manner of the second aspect, a precision clock synchronization protocol or the Network Time Protocol is used to perform time synchronization between the first network device and the second network device.
  • With reference to the second aspect or any one of the first to third possible implementation manners of the second aspect, in a fourth possible implementation manner of the second aspect, the receiving, by a first network device, a packet includes:
    • receiving, by the first network device, the packet that is from a remote radio unit RRU; or
    • receiving, by the first network device, the packet that is from a baseband unit BBU.
  • According to a third aspect, a packet processing apparatus is provided, where the apparatus includes:
    • a receiving unit, configured to receive a packet at a first time;
    • a processing unit, configured to process the packet received by the receiving unit to obtain a processed packet, and write the processed packet into a buffer memory;
    • a reading unit, configured to read, from the buffer memory at a second time, the processed packet obtained by the processing unit;
    • a first latency determining unit, configured to determine, at a time after the second time, a first latency of the processed packet in a first in first out FIFO memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by a forwarding unit through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory includes multiple contiguous storage units;
    • a setting unit, configured to: set a read pointer and/or a write pointer according to the first latency determined by the first latency determining unit; and write, according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or read, according to the set read pointer, the processed packet from a storage unit in the FIFO memory; and
    • the forwarding unit, configured to forward, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • With reference to the third aspect, in a first possible implementation manner of the third aspect, the setting unit is specifically configured to:
    • determine, according to the first latency, a location of the storage unit in the FIFO memory; and
    • set the write pointer according to the determined location of the storage unit, where the set write pointer points to the storage unit.
  • With reference to the first possible implementation manner of the third aspect, in a second possible implementation manner of the third aspect, the setting unit is specifically configured to:
    • determine the location of the storage unit in the FIFO memory according to the following formula: P_add = ┌T1 / Tread┐ - 1,
    • where P_add indicates a quantity of storage units between a first storage unit and a second storage unit, where the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T 1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ┌●┐ indicates round-up.
  • With reference to the third aspect or either of the first to second possible implementation manners of the third aspect, in a third possible implementation manner of the third aspect, a clock frequency at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous; and
    a clock phase at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • With reference to the third aspect or the first to third possible implementation manners of the third aspect, in a fourth possible implementation manner of the third aspect, the receiving unit is specifically configured to:
    • receive, at the first time, the packet that is from a remote radio unit RRU; or
    • receive, at the first time, the packet that is from a baseband unit BBU.
  • According to a fourth aspect, a packet processing apparatus is provided, where the apparatus includes:
    • a receiving unit, configured to receive a packet that is from a second network device, where the packet carries a first time, and the first time is a time at which the second network device receives the packet;
    • a processing unit, configured to process the packet received by the receiving unit to obtain a processed packet, and write the processed packet into a buffer memory;
    • a reading unit, configured to read, from the buffer memory at a second time, the processed packet obtained by the processing unit;
    • a first latency determining unit, configured to determine, at a time after the second time, a first latency of the processed packet read by the reading unit in a first in first out FIFO memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by a forwarding unit through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory includes multiple contiguous storage units;
    • a setting unit, configured to: set a read pointer and/or a write pointer according to the first latency determined by the first latency determining unit; and write, according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or read, according to the set read pointer, the processed packet from a storage unit in the FIFO memory; and
    • the forwarding unit, configured to forward, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • With reference to the fourth aspect, in a first possible implementation manner of the fourth aspect, the setting unit is specifically configured to:
    • determine, according to the first latency, a location of the storage unit in the FIFO memory; and
    • set, according to the determined location of the storage unit, the write pointer to point to the storage unit.
  • With reference to the first possible implementation manner of the fourth aspect, in a second possible implementation manner of the fourth aspect, the setting unit is specifically configured to:
    • determine the location of the storage unit in the FIFO memory according to the following formula: P_add = ┌T1 / Tread┐ - 1,
    • where P_add indicates a quantity of storage units between a first storage unit and a second storage unit, where the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T 1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ┌●┐ indicates round-up.
  • With reference to the fourth aspect or either of the first to second possible implementation manners of the fourth aspect, in a third possible implementation manner of the fourth aspect, a precision clock synchronization protocol or the Network Time Protocol is used to perform time synchronization between the apparatus and the second network device.
  • With reference to the fourth aspect or the first to third possible implementation manners of the fourth aspect, in a fourth possible implementation manner of the fourth aspect, the receiving unit is specifically configured to:
    • receive the packet that is from a remote radio unit RRU; or
    • receive the packet that is from a baseband unit BBU.
  • According to a method and an apparatus that are provided in the embodiments, after receiving a packet, a first network device processes the packet and determines a first latency of the processed packet in a FIFO memory, where: the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the second latency includes a third latency, and the third latency includes a time interval for processing the packet. That is, the time interval for processing the packet is taken into consideration in determining the first latency. In addition, the determining of the first latency enables a latency generated when the packet passes through the first network device to be equal to the target latency. Therefore, in the foregoing technical solutions, a latency generated when a packet passes through a network device can be made equal to a specified value.
  • BRIEF DESCRIPTION OF DRAWINGS
    • FIG. 1 is a schematic flowchart of a packet processing method according to an embodiment of the present invention;
    • FIG. 2 is a schematic diagram of a latency generated when a packet passes through a first network device according to an embodiment of the present invention;
    • FIG. 3 is a schematic flowchart of a packet processing method according to an embodiment of the present invention;
    • FIG. 4 is a schematic flowchart of a packet processing method according to an embodiment of the present invention;
    • FIG. 5 is a schematic diagram of a latency generated when a packet passes through a second network device and a first network device according to an embodiment of the present invention;
    • FIG. 6 is a schematic flowchart of a packet processing method according to an embodiment of the present invention;
    • FIG. 7 is a schematic structural diagram of a packet processing apparatus according to an embodiment of the present invention;
    • FIG. 8 is a schematic structural diagram of a packet processing apparatus according to an embodiment of the present invention;
    • FIG. 9 is a schematic structural diagram of a network device according to an embodiment of the present invention; and
    • FIG. 10 is a schematic structural diagram of a network device according to an embodiment of the present invention.
    DESCRIPTION OF EMBODIMENTS
  • The following describes the embodiments in detail with reference to the accompanying drawings of this specification.
  • FIG. 1 is a schematic flowchart of a packet processing method according to an embodiment of the present invention. As shown in FIG. 1, the method includes the following steps.
  • S101. A first network device receives a packet at a first time.
  • For example, the first network device may be a PTN (Packet Transport Network, packet transport network) device, an OTN (Optical Transport Network, optical transport network) device, a router, or a switch.
  • The first time in this embodiment of the present invention is a time at which the first network device receives the packet.
  • For example, a service carried by the packet may be a CPRI service, an SDH service, or a PDH service.
  • For example, when receiving the packet at the first time, the first network device may record the first time at which the packet is received.
  • For example, when receiving the packet at the first time, the first network device may record the first time in a packet header of the packet. The first network device may determine, by reading the packet header of the packet, the first time at which the packet is received.
  • For example, when receiving the packet at the first time, the first network device may also record the first time in a storage medium of the first network device. The first network device may determine, by reading the first time from the storage medium, a time at which the packet is received.
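  • As an illustration of the two recording options above, the following minimal C sketch shows how a receive timestamp might be attached either to the packet's metadata or to a separate table in a storage medium. All structure, field, and function names are hypothetical and are not taken from the embodiments.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical packet descriptor used by the forwarding pipeline.
 * The field names are illustrative only. */
struct pkt_desc {
    uint64_t rx_time_ns;   /* first time: when the ingress port received the packet */
    uint32_t length;
    uint8_t  data[1500];
};

/* Option 1: record the first time in the packet descriptor (packet header). */
static void record_rx_time_in_header(struct pkt_desc *p, uint64_t now_ns) {
    p->rx_time_ns = now_ns;
}

/* Option 2: record the first time in a separate storage medium
 * (here a small table indexed by an internal packet id). */
static uint64_t rx_time_table[1024];
static void record_rx_time_in_table(uint32_t pkt_id, uint64_t now_ns) {
    rx_time_table[pkt_id % 1024] = now_ns;
}

int main(void) {
    struct pkt_desc p = { .length = 64 };
    record_rx_time_in_header(&p, 1000000ULL);
    record_rx_time_in_table(7, 1000000ULL);
    printf("first time = %llu ns\n", (unsigned long long)p.rx_time_ns);
    return 0;
}
```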
  • For example, S101 may be performed by a receiver circuit in the first network device. The receiver circuit may be configured to implement an Ethernet interface.
  • S102. The first network device processes the packet to obtain a processed packet, and writes the processed packet into a buffer memory.
  • For example, the processing performed by the first network device on the packet may be coding, decoding, encryption, or decryption. When the packet is an Ethernet frame (ethernet frame), the processing may be determining, by searching a Media Access Control (media access control, MAC) protocol table, an egress interface for forwarding the packet. When the packet is an Internet Protocol (internet protocol, IP) packet, the processing may be determining, by searching a routing table, an egress interface for forwarding the packet.
  • The buffer memory is a memory for storing the processed packet. The buffer memory may be a component of the first network device, and is a memory located inside the first network device. Alternatively, the buffer memory may be a memory located outside the first network device. The buffer memory may be coupled to the receiver circuit.
  • For example, S102 may be performed by an instruction execution circuit in the first network device. The instruction execution circuit may perform the processing on the packet according to an instruction. The instruction execution circuit may be implemented by using a network processor (network processor, NP) or an application-specific integrated circuit (application-specific integrated circuit, ASIC).
  • S103. The first network device reads the processed packet from the buffer memory at a second time.
  • For example, S103 may be performed by the instruction execution circuit in the first network device.
  • S104. The first network device determines, at a time after the second time, a first latency of the processed packet in a first in first out memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the first network device through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the first in first out memory includes multiple contiguous storage units.
  • For example, the multiple contiguous storage units in the FIFO (First In First Out, first in first out) memory may be configured to store a packet queue. Each storage unit is configured to store one packet or null data (null data). The packet queue includes at least one packet. When the packet queue includes multiple packets, in the packet queue, a location of a packet that is among the multiple packets and that is written by the FIFO memory at an earlier time is in front of a location of a packet that is among the multiple packets and that is written by the FIFO memory at a later time.
  • For example, to prevent latency variation from being generated when the multiple packets pass through the first network device, the target latencies of all of the multiple packets in the first network device are equal.
  • For example, a value of the target latency may be statically configured by an engineer by using the first network device. The target latency may be equal to a fixed value. For example, the engineer configures the target latency for the first network device by using Telnet. The engineer may determine the target latency for the first network device by means of an experiment. If the first network device does not enable the functions corresponding to S104 and S106, the following operations are performed on a packet, used for the experiment, that passes through the first network device: receiving, through an ingress port, the packet used for the experiment; processing, by the first network device, the packet used for the experiment, thereby obtaining a processed packet used for the experiment; writing the processed packet used for the experiment into the buffer memory; reading the processed packet used for the experiment from the buffer memory; writing the processed packet used for the experiment into the FIFO memory; reading the processed packet used for the experiment from the FIFO memory; and forwarding the processed packet used for the experiment through an egress port. It should be noted that, in the foregoing operations, a step of determining the first latency is not performed. Further, a write pointer used for writing the processed packet used for the experiment into the FIFO memory is not set according to the first latency. A read pointer used for reading the processed packet used for the experiment from the FIFO memory is not set according to the first latency, either. The first network device may be capable of processing multiple services. The multiple services correspond to multiple packets. The packet used for the experiment may include the multiple packets. For example, the first network device can process a service 1, a service 2, and a service 3. The multiple packets are a packet 1, a packet 2, and a packet 3. The packet 1, the packet 2, and the packet 3 correspond to the service 1, the service 2, and the service 3, respectively. Latencies generated when the packet 1, the packet 2, and the packet 3 pass through the first network device are 3 ms, 4 ms, and 5 ms, respectively. A difference between latencies corresponding to different packets is caused because time intervals occupied by the first network device for processing packets of different services are different. For example, time intervals occupied by the first network device for processing the packet 1, the packet 2, and the packet 3 are 0.5 ms, 1 ms, and 2 ms, respectively. Specifically, a period from a time at which the first network device receives the packet 1 through the ingress port to a time at which the first network device writes the processed packet 1 into the buffer memory is 0.5 ms. A period from a time at which the first network device receives the packet 2 through the ingress port to a time at which the first network device writes the processed packet 2 into the buffer memory is 1 ms. A period from a time at which the first network device receives the packet 3 through the ingress port to a time at which the first network device writes the processed packet 3 into the buffer memory is 2 ms.
  • According to the foregoing experiment, the engineer may determine the target latency as the maximum value of the latency generated when the packet used for the experiment passes through the first network device, that is, 5 ms. Certainly, the engineer may also determine the target latency as a value greater than that maximum value. For example, the target latency is set to 6 ms or 7 ms. In this way, after enabling the functions corresponding to S104 and S106 and receiving the different packets, the first network device may, by controlling the time intervals of the different packets in the FIFO memory (that is, the first latency), ensure that all latencies generated when the different packets pass through the first network device are equal to the target latency. For example, all latencies generated when the different packets pass through the first network device are equal to 6 ms.
  • For example, the target latency of the packet in the first network device includes three parts: the first latency, the third latency, and the fourth latency. By determining the first latency of the processed packet in the FIFO memory, the first network device may make the target latency of the packet in the first network device equal to the value that is statically configured.
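  • The following C sketch illustrates this computation: the first latency to be imposed in the FIFO memory is the statically configured target latency minus the sum of the measured third latency and the fixed fourth latency. The numeric values (a 6 ms target latency, a 1 ms fourth latency, and per-packet third latencies of 0.5 ms, 1 ms, and 2 ms) are hypothetical examples only.

```c
#include <stdio.h>

/* Sketch of S104: the first latency (dwell time in the FIFO memory) is the
 * target latency minus the second latency, where the second latency is the
 * sum of the measured third latency and the fixed fourth latency.
 * All numeric values below are hypothetical. */
int main(void) {
    const double target_ms = 6.0;                /* statically configured target latency */
    const double fourth_ms = 1.0;                /* fixed latency of the hardware path    */
    const double third_ms[] = { 0.5, 1.0, 2.0 }; /* measured per-packet third latency     */

    for (int i = 0; i < 3; i++) {
        double second_ms = third_ms[i] + fourth_ms;
        double first_ms  = target_ms - second_ms;   /* latency to impose in the FIFO     */
        printf("packet %d: first latency = %.1f ms\n", i + 1, first_ms);
    }
    return 0;
}
```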
  • For example, the first network device may determine, by setting the read pointer and/or the write pointer that are/is of the FIFO memory, the first latency of the packet in the FIFO memory.
  • For example, the third latency is equal to a difference between the second time and the first time.
  • For example, the fourth latency is a fixed latency, and may depend on a hardware structure of the first network device. Specifically, the buffer memory may connect to the FIFO memory by using a transmission medium. The FIFO memory may connect to the egress port by using a transmission medium. After the first network device is created, the transmission medium between the buffer memory and the FIFO memory is determined, that is, a physical attribute of the transmission medium between the buffer memory and the FIFO memory is determined. Therefore, a time interval for transmitting a signal over the transmission medium between the buffer memory and the FIFO memory is a fixed value. Likewise, a time interval for transmitting a signal over the transmission medium between the FIFO memory and the egress port is also a fixed value.
  • S105. The first network device sets a read pointer and/or a write pointer according to the determined first latency.
  • For example, the setting the read pointer may be specifically setting a value of the read pointer. The setting the write pointer may be specifically setting a value of the write pointer.
  • For example, the read pointer of the FIFO memory is configured to perform a read operation on a storage unit in the FIFO memory. The write pointer of the FIFO memory is configured to perform a write operation on a storage unit in the FIFO memory.
  • For example, the first network device may determine, according to the first latency, a storage unit to which the read pointer points, so as to set the value of the read pointer to an address of the storage unit. Alternatively, the first network device may determine, according to the first latency, a storage unit to which the write pointer points, so as to set the value of the write pointer to an address of the storage unit. Alternatively, the first network device may determine, according to the first latency, a storage unit to which the read pointer and the write pointer point, so as to set values of the read pointer and the write pointer to an address of the storage unit.
  • S106. The first network device writes, according to the set write pointer, the processed packet into a storage unit in the first in first out memory, or reads, according to the set read pointer, the processed packet from a storage unit in the first in first out memory.
  • For example, after a read operation is performed on the storage unit to which the read pointer of the FIFO memory points, the value of the read pointer is increased by 1. The read pointer whose value is increased by 1 points to a next storage unit from which a packet is to be read.
  • For example, after a write operation is performed on the storage unit to which the write pointer of the FIFO memory points, the value of the write pointer is increased by 1. The write pointer whose value is increased by 1 points to a next storage unit into which a packet is to be written.
  • For example, the read operation corresponding to the read pointer and the write operation corresponding to the write pointer may be performed synchronously, or may be performed asynchronously.
  • For example, the first network device performs a write operation on the storage unit according to the set write pointer, so as to write the processed packet into the storage unit. The first network device performs a read operation on the storage unit according to the set read pointer, so as to read the processed packet from the storage unit.
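  • The following C sketch outlines one way the read pointer and write pointer described above could interact: the write pointer is offset ahead of the read position so that the packet is not read until a chosen number of read clock cycles has elapsed. The FIFO depth, the offset, and all names are hypothetical; for simplicity the packet queue is assumed to be empty, so the offset is taken from the read pointer rather than from the tail of the queue.

```c
#include <stdio.h>

#define FIFO_SLOTS 16

/* Minimal ring-buffer sketch of S105/S106 (names and sizes hypothetical):
 * each storage unit holds one packet or null data, the read pointer advances
 * by one unit per read clock cycle, and the write pointer is offset so that
 * the packet dwells in the FIFO for roughly the determined first latency. */
static const char *fifo[FIFO_SLOTS];   /* NULL means the unit holds null data */

int main(void) {
    unsigned rd = 0;                          /* read pointer                             */
    unsigned p_add = 3;                       /* offset determined from the first latency */
    unsigned wr = (rd + p_add) % FIFO_SLOTS;  /* write pointer set according to S105      */

    fifo[wr] = "processed packet";            /* S106: write into the selected storage unit */

    /* Each loop iteration models one read clock cycle. */
    for (unsigned cycle = 0; cycle < FIFO_SLOTS; cycle++) {
        const char *out = fifo[rd];
        if (out != NULL) {
            printf("cycle %u: read \"%s\"\n", cycle, out);
            fifo[rd] = NULL;
        }
        rd = (rd + 1) % FIFO_SLOTS;           /* read pointer increased by 1 after the read */
    }
    return 0;
}
```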
  • S107. The first network device forwards, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • For example, the FIFO memory performs a read operation to read the processed packet from the storage unit to which the read pointer points.
  • For example, S107 may be performed by a transmitter circuit in the first network device, and the FIFO memory is a component of the first network device. The transmitter circuit is coupled to the FIFO memory.
  • FIG. 2 is a schematic diagram of the latency generated when the packet in the method shown in FIG. 1 passes through the first network device, according to an embodiment. Referring to FIG. 2, the packet enters the first network device through the ingress port at the first time. The packet leaves the first network device through the egress port at the third time. The latency generated when the packet passes through the first network device is equal to the target latency. The target latency is a period from the first time to the third time. The target latency includes the first latency, the third latency, and the fourth latency.
  • The third latency is equal to a period from the first time to the second time. The first time is a time at which the first network device receives the packet through the ingress port. The second time is a time at which the first network device reads the processed packet from the buffer memory. During a period from a time at which the packet is received through the ingress port to a time at which the packet enters the buffer memory, the first network device processes the packet. For example, the first network device may process the packet by using a network processor (not shown in the figure).
  • The first latency is equal to a period from a time at which the processed packet is written into the FIFO memory to a time at which the processed packet is read from the FIFO memory.
  • The fourth latency is a fixed latency. The fourth latency includes a first part and a second part. The first part is a period from a time at which the processed packet is read from the buffer memory to a time at which the processed packet is written into the FIFO memory. The second part is equal to a period from the time at which the processed packet is read from the FIFO memory to a time at which the processed packet is forwarded through the egress port.
  • FIG. 3 is a schematic flowchart of a packet processing method according to an embodiment of the present invention. Referring to FIG. 3, the method includes S301 and S302.
  • Optionally, in the method shown in FIG. 1, that the first network device sets a write pointer according to the determined first latency specifically includes:
    • S301. The first network device determines, according to the first latency, a location of the storage unit in the FIFO memory.
    • S302. The first network device sets the write pointer according to the determined location of the storage unit, where the set write pointer points to the storage unit.
  • For S301 and S302, refer to FIG. 3 for details.
  • Optionally, in the method shown in FIG. 3, that the first network device determines, according to the first latency, a location of the storage unit in the FIFO memory specifically includes:
    • determining, by the first network device, the location of the storage unit in the FIFO memory according to the following formula: P_add = ┌T1 / Tread┐ - 1,
    • where P_add indicates a quantity of storage units between a first storage unit and a second storage unit, where the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T 1 indicates the first latency; Tread indicates a clock cycle (clock cycle) in which the write pointer performs a write operation on the FIFO memory; and ┌●┐ indicates round-up.
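  • A minimal C sketch of this formula is shown below: the round-up division of the first latency by the write clock cycle is implemented with integer arithmetic. The numeric values (a first latency of 4500 ns and a write clock cycle of 100 ns) are hypothetical.

```c
#include <stdio.h>
#include <stdint.h>

/* Sketch of the formula P_add = ceil(T1 / Tread) - 1 (values hypothetical).
 * T1 is the first latency and Tread is the write clock cycle, both in ns. */
static uint64_t p_add(uint64_t t1_ns, uint64_t tread_ns) {
    uint64_t cells = (t1_ns + tread_ns - 1) / tread_ns;  /* round-up division */
    return cells > 0 ? cells - 1 : 0;
}

int main(void) {
    /* Example: a first latency of 4.5 us with a 100 ns write clock cycle
     * gives ceil(4500 / 100) - 1 = 44 storage units between the packet
     * and the tail of the packet queue. */
    printf("P_add = %llu\n", (unsigned long long)p_add(4500, 100));
    return 0;
}
```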
  • Optionally, in the method shown in FIG. 1, a clock frequency (clock frequency) at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • A clock phase (clock phase) at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • In the foregoing solution, a mismatch between the rate at which data is written into the FIFO memory and the rate at which data is read from the FIFO memory can be avoided. Such a mismatch may cause a data loss.
  • Optionally, in the method shown in FIG. 1, S101 may be specifically that the first network device receives, at the first time, the packet that is from an RRU.
  • Optionally, in the method shown in FIG. 1, S101 may be specifically that the first network device receives, at the first time, the packet that is from a BBU.
  • For example, the first network device is a network device between the BBU and the RRU. The first network device is configured to connect the BBU and the RRU. Alternatively, multiple RRUs connect to one BBU by using the first network device. In the foregoing solution, the need to connect each RRU directly to the BBU by using a dedicated optical fiber is avoided, which helps reduce the number of optical fibers and reduce costs. In addition, a latency generated when the packet passes through the first network device is equal to a target latency. The target latency may be equal to a fixed value. When multiple packets pass through the first network device, the first network device may perform a similar operation on each packet, that is, the first network device may perform operations of S101 to S107 on each packet. Therefore, latencies generated when all packets pass through the first network device may be equal to the target latency. Therefore, when being configured to connect the BBU and the RRU, the first network device may be configured to forward a packet that is used to carry a CPRI service, an SDH service, or a PDH service. The foregoing solution can reduce latency variation.
  • According to the method described above, after receiving a packet, a first network device determines, according to a target latency set by the first network device, a first latency of a processed packet in a FIFO memory, which enables a latency of the packet in the first network device to be equal to the target latency.
  • Latency variation may also be generated in a process in which multiple packets pass through multiple network devices. To avoid latency variation from being generated in the process in which the multiple packets pass through the multiple network devices, latencies generated when the multiple packets separately pass through the multiple network devices may be determined as a same target latency. For details, refer to the following description.
  • FIG. 4 is a schematic flowchart of a packet processing method according to an embodiment of the present invention. The method includes the following steps.
  • S401. A first network device receives a packet that is from a second network device, where the packet carries a first time, and the first time is a time at which the second network device receives the packet.
  • For example, the first network device and the second network device may be a PTN device, an OTN device, a router, or a switch.
  • For example, an intermediate network device may be disposed between the first network device and the second network device. That is, the first network device and the second network device may be indirectly connected. The intermediate network device may be a repeater.
  • For example, an intermediate network device may not be disposed between the first network device and the second network device. That is, the first network device and the second network device may be directly connected. Specifically, the first network device and the second network device may be connected by using only a transmission medium. The transmission medium may be a cable or an optical cable.
  • In this embodiment of the present invention, the first time is a time at which the second network device receives the packet.
  • For example, a service carried by the packet may be a CPRI service, an SDH service, or a PDH service.
  • For example, after receiving the packet, the second network device may record the first time in a packet header of the packet. The first network device may determine, by reading the packet header of the packet, the first time at which the second network device receives the packet.
  • For example, the second network device may record the first time in the packet header of the packet by using a receiver circuit in the second network device.
  • For example, S401 may be performed by a receiver circuit in the first network device. The receiver circuit may be configured to implement an Ethernet interface.
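  • As a sketch of the recording and reading of the first time described above, the following C fragment shows how the second network device might stamp its receive time into a packet header field and how the first network device might read that field back (and, as described later in S404, use it together with the second time to obtain the third latency). The header layout, field names, and values are hypothetical; byte-order conversion is omitted for brevity.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical header field carrying the first time (the time at which the
 * second network device received the packet). All names are illustrative. */
struct ts_header {
    uint64_t first_time_ns;
};

/* Second network device: record the first time in the packet header. */
static void stamp_first_time(uint8_t *pkt, uint64_t now_ns) {
    struct ts_header h = { .first_time_ns = now_ns };
    memcpy(pkt, &h, sizeof h);
}

/* First network device: read the first time from the packet header and
 * derive the third latency as (second time - first time). */
static uint64_t third_latency_ns(const uint8_t *pkt, uint64_t second_time_ns) {
    struct ts_header h;
    memcpy(&h, pkt, sizeof h);
    return second_time_ns - h.first_time_ns;
}

int main(void) {
    uint8_t pkt[64] = { 0 };
    stamp_first_time(pkt, 1000000ULL);                              /* first time  */
    printf("third latency = %llu ns\n",
           (unsigned long long)third_latency_ns(pkt, 3000000ULL));  /* second time */
    return 0;
}
```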
  • S402. The first network device processes the packet to obtain a processed packet, and writes the processed packet into a buffer memory.
  • For example, the processing performed by the first network device on the packet may be coding, decoding, encryption, or decryption.
  • When the packet is an Ethernet frame, the processing may be determining, by searching a MAC protocol table, an egress interface for forwarding the packet. When the packet is an IP packet, the processing may be determining, by searching a routing table, an egress interface for forwarding the packet.
  • The buffer memory is a memory for storing the processed packet. The buffer memory may be a component of the first network device. The buffer memory may be coupled to the receiver circuit.
  • For example, the buffer memory may be a memory located inside the first network device, or may be a memory located outside the first network device.
  • S403. The first network device reads the processed packet from the buffer memory at a second time.
  • In S403, a time at which the first network device reads the processed packet from the buffer memory is the second time.
  • For example, S403 may be performed by an instruction execution circuit. The instruction execution circuit may perform the processing on the packet according to an instruction. The instruction execution circuit may be implemented by using a network processor or an application-specific integrated circuit.
  • S404. The first network device determines, at a time after the second time, a first latency of the processed packet in a first in first out memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the first network device through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the first in first out memory includes multiple contiguous storage units.
  • For example, the multiple contiguous storage units in the FIFO memory are configured to store a packet queue, and each storage unit is configured to store one packet or null data. The packet queue includes at least one packet. When the packet queue includes multiple packets, in the packet queue, a location of a packet that is among the multiple packets and that is written by the FIFO memory at an earlier time is in front of a location of a packet that is among the multiple packets and that is written by the FIFO memory at a later time.
  • For example, to prevent latency variation from being generated when the multiple packets pass through the second network device and the first network device, the target latencies of all of the multiple packets are equal.
  • For example, a value of the target latency is equal to a fixed value. The value of the target latency is statically configured by an engineer by using the first network device. A method for configuring the target latency is similar to the method described in S104. For details, refer to the description in step S104, and details are not described herein again.
  • For example, the target latency of the packet includes three parts: the first latency, the third latency, and the fourth latency. By determining the first latency of the processed packet in the FIFO memory of the first network device, the first network device makes the target latency of the packet equal to the value that is statically configured.
  • For example, the first network device may determine, by setting a read pointer and/or a write pointer that are/is of the FIFO memory, the first latency of the packet in the FIFO memory of the first network device.
  • For example, the third latency is equal to a difference between the second time and the first time.
  • For example, the fourth latency is a fixed latency, and may depend on a hardware structure of the first network device. Specifically, the buffer memory may connect to the FIFO memory by using a transmission medium. The FIFO memory may connect to the egress port by using a transmission medium. After the first network device is created, the transmission medium between the buffer memory and the FIFO memory is determined. That is, a physical attribute of the transmission medium between the buffer memory and the FIFO memory is determined. Therefore, a time interval for transmitting a signal over the transmission medium between the buffer memory and the FIFO memory is a fixed value. Likewise, a time interval for transmitting a signal over the transmission medium that connects the FIFO memory and the egress port is also a fixed value.
  • S405. The first network device sets a read pointer and/or a write pointer according to the determined first latency.
  • For example, the setting the read pointer may be specifically setting a value of the read pointer. The setting the write pointer may be specifically setting a value of the write pointer.
  • For example, the read pointer of the FIFO memory is configured to perform a read operation on a storage unit in the FIFO memory. The write pointer of the FIFO memory is configured to perform a write operation on a storage unit in the FIFO memory.
  • For example, the first network device may determine, according to the first latency, a storage unit to which the read pointer points, so as to set the value of the read pointer to an address of the storage unit. Alternatively, the first network device may determine, according to the first latency, a storage unit to which the write pointer points, so as to set the value of the write pointer to an address of the storage unit. Alternatively, the first network device may determine, according to the first latency, a storage unit to which the read pointer and the write pointer point, so as to set values of the read pointer and the write pointer to an address of the storage unit.
  • S406. The first network device writes, according to the set write pointer, the processed packet into a storage unit in the first in first out memory, or reads, according to the set read pointer, the processed packet from a storage unit in the first in first out memory.
  • For example, after a read operation is performed on the storage unit to which the read pointer of the FIFO memory points, the value of the read pointer is increased by 1. The read pointer whose value is increased by 1 points to a next storage unit from which a packet is to be read.
  • For example, after a write operation is performed on the storage unit to which the write pointer of the FIFO memory points, the value of the write pointer is increased by 1. The write pointer whose value is increased by 1 points to a next storage unit into which a packet is to be written.
  • For example, the read operation corresponding to the read pointer and the write operation corresponding to the write pointer may be performed synchronously, or may be performed asynchronously.
  • For example, the first network device performs a write operation on the storage unit according to the set write pointer, so as to write the processed packet into the storage unit. The first network device performs a read operation on the storage unit according to the set read pointer, so as to read the processed packet from the storage unit.
  • S407. The first network device forwards, at the third time through the egress port, the processed packet that is read from the first in first out memory.
  • For example, the FIFO memory performs a read operation to read the processed packet from the storage unit to which the read pointer points.
  • For example, S407 may be performed by a transmitter circuit. Both the transmitter circuit and the FIFO memory are components of the first network device. The transmitter circuit is coupled to the FIFO memory.
  • According to an embodiment, FIG. 5 is a schematic diagram of a latency generated when the packet in the method shown in FIG. 4 passes through the second network device and the first network device. Referring to FIG. 5, the second network device 501 receives the packet through an ingress port of the second network device 501. After being forwarded by the second network device 501, the packet passes through a bearer network 502 between the second network device 501 and the first network device 500, and is received by the ingress port of the first network device 500. The target latency is equal to a period from the first time at which the packet is received by the second network device 501 through the ingress port to a third time at which the processed packet is forwarded by the first network device 500 through an egress port. The target latency includes the first latency, the third latency, and the fourth latency.
  • The third latency is equal to a period from the first time to the second time. The first time is a time at which the second network device 501 receives the packet through the ingress port of the second network device 501. The second time is a time at which the first network device 500 reads the processed packet from the buffer memory. During a period from a time at which the ingress port of the first network device 500 receives the packet to a time at which the packet enters the buffer memory, the first network device 500 processes the packet. For example, the first network device 500 may process the packet by using a network processor (not shown in the figure). In addition, during a period from the first time to the time at which the packet enters the buffer memory, the second network device 501 or the bearer network 502 may also process the packet. It should be noted that, in FIG. 5, the bearer network 502 is disposed between the second network device 501 and the first network device 500. In specific implementation, the bearer network 502 may not be disposed between the second network device 501 and the first network device 500. In that case, the second network device 501 and the first network device 500 are connected by using only a transmission medium.
  • The first latency is equal to a period from a time at which the processed packet is written into the FIFO memory to a time at which the processed packet is read from the FIFO memory.
  • The fourth latency is a fixed latency. The fourth latency includes a first part and a second part. The first part is a period from a time at which the processed packet is read from the buffer memory to a time at which the processed packet is written into the FIFO memory. The second part is equal to a period from the time at which the processed packet is read from the FIFO memory to a time at which the processed packet is forwarded by the egress port.
  • FIG. 6 is a schematic flowchart of a packet processing method according to an embodiment of the present invention. Referring to FIG. 6, the method includes S601 and S602.
  • Optionally, in the method shown in FIG. 4, that the first network device sets a write pointer according to the first latency specifically includes:
    • S601. The first network device determines, according to the first latency, a location of the storage unit in the FIFO memory.
    • S602. The first network device sets, according to the determined location of the storage unit, the write pointer to point to the storage unit.
  • For S601 and S602, refer to FIG. 6 for details.
  • Optionally, in the method shown in FIG. 6, that the first network device determines, according to the first latency, a location of the storage unit in the FIFO memory specifically includes:
    • determining, by the first network device, the location of the storage unit in the FIFO memory according to the following formula: P_add = ⌈T1 / Tread⌉ - 1,
    • where P_add indicates a quantity of storage units between a first storage unit and a second storage unit, where the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ⌈·⌉ indicates rounding up.
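  • The formula above can be evaluated with plain integer arithmetic. The sketch below is a minimal illustration, assuming T1 and Tread are both expressed in nanoseconds and that T1 is at least one write clock cycle; the helper name p_add is an assumption introduced here.

    #include <stdint.h>
    #include <stdio.h>

    /* P_add = ceil(T1 / Tread) - 1: the number of storage units between the unit
     * that will hold the processed packet and the unit holding the current tail
     * of the packet queue. */
    static uint32_t p_add(uint64_t t1_ns, uint64_t t_read_ns)
    {
        uint64_t units = (t1_ns + t_read_ns - 1) / t_read_ns;   /* ceil(T1/Tread) */
        return units > 0 ? (uint32_t)(units - 1) : 0;
    }

    int main(void)
    {
        /* e.g. a first latency of 10 us with an 800 ns write clock cycle */
        printf("P_add = %u storage units\n", p_add(10000, 800));  /* prints 12 */
        return 0;
    }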
  • Optionally, in the method shown in FIG. 4, the Precision Time Protocol (PTP) or the Network Time Protocol (NTP) is used to perform time synchronization between the first network device and the second network device.
  • In the foregoing solution, an error in calculating the target latency caused by asynchrony between a reference time point of the first network device and a reference time point of the second network device can be avoided.
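  • A small numeric illustration (with assumed values) of why this matters: if the clock of the second network device leads the clock of the first network device by an uncorrected offset, the target latency computed from the carried first time is wrong by exactly that offset.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        int64_t t_first_remote = 1000000;  /* first time, stamped on the second device clock (ns) */
        int64_t t_third_local  = 1150000;  /* third time, read on the first device clock (ns)     */
        int64_t clock_offset   = 20000;    /* second device clock ahead by 20 us                   */

        int64_t measured = t_third_local - t_first_remote;                 /* 150 us, uncorrected */
        int64_t actual   = t_third_local - (t_first_remote - clock_offset);/* 170 us, true period */
        printf("measured %lld ns, actual %lld ns, error %lld ns\n",
               (long long)measured, (long long)actual, (long long)(actual - measured));
        return 0;
    }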
  • Optionally, in the method shown in FIG. 4, S401 may be specifically that the first network device receives the packet that is from an RRU.
  • Optionally, in the method shown in FIG. 4, S401 may be specifically that the first network device receives the packet that is from a BBU.
  • For example, the first network device is a network device between the BBU and the RRU, and the second network device is the BBU or the RRU. The first network device is configured to connect the BBU and the RRU. Alternatively, multiple RRUs may connect to the BBU by using the first network device. In the foregoing solution, each RRU does not need to be directly connected to the BBU by a dedicated optical fiber, which helps reduce the quantity of optical fibers and reduce costs. In addition, a latency generated when the packet passes through the second network device and the first network device is equal to a target latency, where the target latency may be equal to a fixed value. When multiple packets pass through the second network device and the first network device, the first network device may perform a similar operation on each packet, that is, the first network device may perform operations of S401 to S407 on each packet. Therefore, when being configured to connect the BBU and the RRU, the first network device may be configured to forward a packet that carries a CPRI service, an SDH service, or a PDH service. The foregoing solution can reduce latency variation.
  • According to the foregoing method, after receiving a packet, a first network device determines, according to a target latency set by the first network device or a second network device, a first latency of a processed packet in a FIFO memory of the first network device. This enables the latency of the packet between the second network device and the first network device to be equal to the preset target latency, thereby avoiding the latency variation that would otherwise be introduced when the packet is transmitted, stored, forwarded, and switched between the second network device and the first network device.
  • On the basis of an inventive concept that is the same as that of the foregoing method, an embodiment of the present invention further provides a packet processing apparatus.
  • FIG. 7 is a schematic structural diagram of a packet processing apparatus according to an embodiment of the present invention. A packet processing apparatus 700 may be configured to perform the method shown in FIG. 1. For example, the packet processing apparatus 700 may be a PTN device, an OTN device, a router, or a switch.
  • Referring to FIG. 7, the packet processing apparatus 700 includes: a receiving unit 701, a processing unit 702, a reading unit 703, a first latency determining unit 704, a setting unit 705, and a forwarding unit 706.
  • The receiving unit 701 is configured to receive a packet at a first time.
  • The receiving unit 701 may be configured to perform S101. For a function and specific implementation of the receiving unit 701, reference may be made to the description of S101 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • The processing unit 702 is configured to process the packet received by the receiving unit 701 to obtain a processed packet, and write the processed packet into a buffer memory.
  • The processing unit 702 may be configured to perform S102. For a function and specific implementation of the processing unit 702, reference may be made to the description of S102 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • The reading unit 703 is configured to read, from the buffer memory at a second time, the processed packet obtained by the processing unit 702.
  • The reading unit 703 may be configured to perform S103. For a function and specific implementation of the reading unit 703, reference may be made to the description of S103 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • The first latency determining unit 704 is configured to determine, at a time after the second time, a first latency of the processed packet read by the reading unit 703 in a FIFO memory, where the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the forwarding unit 706 through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory includes multiple contiguous storage units.
  • The first latency determining unit 704 may be configured to perform S104. For a function and specific implementation of the first latency determining unit 704, reference may be made to the description of S104 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • The setting unit 705 is configured to: set a read pointer and/or a write pointer according to the first latency determined by the first latency determining unit 704; and write, according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or read, according to the set read pointer, the processed packet from a storage unit in the FIFO memory.
  • The setting unit 705 may be configured to perform S105. For a function and specific implementation of the setting unit 705, reference may be made to the description of S105 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • The forwarding unit 706 is configured to forward, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • The forwarding unit 706 may be configured to perform S107. For a function and specific implementation of the forwarding unit 706, reference may be made to the description of S107 in the embodiment corresponding to the method shown in FIG. 1, and details are not described herein again.
  • Optionally, the setting unit 705 is specifically configured to:
    • determine, according to the first latency, a location of the storage unit in the FIFO memory; and
    • set the write pointer according to the determined location of the storage unit, where the set write pointer points to the storage unit.
  • Optionally, the setting unit 705 is specifically configured to:
    • determine the location of the storage unit in the FIFO memory according to the following formula: P_add = ⌈T1 / Tread⌉ - 1,
    • where P_add indicates a quantity of storage units between a first storage unit and a second storage unit, where the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ⌈·⌉ indicates rounding up.
  • Optionally, a clock frequency at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous.
  • A clock phase at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
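  • A brief illustrative sketch of the effect of this synchronization (the names and numbers are assumptions, not the patented implementation): when the write pointer and the read pointer advance on the same clock, the distance between them, and therefore the time a packet waits in the FIFO, remains constant.

    #include <stdio.h>

    int main(void)
    {
        const unsigned depth = 16;       /* FIFO depth in storage units       */
        unsigned wr = 12, rd = 0;        /* pointers initially 12 units apart */

        for (unsigned cycle = 0; cycle < 5; cycle++) {
            wr = (wr + 1) % depth;       /* one write per shared clock cycle */
            rd = (rd + 1) % depth;       /* one read on the same clock edge  */
            printf("cycle %u: fill level = %u units\n",
                   cycle, (wr + depth - rd) % depth);   /* stays at 12 */
        }
        return 0;
    }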
  • Optionally, the receiving unit 701 is specifically configured to:
    • receive, at the first time, the packet that is from an RRU.
  • Optionally, the receiving unit 701 is specifically configured to:
    • receive, at the first time, the packet that is from a BBU.
  • On the basis of an inventive concept that is the same as that of the foregoing method, an embodiment of the present invention further provides a packet processing apparatus.
  • FIG. 8 is a schematic structural diagram of a packet processing apparatus according to an embodiment of the present invention. A packet processing apparatus 800 may be configured to perform the method shown in FIG. 4. For example, the packet processing apparatus 800 may be a PTN device, an OTN device, a router, or a switch.
  • Referring to FIG. 8, the packet processing apparatus 800 includes: a receiving unit 801, a processing unit 802, a reading unit 803, a first latency determining unit 804, a setting unit 805, and a forwarding unit 806.
  • The receiving unit 801 is configured to receive a packet that is from a second network device, where the packet carries a first time, and the first time is a time at which the second network device receives the packet.
  • For example, the receiving unit 801 may be configured to perform S401. For a function and specific implementation of the receiving unit 801, reference may be made to the description of S401 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • The processing unit 802 is configured to process the packet received by the receiving unit 801 to obtain a processed packet, and write the processed packet into a buffer memory.
  • For example, the processing unit 802 may be configured to perform S402. For a function and specific implementation of the processing unit 802, reference may be made to the description of S402 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • The reading unit 803 is configured to read, from the buffer memory at a second time, the processed packet obtained by the processing unit 802.
  • For example, the reading unit 803 may be configured to perform S403. For a function and specific implementation of the reading unit 803, reference may be made to the description of S403 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • The first latency determining unit 804 is configured to determine, at a time after the second time, a first latency of the processed packet read by the reading unit 803 in a first in first out FIFO memory. The first latency is equal to a difference obtained by subtracting a second latency from a target latency. The target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the forwarding unit 806 through an egress port. The second latency is equal to a sum of a third latency and a fourth latency. The third latency is equal to a period from the first time to the second time. The fourth latency is a fixed latency, and the FIFO memory includes multiple contiguous storage units.
  • For example, the first latency determining unit 804 may be configured to perform S404. For a function and specific implementation of the first latency determining unit 804, reference may be made to the description of S404 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • The setting unit 805 is configured to: set a read pointer and/or a write pointer according to the first latency determined by the first latency determining unit 804; and write, according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or read, according to the set read pointer, the processed packet from a storage unit in the FIFO memory.
  • For example, the setting unit 805 may be configured to perform S405. For a function and specific implementation of the setting unit 805, reference may be made to the description of S405 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • The forwarding unit 806 is configured to forward, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  • For example, the forwarding unit 806 may be configured to perform S407. For a function and specific implementation of the forwarding unit 806, reference may be made to the description of S407 in the embodiment corresponding to the method shown in FIG. 4, and details are not described herein again.
  • Optionally, the setting unit 805 is specifically configured to:
    • determine, according to the first latency, a location of the storage unit in the FIFO memory; and
    • set, according to the determined location of the storage unit, the write pointer to point to the storage unit.
  • Optionally, the setting unit 805 is specifically configured to:
    • determine the location of the storage unit in the FIFO memory according to the following formula: P_add = ⌈T1 / Tread⌉ - 1,
    • where P_add indicates a quantity of storage units between a first storage unit and a second storage unit, where the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ⌈·⌉ indicates rounding up.
  • Optionally, a precision clock synchronization protocol or the Network Time Protocol is used to perform time synchronization between the apparatus and the second network device.
  • Optionally, the receiving unit 801 is specifically configured to:
    • receive the packet that is from an RRU.
  • Optionally, the receiving unit 801 is specifically configured to:
    • receive the packet that is from a BBU.
  • On the basis of an inventive concept that is the same as that of the foregoing method, an embodiment of the present invention further provides a network device.
  • FIG. 9 is a schematic structural diagram of a network device according to an embodiment of the present invention. For example, a network device 900 may be a PTN device, an OTN device, a router, or a switch.
  • Referring to FIG. 9, the network device 900 includes: a receiver circuit 901, a buffer memory 902, a FIFO memory 903, an instruction execution circuit 904, a transmitter circuit 905, and an instruction memory 906.
  • The instruction execution circuit 904 is coupled to the instruction memory 906. The instruction memory 906 is configured to store a computer instruction. The instruction execution circuit 904 implements a function by reading the computer instruction. For example, the instruction execution circuit 904 implements processing of a packet.
  • The instruction execution circuit 904 is separately coupled to the receiver circuit 901, the buffer memory 902, the FIFO memory 903, and the transmitter circuit 905. Specifically, the instruction execution circuit 904 may perform a read operation on the receiver circuit 901, so as to acquire data received by the receiver circuit 901. The instruction execution circuit 904 may perform a write operation on the transmitter circuit 905, so as to provide data to the transmitter circuit 905. The instruction execution circuit 904 may perform read and write operations on the buffer memory 902, and may perform read and write operations on the FIFO memory 903. An output end of the receiver circuit 901 is coupled to an input end of the buffer memory 902, so the buffer memory 902 may receive data sent by the receiver circuit 901. An output end of the buffer memory 902 is coupled to an input end of the FIFO memory 903, so the FIFO memory 903 may receive data sent by the buffer memory 902. An output end of the FIFO memory 903 is coupled to an input end of the transmitter circuit 905, so the transmitter circuit 905 may receive data sent by the FIFO memory 903. The network device 900 may be configured to perform the method shown in FIG. 1. The receiver circuit 901 may be configured to perform S101.
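  • The couplings described above can be summarized in a small structural sketch. The C declarations below are illustrative assumptions only; they merely record that the packet data path runs from the receiver circuit 901 through the buffer memory 902 and the FIFO memory 903 to the transmitter circuit 905, while the instruction memory 906 holds the computer instructions read by the instruction execution circuit 904.

    #include <stddef.h>

    /* Assumed type names; each struct stands in for the block of the same name in FIG. 9. */
    struct receiver_circuit    { int ingress_port; };
    struct buffer_memory       { unsigned char data[4096]; };
    struct fifo_memory         { unsigned char unit[16][64]; };   /* contiguous storage units */
    struct transmitter_circuit { int egress_port; };
    struct instruction_memory  { unsigned char code[1024]; };     /* computer instructions    */

    struct network_device_900 {
        struct receiver_circuit    rx;    /* output coupled to the buffer memory input */
        struct buffer_memory       buf;   /* output coupled to the FIFO memory input   */
        struct fifo_memory         fifo;  /* output coupled to the transmitter input   */
        struct transmitter_circuit tx;
        struct instruction_memory  imem;  /* read by the instruction execution circuit */
    };

    int main(void)
    {
        struct network_device_900 dev = {0};
        (void)dev;                        /* structure only; no behaviour modelled */
        return 0;
    }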
  • The instruction execution circuit 904 may perform S102 by accessing a computer program in the instruction memory 906, and read a processed packet by accessing the buffer memory 902. The instruction execution circuit 904 may perform S103 by accessing the computer program in the instruction memory 906.
  • The instruction execution circuit 904 may perform S104 by accessing the computer program in the instruction memory 906.
  • The instruction execution circuit 904 may perform S105 by accessing the computer program in the instruction memory 906, and perform a write operation and/or a read operation on the FIFO memory 903 by using a write pointer and/or a read pointer. The instruction execution circuit 904 may perform S106 by accessing the computer program in the instruction memory 906.
  • The transmitter circuit 905 may be configured to perform S107. Specifically, the transmitter circuit 905 may be configured to implement an egress port involved in S107.
  • The network device 900 may be configured to perform the method shown in FIG. 4. The receiver circuit 901 may be configured to perform S401.
  • The instruction execution circuit 904 may perform S402 by accessing the computer program in the instruction memory 906, and read a processed packet by accessing the buffer memory 902. The instruction execution circuit 904 may perform S403 by accessing the computer program in the instruction memory 906.
  • The instruction execution circuit 904 may perform S404 by accessing the computer program in the instruction memory 906.
  • The instruction execution circuit 904 may perform S405 by accessing the computer program in the instruction memory 906, and perform a write operation and/or a read operation on the FIFO memory 903 by using a write pointer and/or a read pointer. The instruction execution circuit 904 may perform S406 by accessing the computer program in the instruction memory 906.
  • The transmitter circuit 905 may be configured to perform S407. Specifically, the transmitter circuit 905 may be configured to implement an egress port involved in S407.
  • FIG. 10 is a schematic structural diagram of a network device according to an embodiment of the present invention. For example, a network device 1000 may be a PTN device, an OTN device, a router, or a switch.
  • Referring to FIG. 10, the network device 1000 includes: an ingress port 1001, an egress port 1002, a logic circuit 1003, and a memory 1004. The logic circuit 1003 is coupled to the ingress port 1001, the egress port 1002, and the memory 1004 by using a bus. The memory 1004 stores a computer program. The logic circuit 1003 may implement a function by executing the computer program stored by the memory 1004. For example, the logic circuit 1003 implements processing of a packet.
  • The network device 1000 may be configured to perform the method shown in FIG. 1. The network device 1000 may be configured to implement the first network device involved in the method shown in FIG. 1. The ingress port 1001 may be configured to perform S101. The logic circuit 1003 may perform S102 by accessing the computer program in the memory 1004. The memory 1004 may be configured to implement the buffer memory involved in S102.
  • The logic circuit 1003 may perform S103 by accessing the computer program in the memory 1004. The logic circuit 1003 may perform S104 by accessing the computer program in the memory 1004. In addition, the memory 1004 may be configured to implement the FIFO memory involved in S104.
  • The logic circuit 1003 may perform S105 by accessing the computer program in the memory 1004. The logic circuit 1003 may perform S106 by accessing the computer program in the memory 1004.
  • The egress port 1002 may be configured to perform S107. Specifically, the egress port 1002 may be configured to implement the egress port involved in S107.
  • The network device 1000 may be configured to perform the method shown in FIG. 4. The network device 1000 may be configured to implement the first network device involved in the method shown in FIG. 4. The ingress port 1001 may be configured to perform S401. The logic circuit 1003 may perform S402 by accessing the computer program in the memory 1004. The memory 1004 may be configured to implement the buffer memory involved in S402.
  • The logic circuit 1003 may perform S403 by accessing the computer program in the memory 1004. The logic circuit 1003 may perform S404 by accessing the computer program in the memory 1004. In addition, the memory 1004 may be configured to implement the FIFO memory involved in S404.
  • The logic circuit 1003 may perform S405 by accessing the computer program in the memory 1004. The logic circuit 1003 may perform S406 by accessing the computer program in the memory 1004.
  • The egress port 1002 may be configured to perform S407. Specifically, the egress port 1002 may be configured to implement an egress port involved in S407.
  • A person skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may use a form of hardware only embodiments, software only embodiments, or embodiments with a combination of software and hardware. Moreover, the present invention may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code.
  • The present invention is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present invention. It should be understood that computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions may be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of any other programmable data processing device to generate a machine, so that the instructions executed by a computer or a processor of any other programmable data processing device generate an apparatus for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions may also be stored in a computer readable memory that can instruct the computer or any other programmable data processing device to work in a specific manner, so that the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or the another programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the another programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.
  • A person skilled in the art may make modifications and variations to technical solutions provided in embodiments of the present invention. The present invention is intended to cover these modifications and variations provided that they fall within the scope of protection defined by the following claims and their equivalent technologies.

Claims (20)

  1. A packet processing method, wherein the method comprises:
    receiving, by a first network device, a packet at a first time;
    processing, by the first network device, the packet to obtain a processed packet, and writing the processed packet into a buffer memory;
    reading, by the first network device, the processed packet from the buffer memory at a second time;
    determining, by the first network device at a time after the second time, a first latency of the processed packet in a first in first out FIFO memory, wherein the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the first network device through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory comprises multiple contiguous storage units;
    setting, by the first network device, a read pointer and/or a write pointer according to the determined first latency;
    writing, by the first network device according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or reading, according to the set read pointer, the processed packet from a storage unit in the FIFO memory; and
    forwarding, by the first network device at the third time through the egress port, the processed packet that is read from the FIFO memory.
  2. The method according to claim 1, wherein the setting, by the first network device, a write pointer according to the determined first latency specifically comprises:
    determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory; and
    setting, by the first network device, the write pointer according to the determined location of the storage unit, wherein the set write pointer points to the storage unit.
  3. The method according to claim 2, wherein the determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory specifically comprises:
    determining, by the first network device, the location of the storage unit in the FIFO memory according to the following formula: P_add = ⌈T1 / Tread⌉ - 1,
    wherein P_add indicates a quantity of storage units between a first storage unit and a second storage unit, wherein the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ⌈·⌉ indicates rounding up.
  4. The method according to any one of claims 1 to 3, wherein a clock frequency at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous; and
    a clock phase at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  5. The method according to any one of claims 1 to 4, wherein the receiving, by a first network device, a packet at a first time comprises:
    receiving, by the first network device at the first time, the packet that is from a remote radio unit RRU; or
    receiving, by the first network device at the first time, the packet that is from a baseband unit BBU.
  6. A packet processing method, wherein the method comprises:
    receiving, by a first network device, a packet that is from a second network device, wherein the packet carries a first time, and the first time is a time at which the second network device receives the packet;
    processing, by the first network device, the packet to obtain a processed packet, and writing the processed packet into a buffer memory;
    reading, by the first network device, the processed packet from the buffer memory at a second time;
    determining, by the first network device at a time after the second time, a first latency of the processed packet in a first in first out FIFO memory, wherein the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by the first network device through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory comprises multiple contiguous storage units;
    setting, by the first network device, a read pointer and/or a write pointer according to the determined first latency;
    writing, by the first network device according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or reading, according to the set read pointer, the processed packet from a storage unit in the FIFO memory; and
    forwarding, by the first network device at the third time through the egress port, the processed packet that is read from the FIFO memory.
  7. The method according to claim 6, wherein the setting, by the first network device, a write pointer according to the first latency specifically comprises:
    determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory; and
    setting, by the first network device according to the determined location of the storage unit, the write pointer to point to the storage unit.
  8. The method according to claim 7, wherein the determining, by the first network device according to the first latency, a location of the storage unit in the FIFO memory specifically comprises:
    determining, by the first network device, the location of the storage unit in the FIFO memory according to the following formula: P_add = ⌈T1 / Tread⌉ - 1,
    wherein P_add indicates a quantity of storage units between a first storage unit and a second storage unit, wherein the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ⌈·⌉ indicates rounding up.
  9. The method according to any one of claims 6 to 8, wherein a precision clock synchronization protocol or the Network Time Protocol is used to perform time synchronization between the first network device and the second network device.
  10. The method according to any one of claims 6 to 9, wherein the receiving, by a first network device, a packet comprises:
    receiving, by the first network device, the packet that is from a remote radio unit RRU; or
    receiving, by the first network device, the packet that is from a baseband unit BBU.
  11. A packet processing apparatus, wherein the apparatus comprises:
    a receiving unit, configured to receive a packet at a first time;
    a processing unit, configured to process the packet received by the receiving unit to obtain a processed packet, and write the processed packet into a buffer memory;
    a reading unit, configured to read, from the buffer memory at a second time, the processed packet obtained by the processing unit;
    a first latency determining unit, configured to determine, at a time after the second time, a first latency of the processed packet read by the reading unit in a first in first out FIFO memory, wherein the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by a forwarding unit through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory comprises multiple contiguous storage units;
    a setting unit, configured to: set a read pointer and/or a write pointer according to the first latency determined by the first latency determining unit; and write, according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or read, according to the set read pointer, the processed packet from a storage unit in the FIFO memory; and
    the forwarding unit, configured to forward, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  12. The apparatus according to claim 11, wherein the setting unit is specifically configured to:
    determine, according to the first latency, a location of the storage unit in the FIFO memory; and
    set the write pointer according to the determined location of the storage unit, wherein the set write pointer points to the storage unit.
  13. The apparatus according to claim 12, wherein the setting unit is specifically configured to:
    determine the location of the storage unit in the FIFO memory according to the following formula: P_add = ⌈T1 / Tread⌉ - 1,
    wherein P_add indicates a quantity of storage units between a first storage unit and a second storage unit, wherein the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ⌈·⌉ indicates rounding up.
  14. The apparatus according to any one of claims 11 to 13, wherein a clock frequency at which the write pointer performs a write operation on the FIFO memory and a clock frequency at which the read pointer performs a read operation on the FIFO memory are synchronous; and
    a clock phase at which the write pointer performs a write operation on the FIFO memory and a clock phase at which the read pointer performs a read operation on the FIFO memory are synchronous.
  15. The apparatus according to any one of claims 11 to 14, wherein the receiving unit is specifically configured to:
    receive, at the first time, the packet that is from a remote radio unit RRU; or
    receive, at the first time, the packet that is from a baseband unit BBU.
  16. A packet processing apparatus, wherein the apparatus comprises:
    a receiving unit, configured to receive a packet that is from a second network device, wherein the packet carries a first time, and the first time is a time at which the second network device receives the packet;
    a processing unit, configured to process the packet received by the receiving unit to obtain a processed packet, and write the processed packet into a buffer memory;
    a reading unit, configured to read, from the buffer memory at a second time, the processed packet obtained by the processing unit;
    a first latency determining unit, configured to determine, at a time after the second time, a first latency of the processed packet read by the reading unit in a first in first out FIFO memory, wherein the first latency is equal to a difference obtained by subtracting a second latency from a target latency, the target latency is equal to a period from the first time to a third time at which the processed packet is forwarded by a forwarding unit through an egress port, the second latency is equal to a sum of a third latency and a fourth latency, the third latency is equal to a period from the first time to the second time, the fourth latency is a fixed latency, and the FIFO memory comprises multiple contiguous storage units;
    a setting unit, configured to: set a read pointer and/or a write pointer according to the first latency determined by the first latency determining unit; and write, according to the set write pointer, the processed packet into a storage unit in the FIFO memory, or read, according to the set read pointer, the processed packet from a storage unit in the FIFO memory; and
    the forwarding unit, configured to forward, at the third time through the egress port, the processed packet that is read from the FIFO memory.
  17. The apparatus according to claim 16, wherein the setting unit is specifically configured to:
    determine, according to the first latency, a location of the storage unit in the FIFO memory; and
    set, according to the determined location of the storage unit, the write pointer to point to the storage unit.
  18. The apparatus according to claim 17, wherein the setting unit is specifically configured to:
    determine the location of the storage unit in the FIFO memory according to the following formula: P_add = ⌈T1 / Tread⌉ - 1,
    wherein P_add indicates a quantity of storage units between a first storage unit and a second storage unit, wherein the first storage unit and the second storage unit are storage units in the multiple contiguous storage units, the first storage unit is configured to store the processed packet, the multiple contiguous storage units are configured to store a packet queue, each storage unit is configured to store only one packet or null data, and the second storage unit is configured to store a tail of the packet queue; T1 indicates the first latency; Tread indicates a clock cycle in which the write pointer performs a write operation on the FIFO memory; and ⌈·⌉ indicates rounding up.
  19. The apparatus according to any one of claims 16 to 18, wherein a precision clock synchronization protocol or the Network Time Protocol is used to perform time synchronization between the apparatus and the second network device.
  20. The apparatus according to any one of claims 16 to 19, wherein the receiving unit is specifically configured to:
    receive the packet that is from a remote radio unit RRU; or
    receive the packet that is from a baseband unit BBU.
EP15892862.2A 2015-05-25 2015-05-25 Packet processing method and apparatus Active EP3255841B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/079716 WO2016187781A1 (en) 2015-05-25 2015-05-25 Packet processing method and apparatus

Publications (3)

Publication Number Publication Date
EP3255841A1 true EP3255841A1 (en) 2017-12-13
EP3255841A4 EP3255841A4 (en) 2018-03-21
EP3255841B1 EP3255841B1 (en) 2019-09-11

Family

ID=57392361

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15892862.2A Active EP3255841B1 (en) 2015-05-25 2015-05-25 Packet processing method and apparatus

Country Status (4)

Country Link
US (1) US10313258B2 (en)
EP (1) EP3255841B1 (en)
CN (1) CN107615718B (en)
WO (1) WO2016187781A1 (en)


Also Published As

Publication number Publication date
US10313258B2 (en) 2019-06-04
CN107615718A (en) 2018-01-19
WO2016187781A1 (en) 2016-12-01
US20180077076A1 (en) 2018-03-15
CN107615718B (en) 2020-06-16
EP3255841A4 (en) 2018-03-21
EP3255841B1 (en) 2019-09-11

