CN113411262A - Method and device for setting large receive offload function - Google Patents

Method and device for setting large receive offload function

Info

Publication number
CN113411262A
Authority
CN
China
Prior art keywords
lro
queue
function
receiving
message
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110527602.2A
Other languages
Chinese (zh)
Other versions
CN113411262B (en)
Inventor
曲会春
徐成
程韬
武雪平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
XFusion Digital Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202110527602.2A priority Critical patent/CN113411262B/en
Publication of CN113411262A publication Critical patent/CN113411262A/en
Application granted granted Critical
Publication of CN113411262B publication Critical patent/CN113411262B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/19 - Flow control; Congestion control at layers above the network layer
    • H04L 47/193 - Flow control; Congestion control at layers above the network layer at the transport layer, e.g. TCP related

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a method and an apparatus for setting a large receive offload (LRO) function. The method includes: determining start-stop information of the LRO function of a receive queue, where the start-stop information indicates starting or stopping the LRO function of the receive queue; and setting the LRO function of the receive queue according to the start-stop information, so as to improve the overall performance of the system.

Description

Method and device for setting large receive offload function
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for setting a Large Receive Offload (LRO) function.
Background
LRO is an offload technology implemented by a network interface card (NIC): for example, the aggregation of Transmission Control Protocol (TCP) message slices (segments) is offloaded from the processor to the NIC. Specifically, when the network card starts or stops the LRO function, all receive queues start or stop the function at the same time. When the network card starts the LRO function, it aggregates received TCP message slices belonging to the same data stream into one TCP message or one large TCP message slice. Taking the Linux operating system as an example of the operating system adopted by the server where the network card driver is located, the network card driver may convert the aggregated TCP message or large TCP message slice into data of a socket buffer (SKB) structure and send the SKB-structure data to the processor, which then completes the subsequent processing of the protocol stack (such as the TCP/IP protocol stack). The aggregation of message slices and the conversion into the SKB structure thus no longer need to be executed by the processor, reducing the processor's processing overhead.
However, if the aggregation effect is poor, the TCP message-slice aggregation that the network card performs after the LRO function is started only lengthens the processing time of TCP messages. How to set the start and stop of the LRO function so as to improve the overall performance of the system has therefore become an urgent technical problem.
Disclosure of Invention
The application provides a method and an apparatus for setting an LRO function, which help improve the overall performance of a system.
In a first aspect, the present application provides a method for setting an LRO function. The method may include: determining start-stop information of the LRO function of a receive queue, where the start-stop information indicates starting or stopping the LRO function of the receive queue; and setting the LRO function of the receive queue according to the start-stop information. Specifically, when the start-stop information indicates starting the LRO function of the receive queue, the LRO function of the receive queue is started; when the start-stop information indicates stopping the LRO function of the receive queue, the LRO function of the receive queue is stopped. In this technical solution, the LRO function is set at the granularity of the receive queue, so that, by reasonably choosing the conditions for starting or stopping the LRO function of a receive queue, the aggregation effect of message slices is balanced against their processing time, improving the processing efficiency and the overall performance of the system.
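For illustration only, the core of this per-queue setting can be pictured with the following minimal C sketch; the structure, names, and enum are assumptions of the sketch, not elements defined by the embodiments:

```c
#include <stdbool.h>

/* Hypothetical per-receive-queue state; the names are illustrative
 * and not taken from any real driver. */
struct rxq_lro_state {
    unsigned queue_id;
    bool     lro_enabled;
};

/* Start-stop information: indicates starting or stopping the LRO
 * function of one receive queue. */
enum lro_start_stop { LRO_START, LRO_STOP };

/* Set the LRO function of a single receive queue according to the
 * start-stop information, leaving every other queue untouched. */
static void set_queue_lro(struct rxq_lro_state *q, enum lro_start_stop info)
{
    q->lro_enabled = (info == LRO_START);
}
```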
Starting the LRO function of the receive queue includes: starting to aggregate the message slices that belong to the same data stream in the receive queue. It can be understood that whether message slices can actually be aggregated also depends on whether the message slices belonging to the same data stream satisfy the aggregation condition.
Optionally, if a plurality of consecutive message slices in the receive queue belong to the same data stream and their serial numbers are consecutive, the message slices satisfy the aggregation condition; otherwise, they do not.
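The aggregation condition can be sketched in C as below; the object descriptor and its fields are assumptions made for this sketch, and the length check anticipates the optional maximum-aggregation-length condition described later in the detailed description:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative descriptor of one object in a receive queue; the field
 * names are assumptions for this sketch, not a real hardware format. */
struct rx_obj {
    bool     is_slice;   /* message slice vs. complete message        */
    uint32_t flow_hash;  /* hash of the object's quintuple            */
    uint32_t serial;     /* slice serial number; for an aggregate,
                            the serial of its last slice              */
    uint32_t len;        /* current length in bytes                   */
};

/* Adjacent message slices satisfy the aggregation condition when they
 * belong to the same data stream, their serial numbers are consecutive,
 * and the aggregated length stays within the maximum aggregation length
 * the network card can support (e.g. one 64K cache region). */
static bool can_aggregate(const struct rx_obj *prev,
                          const struct rx_obj *next,
                          uint32_t max_aggr_len)
{
    if (!prev->is_slice || !next->is_slice)
        return false;                        /* only slices are aggregated  */
    if (prev->flow_hash != next->flow_hash)
        return false;                        /* different data streams      */
    if (next->serial != prev->serial + 1)
        return false;                        /* serial numbers not adjacent */
    return prev->len + next->len <= max_aggr_len;
}
```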
In one possible implementation, determining the start-stop information of the LRO function of the receive queue includes: when the LRO function of the receive queue is started, counting the probability that the aggregation process of the receive queue is interrupted, where the aggregation process of the receive queue is interrupted once if a target object obtained after the LRO function is executed for the receive queue is a message slice, is not the last message slice of the message to which it belongs, and has a length less than or equal to a first threshold; and when the probability of interruption of the aggregation process of the receive queue is greater than or equal to a second threshold, determining that the start-stop information indicates stopping the LRO function of the receive queue. When that probability is greater than or equal to the second threshold, the aggregation process of the receive queue can be considered to be interrupted frequently, which suggests severe interleaving among the message slices of the multiple data streams in the receive queue. In this case, because interruptions are frequent, few message slices are actually aggregated. Continuing to execute the LRO function for the receive queue would not deliver the reduction in CPU occupancy that the LRO function is meant to provide; instead, the network card would still have to check one by one whether each received message slice satisfies the aggregation condition, slowing data processing and affecting the overall performance of the system. The LRO function of the receive queue may therefore be stopped at this point.
It will be appreciated that which messages and/or message slices a target object specifically includes is determined by the objects in the receive queue. Optionally, at least one of the first threshold and the second threshold is configurable. The first threshold may be determined according to the aggregation capability of the network card of the receiving-end server, and the second threshold may be determined according to the aggregation effect of the message slices in the receive queue and the time the network card of the receiving-end server takes to process messages/message slices.
In a possible implementation, counting the probability that the aggregation process of the receive queue is interrupted includes: counting the number of times (denoted x) the aggregation process of the receive queue is interrupted within a first preset time period, and the number (denoted y) of target objects obtained after the LRO function is executed for the receive queue within the first preset time period; and determining the probability of interruption of the aggregation process of the receive queue from x and y. For example, the probability may be determined by dividing x by y. Of course, the embodiments of the present application are not limited thereto.
In one possible implementation, the method further includes: selecting one threshold set from a plurality of threshold sets, each threshold set including a third threshold and a fourth threshold; and using the third threshold of the selected set as the first threshold and the fourth threshold of the selected set as the second threshold. This makes it possible to select different thresholds for different pending services (i.e., the services to which the objects in the receive queue belong), helping to improve the overall performance of the system.
In one possible implementation, determining the start-stop information of the LRO function of the receive queue includes: when the LRO function of the receive queue is stopped, counting the probability that the data streams to which the objects in the receive queue belong are multi-stream, where an object is a message and/or a message slice; and if the counted probability is less than or equal to a fifth threshold, determining that the start-stop information indicates starting the LRO function of the receive queue. Optionally, the fifth threshold is configurable. When the probability that the data streams to which the objects in a receive queue belong are multi-stream is less than or equal to the fifth threshold, the interleaving among the data streams of the receive queue can be considered mild. In this case, executing the LRO function for the receive queue interrupts the aggregation of message slices belonging to the same data stream only rarely and aggregates many message slices, so the reduction in CPU occupancy brought by the LRO function can be fully realized, improving the overall performance of the system.
In a possible implementation, counting the probability that the data streams to which the objects in the receive queue belong are multi-stream includes: counting the number of times (denoted c) the data stream to which an object in the receive queue belongs is multi-stream within a second preset time period, and the number (denoted d) of objects of the receive queue within the second preset time period, where the multi-stream count of the receive queue is increased by 1 whenever the hash values of two adjacent objects in the receive queue differ; and determining the probability from c and d. For example, the probability may be obtained by dividing c by d, although the embodiments of the present application are not limited thereto.
In one possible implementation, the method may further include: selecting one threshold from a plurality of thresholds and using the selected threshold as the fifth threshold. This makes it possible to select different thresholds for different pending services (i.e., the services to which the objects in the receive queue belong), helping to improve the overall performance of the system.
The execution body of the first aspect or any possible implementation of the first aspect may be a receiving-end server or the network card of the receiving-end server. A receiving-end server is a server that receives data (including messages and message slices). For a given server, when it is used to send data it is called the sending-end server, and when it is used to receive data it is called the receiving-end server.
In a second aspect, the present application provides an apparatus for setting the large receive offload (LRO) function, where the apparatus includes modules for performing the LRO function setting method in the first aspect or any one of its possible implementations.
In a third aspect, the present application provides a device for setting the large receive offload (LRO) function, where the device includes a memory and a processor. The memory stores computer-executable instructions, and when the device runs, the processor executes the computer-executable instructions in the memory to perform, using the hardware resources of the device, the operation steps of the method in the first aspect or any one of its possible implementations. The device may be a receiving-end server or the network card of the receiving-end server.
In a fourth aspect, the present application further provides a computer-readable storage medium storing a computer program which, when run on a computer, causes the computer to perform the operation steps of the method in the first aspect or any one of its possible implementations.
In a fifth aspect, the present application further provides a computer program product which, when run on a computer, causes the computer to perform the operation steps of the method in the first aspect or any one of its possible implementations.
It is understood that each of the apparatuses, computer-readable storage media, and computer program products provided above is used to execute the corresponding method provided above; for the beneficial effects they achieve, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present application;
fig. 2 is a schematic diagram of a cache region and a corresponding relationship between cache region descriptors, a receiving queue and an object according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a correspondence relationship between a network card, a port, and a receive queue according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a relationship between an object and a target object according to an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a method for setting an LRO function according to an embodiment of the present disclosure;
fig. 6 is a flowchart illustrating another method for setting an LRO function according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of an LRO function setting apparatus according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a hardware structure of a network card according to an embodiment of the present application;
fig. 10 is a schematic hardware structure diagram of a server according to an embodiment of the present application.
Detailed Description
The technical scheme provided by the application is further described in the following with reference to the attached drawings.
Fig. 1 is a schematic structural diagram of a communication system according to an embodiment of the present application. As shown, the communication system includes a server 100 and a server 200. The server 100 and the server 200 can communicate messages through a network 300, and the network 300 comprises an ethernet network, that is, the servers communicate with each other using a TCP/IP protocol. The server that sends data (including messages and message slices) is also called a sending-end server, and the server that receives data is also called a receiving-end server. For a server, when the server is used to send data, the server is called the sending server, and when the server is used to receive data, the server is called the receiving server.
Each of the server 100 and the server 200 includes a hardware layer and a software layer; only the structure of the server 100 is illustrated in fig. 1. The hardware layer of the server 100 includes a network card, a memory, and one or more processors, such as a central processing unit (CPU). The software layer is the program code running on the hardware layer. Specifically, the software layer may be divided into several layers that communicate with each other through software interfaces. From top to bottom, the software layer comprises an application layer, an operating system layer, and a driver layer. The application layer comprises a series of program codes for running application programs. The operating system layer includes the operating system program code and a protocol stack. The operating system may be Linux, Windows, VxWorks, or the like. A protocol stack is a collection of program codes divided according to the different levels involved in a communication protocol, each processing the data of its corresponding level. For convenience of description, the following embodiments of the present application are described taking the case where the operating system is Linux, the protocol stack is a TCP/IP protocol stack, and the data structure processed by the TCP/IP protocol stack is the SKB structure. The driver layer is used to implement message interaction between the hardware layer and the software layer, and includes the network card driver and the like.
In order to better understand the technical solutions provided by the embodiments of the present application, first, the terms and techniques related to the embodiments of the present application are briefly described.
1) Messages, message slices, objects
The messages involved in the embodiments of the present application may be TCP messages or User Datagram Protocol (UDP) messages. The lengths of different messages may be the same or different.
After a message is divided into several segments, each segment is called a message slice. Specifically, after the network card in a sending-end server receives a data instruction from the processor of that server, if it determines that the length of the message carrying the data instruction exceeds a threshold, it divides the data instruction into a plurality of data segments according to the Maximum Transmission Unit (MTU), for example 1500 B. Each data segment is transmitted in one message, and a message transmitting one such data segment may be called a message slice. Serial numbers are allocated to the message slices in order, so that the serial numbers of two adjacent message slices are consecutive. The network card then sends each message slice to the receiving-end server.
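As a rough illustration of this slicing step only (real network cards also encapsulate a new message header for each slice; the MTU value is the example from the text):

```c
#include <stdint.h>
#include <stdio.h>

#define MTU 1500u  /* example Maximum Transmission Unit from the text */

/* Split a data instruction of total_len bytes into MTU-sized segments
 * and assign consecutive serial numbers, as the sending-end network
 * card does before transmitting each segment as one message slice. */
static void split_into_slices(uint32_t total_len)
{
    uint32_t offset = 0, serial = 0;

    while (offset < total_len) {
        uint32_t slice_len = total_len - offset;
        if (slice_len > MTU)
            slice_len = MTU;
        printf("slice %u: offset %u, len %u\n", serial, offset, slice_len);
        serial++;            /* adjacent slices get consecutive serial numbers */
        offset += slice_len;
    }
}
```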
For convenience of description, in the embodiment of the present application, the message and the message slice sent by the network card in the sending-end server are collectively referred to as an "object", or the message and the message slice received by the network card in the receiving-end server are collectively referred to as an "object".
2) Data flow
A data stream is a set of messages and/or message slices that share the same quintuple. The quintuple consists of the source Internet Protocol (IP) address, the destination IP address, the source port number, the destination port number, and the protocol number. That is, the quintuple information of every message and/or message slice in the same data stream is identical, while different data streams have different quintuple information; two quintuples differ when at least one of their items differs.
3) Buffer and buffer descriptor
The cache region is a storage space for caching objects. It may be a storage space in the network card, or a storage space divided from another memory of the server to implement the function of caching objects.
The buffer descriptor is information for describing a buffer, for example, whether the buffer is free, the size of the buffer, and the like.
4) Receive Queue (RQ)
The receiving-end server may store a received object in the cache region described by a buffer descriptor of the receive queue. For convenience of description, in the embodiments of the present application these objects are referred to as objects belonging to the receive queue, or objects in the receive queue. One cache region may store one or more objects, and one object may also be stored across multiple cache regions.
Fig. 2 is a schematic diagram of a cache region and a corresponding relationship between a cache region descriptor, a receive queue, and an object according to an embodiment of the present application. As shown, the buffer descriptor connected by the dashed line with double arrows is used to describe the buffer connected by the dashed line. The arrow to the right above receive queue 1 indicates the order of the buffer descriptors in the receive queue. Fig. 2 illustrates an example in which the objects belonging to the receive queue 1 include an object 1 to an object w, and each object is stored in one buffer, where w is an integer greater than or equal to 1.
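A minimal C sketch of these structures, purely for orientation (the field names and queue depth are assumptions, not a format defined by the embodiments):

```c
#include <stdbool.h>
#include <stdint.h>

#define RQ_DEPTH 256  /* illustrative queue depth */

/* A buffer descriptor records information describing one cache region:
 * where it is, how large it is, and whether it is free. */
struct buf_desc {
    void    *addr;  /* start address of the cache region            */
    uint32_t size;  /* size of the cache region in bytes            */
    bool     free;  /* whether the region currently holds no object */
};

/* A receive queue is an ordered sequence of buffer descriptors; the
 * objects stored in the described cache regions are the objects
 * "in" or "belonging to" the receive queue. */
struct rx_queue {
    struct buf_desc descs[RQ_DEPTH];
    uint32_t head;  /* next descriptor the network card fills       */
    uint32_t tail;  /* next descriptor the driver consumes          */
};
```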
The data streams to which different objects in a receive queue belong may be the same or different. In one possible implementation, the network card of the receiving-end server may apply a preset hash algorithm to the quintuple of each object to obtain a hash value, and use the hash values to determine whether two adjacent objects in a receive queue belong to the same data stream: if the hash values of the two adjacent objects are the same, the two objects are considered to belong to the same data stream; if they differ, the two objects are considered to belong to different data streams. As another possible implementation, the network card of the receiving-end server may determine in other ways whether two adjacent objects of the same receive queue belong to the same data stream, for example, each object may carry a field identifying its data stream; this is not limited in the embodiments of the present application.
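For illustration, such a hash comparison might look as follows in C; the hash function here is a stand-in (real network cards typically use a Toeplitz-style RSS hash), and the quintuple layout is an assumption:

```c
#include <stdbool.h>
#include <stdint.h>

/* The quintuple that identifies a data stream. */
struct five_tuple {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  protocol;
};

/* Stand-in hash: any deterministic function of the quintuple works for
 * the comparison; FNV-1a-style mixing is used here for brevity. */
static uint32_t flow_hash(const struct five_tuple *t)
{
    uint32_t h = 2166136261u;
    h = (h ^ t->src_ip) * 16777619u;
    h = (h ^ t->dst_ip) * 16777619u;
    h = (h ^ t->src_port) * 16777619u;
    h = (h ^ t->dst_port) * 16777619u;
    h = (h ^ t->protocol) * 16777619u;
    return h;
}

/* Two adjacent objects are treated as belonging to the same data
 * stream when their hash values are equal. */
static bool same_stream(const struct five_tuple *a, const struct five_tuple *b)
{
    return flow_hash(a) == flow_hash(b);
}
```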
5) Network card and physical function (physical function, PF)
The network card, which may also be referred to as a network interface card, has a main function of connecting a plurality of servers to a network so that the servers can communicate with each other through the network. The network card may be connected to the network by external optical fibers, cables, etc. The network card may be inserted into a peripheral component interconnect express (PCIe) slot of the computer, and connected to the server through PCIe. Or the network card may be connected to the server through a specific (or private) bus, which is not limited in this embodiment. It is understood that in physical implementation, the network card may be a part of the server or may be a device/apparatus independent of the server. For convenience of description, the network card is hereinafter described as the network card of the server.
The network card may include one or more ports, particularly ports for receiving data. Typically, there is one PF per port. Of course, there may be scenarios where one port corresponds to multiple PFs or where multiple ports correspond to one PF. A PF is understood to be a logical network card that is capable of performing all the logical functions of a network card.
One PF may support one or more receive queues, and which receive queues a PF supports may be predefined. The receive queues that a PF supports may be considered the receive queues managed by that PF.
Fig. 3 is a schematic diagram of a correspondence relationship between a network card, a port, and a receive queue according to an embodiment of the present application. As shown in the figure, the network card includes ports 1 to n, where n is an integer greater than or equal to 1, and one port corresponds to one PF. PF1 supports m receive queues (labeled receive queues 11-1 m), PF2 supports k receive queues (labeled receive queues 21-1 k), … …, PFn supports t receive queues (labeled receive queues n 1-nt), and m, k, and t are integers greater than or equal to 1.
Each PF may correspond to a hash algorithm, and hash algorithms corresponding to different PFs may be the same or different. The network card can perform hash operation on an object received from a port corresponding to a PF according to a hash algorithm corresponding to the PF to obtain a hash value of the object; then, according to the mapping relationship between the preset hash values and the receiving queues (i.e. the receiving queues corresponding to the PF), the receiving queue corresponding to the object is determined, and the object is used as the object in the receiving queue.
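A tiny C sketch of such a dispatch, assuming the predefined mapping is a plain modulo; this reproduces the worked example given later in S304 for 16 receive queues and 32 hash values:

```c
#include <stdint.h>

/* Map an object's hash value to one of this PF's receive queues.
 * With num_queues = 16, hash value a and hash value 16 + a both map
 * to receive queue a, matching the example given for S304. */
static uint32_t queue_for_hash(uint32_t hash, uint32_t num_queues)
{
    return ((hash - 1) % num_queues) + 1;   /* 1-based queue index */
}
```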
As a possible embodiment, when the server supports a single-root I/O virtualization (SR-IOV) function, each PF may also correspond to multiple Virtual Functions (VFs), and each VF corresponds to one or more receiving queues, and different data streams may be stored in the one or more receiving queues respectively.
It should be noted that, in the embodiment of the present application, the processing procedures are similar for the above situations of the PF and the VF, and for convenience of description, a PF supporting one or more receive queues is taken as an example for description.
6) LRO and target object
The LRO is a network card acceleration technology, and specifically, aggregation of message slices is realized through a network card of a receiving-end server. After the LRO function of the network card is started, the CPU does not need to execute the aggregation operation of the message slices, so that the occupancy rate of the CPU can be reduced.
The network card executes the LRO function for one receive queue as follows:
step 1: the network card judges whether the objects in the receiving queue need to be subjected to aggregation operation or not.
Specifically, if the object is a message, the network card does not need to aggregate it; that is, if a message received by the network card of the receiving-end server contains a complete data instruction, no aggregation operation is needed. If the object received by the network card of the receiving-end server is a message slice, the network card needs to aggregate multiple message slices. In other words, only message slices need to be aggregated.
Step 2: If the object needs aggregation processing, determine whether the object can participate in aggregation.
Specifically, it is first judged whether a plurality of message slices in the receive queue satisfy the aggregation condition; if so, the message slices can participate in aggregation. Consecutive message slices in the receive queue that belong to the same data stream and have consecutive serial numbers satisfy the aggregation condition. More specifically, it is judged whether two adjacent message slices satisfy the aggregation condition; if so, both can participate in aggregation. The preceding one of the two may be a message slice belonging to the receive queue, or a message slice obtained after aggregating a plurality of message slices of the receive queue; the following one is a message slice belonging to the receive queue. If the two message slices belong to the same data stream and their serial numbers are consecutive, the aggregation condition is satisfied; otherwise, it is not.
For example, assume that the objects belonging to a receive queue, in the order received by the network card of the receiving-end server, are: message slice 11, message slice 12, and message slice 13, all three obtained by splitting message 1. When executing the LRO function, the network card may first aggregate message slice 11 and message slice 12 to obtain message slice 11+12, and then aggregate message slice 11+12 with message slice 13.
Optionally, the aggregation condition further includes: the length of the message slice or message obtained after aggregation does not exceed the maximum aggregation length that the network card can support. The maximum aggregation length may be the size of a cache region (e.g., 64K), or any other preset length; this is not limited in the embodiments of the present application. For example, if a message of length 128K is divided into 8 message slices of 16K each, and the 8 consecutive objects of a receive queue are these 8 message slices received by the network card of the receiving-end server with consecutive serial numbers, the network card may aggregate the first 4 message slices and the last 4 message slices separately. For convenience of description, the following takes as an example the case where the length of the message slice or message obtained after aggregation does not exceed the maximum aggregation length supportable by the network card; this is stated here once and not repeated below.
Optionally, for a plurality of message slices satisfying the aggregation condition, the aggregation operation may be performed on two message slices at a time, on more than two message slices at a time, or on all the message slices at once to form one large message slice.
In one possible implementation, if the objects adjacent to each other before and after a message slice and the message slice do not belong to the same data stream, the message slice cannot participate in aggregation.
In another possible implementation, even though the objects adjacent to a message slice belong to the same data stream as that slice, the slice still cannot participate in aggregation if both adjacent objects are messages, or both are message slices whose serial numbers are not consecutive with its serial number, or one adjacent object is a message and the other is a message slice whose serial number is not consecutive with its serial number.
Step 3: The network card aggregates the objects that can participate in aggregation.
A target object is obtained after the LRO function is executed on one or more objects in the receive queue. The target object includes: a message in the receive queue, a message slice in the receive queue that cannot participate in aggregation, and/or a message slice or message obtained after aggregating a plurality of message slices in the receive queue. That is, if an object does not need the aggregation operation, the object itself is a target object. If an object needs the aggregation operation but cannot participate in aggregation, the object itself is a target object. If an object can participate in aggregation, the message slice or message obtained after aggregating the object with all the message slices that can be aggregated with it is one target object.
Fig. 4 is a schematic diagram of a relationship between an object and a target object according to an embodiment of the present disclosure. In fig. 4, the objects in the receive queue 1 sequentially include: message slice 11, message slice 12, message slice 21, message slice 22, message slice 23, message slice 13, and message 3. The message slices 11-13 are obtained by segmenting the message 1, and the message slice 13 is the last message slice of the message 1; the message slices 21-23 are obtained by segmenting the message 2, and the message slice 23 is the last message slice of the message 2. After the LRO function is performed for receive queue 1, the target objects obtained are in turn: message slice 11+12, message 2, message slice 13, and message 3. The message slice 11+12 is a message slice obtained by aggregating the message slice 11 and the message slice 12.
7) TCP slice offload (TCP segment offload, TSO)
TSO is the process by which a sending-end server divides a message into message slices. The TSO function works together with the LRO function to accelerate traffic in the server system shown in fig. 1. Specifically:
For the sending-end server, when the processor has completed the TCP/IP protocol-stack processing and needs to send a message, the TSO function being on, the processor can hand the TCP message in SKB structure to the network card driver. The network card driver sequentially stores the TCP messages to be sent into the cache regions described by the buffer descriptors of a send queue (SQ) and notifies the network card to send them. After receiving the notification, if the network card determines that the length of a TCP message is greater than the threshold, it divides the TCP message into a plurality of TCP message slices, encapsulates each with a new message header, and sends them out. The send queue is a sequence of descriptors of non-free cache regions used by the sending-end server to send data; for example, the server may send, in order, the data stored in the cache regions described by the queue's buffer descriptors.
For the receiving-end server, after the network card receives the TCP message slices, the LRO function being on, the network card aggregates them and then stores the aggregated TCP message or large TCP message slice into the cache region described by a buffer descriptor of the receive queue using direct memory access (DMA). After one or more TCP messages or TCP message slices have been stored into cache regions by DMA, the network card may notify the network card driver through an interrupt mechanism. The network card driver can assemble the TCP messages or TCP message slices in the cache regions described by the receive queue's buffer descriptors into an SKB structure and send it to the processor, which continues the TCP/IP protocol-stack processing.
8) Other terms
The term "at least one" in the embodiments of the present application includes one or more. "plurality" means two (species) or more than two (species). For example, at least one of A, B and C, comprising: a alone, B alone, a and B in combination, a and C in combination, B and C in combination, and A, B and C in combination. In the description of the present application, "/" indicates an OR meaning, for example, A/B may indicate A or B; "and/or" herein is merely an association describing an associated object, and means that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. "plurality" means two or more than two. In addition, in order to facilitate clear description of technical solutions of the embodiments of the present application, in the embodiments of the present application, terms such as "first" and "second" are used to distinguish the same items or similar items having substantially the same functions and actions. Those skilled in the art will appreciate that the terms "first," "second," etc. do not denote any order or quantity, nor do the terms "first," "second," etc. denote any order or importance.
In the conventional technology, the LRO function is started or stopped uniformly for all receive queues at the granularity of the network card; that is, if one network card supports multiple receive queues, the network card can only start or stop the LRO function of all of those receive queues at the same time. Suppose a network card supports multiple receive queues. It may then happen that, for some of the receive queues, the aggregation process is frequently interrupted because message slices belonging to different data streams interleave, so that no good aggregation effect can be achieved, while for the other receive queues aggregation works well. With the conventional technology, the overall performance of the system cannot be effectively improved whether the LRO function of the network card is started or stopped. As an example of the former case: in the extreme, no two adjacent message slices in a receive queue satisfy the aggregation condition, yet every message slice still enters the aggregation process, which slows the message slices' passage through the network card and affects the overall performance of the system.
The technical solution provided by the embodiments of the present application starts or stops the LRO function at the granularity of the receive queue. For each receive queue of the same network card, the LRO function can be set dynamically according to how aggregable the message slices in that queue are. At any moment, the LRO function may be on for one part of the receive queues and off for another part. Setting whether each receive queue supports the LRO function in this dynamic, adaptive way effectively safeguards the aggregation effect of the receiving-end network card, avoids the frequent interruption of the aggregation process caused by interleaved message slices of different data streams, improves the aggregation effect and data processing efficiency of the receiving-end server's network card, and thereby improves the performance of the whole system.
Next, a method for setting an LRO function provided in the embodiments of the present application will be described in detail with reference to the drawings.
Fig. 5 is a schematic flow chart of a method for setting an LRO function according to an embodiment of the present disclosure. The execution body of the method shown in fig. 5 may be a receiving-end server. In this embodiment of the present application, the LRO function can be started or stopped separately for each receive queue of the network card of the receiving-end server, and the start-stop method is the same for every receive queue. The method shown in fig. 5 comprises the following steps:
S101: When the LRO function of the receive queue is started, count the number of times x that the aggregation process of the receive queue is interrupted within a first preset time period, and the number y of target objects obtained by executing the LRO function for the receive queue within the first preset time period. A target object includes: a message in the receive queue, a message slice in the receive queue that cannot participate in aggregation, and/or a message slice or message obtained after aggregating a plurality of message slices in the receive queue.
If a target object in the receiving queue is a message slice, is not the last message slice in the message to which the target object belongs, and has a length less than or equal to a first threshold, the aggregation process of the receiving queue is interrupted once. Accordingly, the number x of times the aggregation process of the receive queue is interrupted within the first preset time period may be determined. The message slice that causes the aggregation process of the receive queue to be interrupted once may be a message slice that does not participate in aggregation in the receive queue, or may be a message slice that is obtained after participation in aggregation and has a length smaller than or equal to a first threshold.
S102: Determine the probability of interruption of the aggregation process of the receive queue from the number of times x the aggregation process of the receive queue was interrupted within the first preset time period and the number y of target objects obtained by executing the LRO function for the receive queue within the first preset time period. For example, the probability of interruption of the aggregation process of the receive queue may be obtained by dividing x by y.
S103: When the probability of interruption of the aggregation process of the receive queue is greater than or equal to a second threshold, determine start-stop information indicating that the LRO function of the receive queue is to be stopped.
When the probability of interruption of the aggregation process of a receive queue is greater than or equal to the second threshold, the aggregation process of the receive queue can be considered to be interrupted frequently, which suggests severe interleaving among the message slices of the multiple data streams in the receive queue. In this case, because interruptions are frequent, few message slices are actually aggregated. Continuing to execute the LRO function for the receive queue would not deliver the reduction in CPU occupancy that the LRO function is meant to provide; instead, the network card would still have to check one by one whether each received message slice satisfies the aggregation condition, slowing data processing and affecting the overall performance of the system. The LRO function of the receive queue may therefore be stopped at this point.
It will be appreciated that when the probability of interruption of the aggregation process for a receive queue is less than a second threshold, LRO functionality may continue to be performed for the receive queue. At this time, the timing may be restarted, and the probability of interruption of the aggregation process of the receive queue within the first preset time period is counted again, and so on, and when the probability of interruption of the aggregation process of the receive queue obtained within a certain counting period is greater than or equal to the second threshold, the LRO function of the receive queue is stopped.
S101 to S102 may be considered as one possible implementation manner of counting the probability of interruption of the aggregation process of the receive queue. The above-mentioned S101 to S103 may be regarded as one possible implementation manner of determining start-stop information of the LRO function of the receive queue. S104 is a possible implementation manner of setting the LRO function of the receive queue according to the start-stop information.
S104: Stop the LRO function of the receive queue according to the start-stop information.
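Putting S101 to S104 together, the bookkeeping could take the following shape in C; the counters, names, and helper functions are assumptions of this sketch, not part of the embodiments:

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-queue statistics collected over one first preset time period. */
struct lro_stop_stats {
    uint32_t interrupts;      /* x: times the aggregation process was interrupted */
    uint32_t target_objects;  /* y: target objects produced by the LRO function   */
};

/* Called for every target object the LRO function produces (S101).
 * A target object interrupts aggregation once when it is a message
 * slice, is not the last slice of its message, and its length is less
 * than or equal to the first threshold. */
static void account_target_object(struct lro_stop_stats *s,
                                  bool is_slice, bool is_last_slice,
                                  uint32_t len, uint32_t first_threshold)
{
    s->target_objects++;
    if (is_slice && !is_last_slice && len <= first_threshold)
        s->interrupts++;
}

/* At the end of the period (S102/S103): stop the queue's LRO function
 * when the interruption probability x / y reaches the second threshold;
 * otherwise restart the timing and keep counting. */
static bool should_stop_lro(const struct lro_stop_stats *s,
                            double second_threshold)
{
    if (s->target_objects == 0)
        return false;          /* nothing observed; keep the LRO function on */
    return (double)s->interrupts / (double)s->target_objects
           >= second_threshold;
}
```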
Fig. 6 is a schematic flow chart of another method for setting an LRO function according to an embodiment of the present disclosure. The execution body of the method shown in fig. 6 may be a receiving-end server. The method shown in fig. 6 comprises the following steps:
S201: When the LRO function of the receive queue is stopped, count the number of times c that the data stream to which an object in the receive queue belongs is multi-stream within a second preset time period, and the number d of objects belonging to the receive queue within the second preset time period. If the hash values of two adjacent objects in the receive queue differ, indicating that the two objects belong to different data streams, the multi-stream count of the receive queue is increased by 1. "Object" is the collective name for messages and message slices.
For example, assume that the objects of the receive queue within the second preset time period are, in order: message slice 11, message slice 21, message slice 12, and message slice 22, where message slices 11 and 12 belong to data stream 1 and message slices 21 and 22 belong to data stream 2. Then the number of times the data stream to which an object of the receive queue belongs is multi-stream within the second preset time period is 3.
S202: Determine the probability that the data streams to which the objects in the receive queue belong are multi-stream from the number of times c that the data stream to which an object in the receive queue belongs is multi-stream within the second preset time period and the number d of objects belonging to the receive queue within the second preset time period. For example, the probability may be obtained by dividing c by d.
S203: If the probability that the data streams to which the objects in the receive queue belong are multi-stream is less than or equal to a fifth threshold, determine start-stop information indicating that the LRO function of the receive queue is to be started.
When the probability that the data streams to which the objects in a receive queue belong are multi-stream is less than or equal to the fifth threshold, the interleaving among the data streams of the receive queue can be considered mild. In this case, executing the LRO function for the receive queue interrupts the aggregation of message slices belonging to the same data stream only rarely and aggregates many message slices, so the reduction in CPU occupancy brought by the LRO function can be fully realized, improving the overall performance of the system.
It can be understood that when the probability that the data stream to which the object in one receive queue belongs is a multi-stream is greater than the fifth threshold, it may be considered that interleaving among multiple data streams of the receive queue is severe. At this time, timing may be restarted, and the probability that the data stream to which the object in the receive queue belongs is a multi-stream in a second preset time period is counted again, and so on, and when the probability that the data stream to which the object in the receive queue belongs is a multi-stream obtained in a certain counting period is smaller than or equal to a fifth threshold, it is determined to start the LRO function of the receive queue.
S204: Start the LRO function of the receive queue according to the start-stop information.
S201 to S202 may be considered as one possible implementation manner for counting the probability that the data stream to which the object in the receiving queue belongs is a multi-stream. S201 to S203 may be regarded as another possible implementation manner of determining start-stop information of the LRO function of the receive queue, and S204 is another possible implementation manner of setting the LRO function of the receive queue according to the start-stop information.
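Symmetrically to the sketch after the fig. 5 method, the bookkeeping for S201 to S204 might look as follows; again, the names and structure are assumptions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Per-queue statistics collected over one second preset time period. */
struct lro_start_stats {
    uint32_t multi_stream;  /* c: adjacent objects with different hash values */
    uint32_t objects;       /* d: objects seen in the receive queue           */
    uint32_t prev_hash;
    bool     have_prev;
};

/* Called for every object placed in the receive queue while the LRO
 * function is stopped (S201): count the object, and count a multi-stream
 * occurrence when its hash differs from the previous object's hash. */
static void account_object(struct lro_start_stats *s, uint32_t hash)
{
    s->objects++;
    if (s->have_prev && s->prev_hash != hash)
        s->multi_stream++;
    s->prev_hash = hash;
    s->have_prev = true;
}

/* At the end of the period (S202/S203): start the queue's LRO function
 * when the multi-stream probability c / d is at or below the fifth
 * threshold; otherwise restart the timing and keep counting. */
static bool should_start_lro(const struct lro_start_stats *s,
                             double fifth_threshold)
{
    if (s->objects == 0)
        return false;
    return (double)s->multi_stream / (double)s->objects <= fifth_threshold;
}
```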
As the embodiments shown in fig. 5 and fig. 6 illustrate, the setting method of the LRO function provided by the embodiments of the present application sets the LRO function at the granularity of the receive queue. Therefore, by reasonably choosing the conditions for starting or stopping the LRO function of a receive queue, the aggregation effect of message slices is balanced against their processing time, and the overall performance of the system is improved.
Optionally, the first threshold, the second threshold, and the fifth threshold above are all configurable, such as configuration according to the characteristics of the service to be processed, the CPU occupancy, and the aggregation effect of the LRO function. Of course, one or more of these thresholds may also be predefined. This is not limited in the embodiments of the present application.
As an alternative implementation, a plurality of threshold value groups are predefined, each threshold value group comprising a third threshold value and a fourth threshold value. Based on this, before performing S101, the method may further include: selecting a set of threshold values from the plurality of sets of threshold values; and the third threshold value included in the selected threshold value group is taken as the first threshold value, and the fourth threshold value included in the selected threshold value group is taken as the second threshold value.
As another possible implementation, multiple thresholds are predefined. Based on this, before performing S201, the method may further include: one of the plurality of thresholds is selected as a fifth threshold.
As another possible implementation, multiple threshold value groups may be predefined, each threshold value group including a third threshold value, a fourth threshold value, and a sixth threshold value. Based on this, the method may further comprise: selecting one threshold set from a plurality of threshold sets; and a third threshold included in the selected threshold group is set as the first threshold, a fourth threshold included in the threshold group is set as the second threshold, and a sixth threshold included in the threshold group is set as the fifth threshold.
For example, assume that the first threshold is labeled pkt.len.threshold, the second threshold is labeled lro.interrupt.rate, and the fifth threshold is labeled multiple.flow.rate. Then the plurality of threshold sets predefined in the receiving-end server may be as shown in Table 1:
TABLE 1

Threshold set     pkt.len.threshold   lro.interrupt.rate   multiple.flow.rate
Threshold set 1   32K                 50%                  5%
Threshold set 2   16K                 50%                  15%
...               ...                 ...                  ...
Threshold set N   8K                  50%                  10%
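Expressed as data, Table 1 could be encoded as below; the numeric values are the examples from the table, while the selection criterion (an opaque service identifier) is an assumption of this sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* One threshold set from Table 1. */
struct threshold_set {
    uint32_t pkt_len_threshold;   /* first threshold, in bytes */
    double   lro_interrupt_rate;  /* second threshold          */
    double   multiple_flow_rate;  /* fifth threshold           */
};

static const struct threshold_set threshold_sets[] = {
    { 32 * 1024, 0.50, 0.05 },    /* threshold set 1 */
    { 16 * 1024, 0.50, 0.15 },    /* threshold set 2 */
    {  8 * 1024, 0.50, 0.10 },    /* threshold set N */
};

/* Pick a threshold set according to the service the objects in the
 * receive queue belong to; the mapping policy is service-specific. */
static const struct threshold_set *select_thresholds(unsigned service_id)
{
    size_t n = sizeof(threshold_sets) / sizeof(threshold_sets[0]);
    return &threshold_sets[service_id % n];
}
```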
It can be understood that, at run time, the network card can sense the bandwidth capability of its ports and can obtain the configuration information of the timers (for example, the sizes of the first preset time period and the second preset time period). Because the information sensed or obtained by the network card may differ when the services to be processed differ, different threshold sets can be predefined according to the characteristics of different services. When setting the LRO function of a receive queue, a suitable threshold set is selected from the plurality of threshold sets according to the characteristics of the service to which the objects in the receive queue belong, and the LRO function of the receive queue is set according to the selected threshold set, so as to maximize system performance.
Optionally, when predefining the thresholds in each threshold set, the server may consider, but is not limited to, at least one of the following factors: port rate (e.g., 10GE, 25GE, 40GE, 50GE, 100GE, or 200GE), the typical I/O size of the service, and LRO aggregation parameters (e.g., cache region size and timer configuration information).
Predefining a plurality of thresholds and selecting one of them as the first threshold (or the second threshold or the fifth threshold) makes it possible to select different thresholds for different pending services (i.e., the services to which the objects in the receive queue belong), that is, to provide each service a tailored configuration. By predefining reasonable thresholds and selecting among them sensibly, the overall performance of the system is improved.
Next, the setting method of the LRO function described above is explained through a concrete example.
Fig. 7 is a schematic diagram of a data processing method according to an embodiment of the present application. The data processing method includes a setting method of an LRO function. The method shown in fig. 7 comprises the following steps:
s300: when the processor of the sending end server needs to send a message to the receiving end server, the processor sends the message to the network card of the sending end server.
S301: After the network card of the sending-end server receives the message, it generates objects.
Specifically, if the network card determines that the length of the message is greater than the threshold, it divides the message into a plurality of message slices such that the length of each slice is below the threshold, encapsulates a message header for each slice, and uses each header-encapsulated message slice as an object. If the length of the message is less than or equal to the threshold, the message itself is used as an object.
Each object includes header information containing description information of the object, such as information indicating whether the object is a message slice or a message. In addition, for the last message slice of a message, the header information may further include an end flag marking it as the last message slice of the message to which it belongs. The network card of the receiving-end server can later identify from an object's header information whether the object is a message slice or a message, and whether a message slice is the last message slice of its message.
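A minimal C sketch of such header information (the layout is an assumption for illustration, not a wire format from the embodiments):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative description information carried in each object's header. */
struct obj_header {
    bool     is_slice;  /* message slice vs. complete message         */
    bool     end_flag;  /* set on the last message slice of a message */
    uint32_t serial;    /* slice serial number assigned by the sender */
};

/* The receiving-end network card can tell from the header whether an
 * object is a slice and whether that slice ends its message. */
static bool is_last_slice(const struct obj_header *h)
{
    return h->is_slice && h->end_flag;
}
```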
S302: The network card of the sending-end server sends the objects to the receiving-end server.
S303: The network card of the receiving-end server receives the objects through N ports, where N is an integer greater than or equal to 1.
S304: For each object received through the port corresponding to the target PF, the network card of the receiving-end server calculates the object's hash value according to the hash algorithm corresponding to the target PF, and determines the receive queue corresponding to that hash value according to the mapping relationship between the plurality of hash values and the plurality of receive queues. The object is subsequently considered to belong to that receive queue.
The target PF may be any PF supported by the network card of the receiving-end server. One port may support one or more PFs, or, multiple ports may support one PF. The hash algorithms corresponding to different PFs may be the same or different.
The mapping relationship between the multiple hash values and the multiple receive queues may be predefined. The number of receive queues is the number of receive queues corresponding to the target PF, and the number of hash values is the number of hash values produced by the hash algorithm corresponding to the target PF. For example, assuming that the receive queues corresponding to the target PF are receive queues 1 to 16 and that the hash algorithm corresponding to the target PF produces hash values 1 to 32, the correspondence may be that hash value a and hash value 16+a both correspond to receive queue a, where a is an integer and 1 ≤ a ≤ 16.
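A minimal C sketch of this example mapping follows; queue_for_hash is a hypothetical helper that folds the 32 hash values onto the 16 receive queues exactly as in the correspondence above.

```c
/* The example above: hash value a and hash value 16+a map to queue a. */
#include <stdio.h>

static unsigned queue_for_hash(unsigned hash_value)
{
    /* valid for hash values 1..32 and queues numbered 1..16 */
    return (hash_value - 1) % 16 + 1;
}

int main(void)
{
    printf("hash 3  -> queue %u\n", queue_for_hash(3));  /* queue 3 */
    printf("hash 19 -> queue %u\n", queue_for_hash(19)); /* queue 3 */
    return 0;
}
```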
S305: for the target receive queue, the network card of the receiving end server sequentially stores each object belonging to the queue into the buffers described by the buffer descriptors included in the queue. One buffer may store one or more objects, and one object may also be stored across multiple buffers.
If the LRO function of the target receive queue is already on, S306 is performed.
If the LRO function of the target receive queue has stopped, S313 is performed.
S306: for objects belonging to the target receive queue, the network card of the receiving end server performs an LRO function.
For example, for a plurality of consecutive message slices belonging to the target receive queue, aggregation is performed if the aggregation condition is satisfied; otherwise, aggregation is not performed (a sketch follows below). The target receive queue may be any receive queue corresponding to the target PF. Of course, it is also possible that none of the message slices belonging to the target receive queue satisfies the aggregation condition.
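The following C sketch is an illustration only: the actual aggregation condition also involves the buffer size and timer configuration described earlier. Here we assume, purely for illustration, that consecutive slices aggregate while they share a flow hash and the running aggregate stays under a size budget (AGG_LIMIT); all names are hypothetical.

```c
/* Illustration only: a simplified per-slice aggregation decision. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define AGG_LIMIT 65535u /* hypothetical aggregate size budget */

struct lro_session {
    uint32_t flow_hash; /* flow currently being aggregated */
    uint32_t agg_bytes; /* bytes aggregated so far         */
    bool     active;
};

/* Merge the slice into the running aggregate when possible; otherwise
 * start a new aggregate for its flow. Returns true when merged. */
static bool try_aggregate(struct lro_session *s, uint32_t flow_hash,
                          uint32_t slice_len)
{
    if (s->active && s->flow_hash == flow_hash &&
        s->agg_bytes + slice_len <= AGG_LIMIT) {
        s->agg_bytes += slice_len;
        return true;
    }
    s->flow_hash = flow_hash;
    s->agg_bytes = slice_len;
    s->active = true;
    return false;
}

int main(void)
{
    struct lro_session s = { 0 };
    try_aggregate(&s, 7, 1460);               /* starts a new aggregate */
    bool merged = try_aggregate(&s, 7, 1460); /* same flow: merged      */
    printf("merged: %s, total: %u bytes\n",
           merged ? "yes" : "no", (unsigned)s.agg_bytes);
    return 0;
}
```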
The order of S305 and S306 is not limited in this embodiment. For example, S306 may be executed while S305 is being executed: each time the network card determines the receive queue to which an object belongs, it stores the object into a buffer described by a buffer descriptor of that queue and, at the same time or afterwards, determines whether the object needs the aggregation operation.
S307: in the process of executing the LRO function for the target receive queue, the network card of the receiving end device generates completion queue elements (CQEs) for one or more target objects.
After determining each target object, the network card of the receiving end device generates a CQE for that target object. A target object is a message in the target receive queue, a message or large message slice obtained by aggregating a plurality of message slices in the target receive queue, or a message slice in the target receive queue that cannot participate in aggregation.
The network card of the receiving end device can judge and record whether each target object needs to be subjected to aggregation operation. If the current target object is a message, the aggregation operation is not required. If the current target object is a message slice, an aggregation operation is required.
The network card of the receiving end device can also judge and record whether the current message slice is the last message slice in the message to which it belongs, for example, according to whether the header information of the message slice contains an end flag.
The network card of the receiving end device can judge and record the length of each target object.
Optionally, the CQE of the current target object may include information indicating whether the current target object needs the aggregation operation, which may be marked, for example, as Lro_flag. Specifically, if the current target object needs the aggregation operation, Lro_flag = 1; otherwise, Lro_flag = 0.
Further, if the current target object needs the aggregation operation, the CQE may also include information indicating whether the current target object is the last message slice in the message to which it belongs, which may be marked, for example, as push_flag. Specifically, if the current target object is the last message slice in the message to which it belongs, push_flag = 1; otherwise, push_flag = 0.
Furthermore, if the current target object is not the last message slice in the message to which it belongs, the CQE may also include information indicating the length of the current target object, which may be marked, for example, as Lro_length. This information may be the length itself, or an indicator of whether the length is less than or equal to a first preset threshold. In the latter case, if the length of the current target object is less than or equal to the first preset threshold, Lro_length = 1; otherwise, Lro_length = 0.
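Collected together, the three fields could be represented as in the following C sketch; the types and widths are hypothetical, since the patent names the fields but does not fix an encoding.

```c
/* Hypothetical encoding of the three CQE fields described above. */
#include <stdint.h>

struct lro_cqe {
    uint8_t  lro_flag;   /* 1: the object needed the aggregation operation */
    uint8_t  push_flag;  /* 1: last message slice of its message           */
    uint32_t lro_length; /* the object length, or a 0/1 indicator of
                            "length <= first preset threshold"             */
};
```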
S308: after the network card of the receiving end device generates the CQEs of the preset number of target objects, a notification message is sent to the processor of the receiving end device. The specific value of the preset number may be predefined, for example, predefined by a protocol.
S309: after receiving the notification message, the processor of the receiving end device reads the CQEs of the preset number of target objects from the network card of the receiving end device.
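A minimal C sketch of this batched notification follows; PRESET_NUM and the helper names are hypothetical, and the real notification (e.g., an interrupt toward the processor) is reduced to a print.

```c
/* A sketch of the batched CQE notification in S308/S309. */
#include <stdio.h>

#define PRESET_NUM 32u /* hypothetical batch size; the patent only says
                          the value may be predefined, e.g., by a protocol */

static unsigned pending_cqes;

/* Stand-in for signaling the processor that a batch of CQEs is ready. */
static void notify_processor(void)
{
    printf("notify: %u CQEs ready to be read\n", pending_cqes);
    pending_cqes = 0; /* the processor reads the whole batch */
}

/* Called each time the network card generates a CQE. */
static void on_cqe_generated(void)
{
    if (++pending_cqes >= PRESET_NUM)
        notify_processor();
}

int main(void)
{
    for (unsigned i = 0; i < 80; i++)
        on_cqe_generated(); /* notifies after the 32nd and 64th CQE */
    return 0;
}
```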
S310: the processor of the receiving end device parses the CQEs of the target objects read within the first preset time period, and counts, from the parsed information, the probability of interruption of the aggregation process of the target receive queue. If the probability is greater than or equal to the second threshold, the processor determines to stop the LRO function of the target receive queue.
The probability is obtained by dividing the number of interruptions of the aggregation process of the receive queue within the first preset time period by the number of target objects indicated by the CQEs read within that period (i.e., the number of CQEs of target objects read within the first preset time period).
Based on the example in S307, the processor of the receiving end device may obtain the number of interruptions of the aggregation process of the receive queue within the first preset time period in either of the following two modes:
Mode 1: if Lro_length carries the length of the current target object, then when the processor of the receiving end device parses the CQE of the current target object and obtains Lro_flag = 1, push_flag = 0, and a value of Lro_length less than or equal to the first preset threshold, it determines that the current target object caused an interruption of the aggregation process of the target receive queue; otherwise, it determines that the current target object did not cause an interruption.
Mode 2: if Lro_length indicates whether the length of the current target object is less than or equal to the first preset threshold, then when the processor parses the CQE of the current target object and obtains Lro_flag = 1, push_flag = 0, and Lro_length = 1, it determines that the current target object caused an interruption of the aggregation process of the target receive queue; otherwise, it determines that the current target object did not cause an interruption.
Using mode 1 or mode 2, it can be determined whether the target object indicated by each CQE read within the first preset time period caused an interruption of the aggregation process of the target receive queue, and the probability of interruption within the first preset time period can thus be counted.
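The following C sketch walks through mode 2 end to end: it tests each CQE against the Lro_flag/push_flag/Lro_length condition above and divides the number of interruptions by the number of CQEs read. The struct layout and function names are hypothetical.

```c
/* S310 via mode 2: interruption when Lro_flag = 1, push_flag = 0 and
 * Lro_length = 1 (length <= first preset threshold). */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

struct cqe {
    unsigned lro_flag;
    unsigned push_flag;
    unsigned lro_length; /* mode 2: 0/1 indicator */
};

/* A slice that needed aggregation, is not the last slice of its message,
 * and is no longer than the first preset threshold. */
static bool causes_interruption(const struct cqe *c)
{
    return c->lro_flag == 1 && c->push_flag == 0 && c->lro_length == 1;
}

/* interruptions / CQEs read within the first preset time period */
static double interruption_probability(const struct cqe *cqes, size_t n)
{
    size_t hits = 0;
    for (size_t i = 0; i < n; i++)
        if (causes_interruption(&cqes[i]))
            hits++;
    return n ? (double)hits / (double)n : 0.0;
}

int main(void)
{
    struct cqe read_cqes[4] = {
        { 1, 0, 1 }, /* interruption                       */
        { 1, 1, 0 }, /* last slice: no interruption        */
        { 0, 0, 0 }, /* whole message: no interruption     */
        { 1, 0, 0 }, /* long enough slice: no interruption */
    };
    printf("interruption probability: %.2f\n",
           interruption_probability(read_cqes, 4)); /* 0.25 */
    return 0;
}
```

The resulting value would then be compared with the second threshold, as in S310, to decide whether to stop the LRO function of the target receive queue.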
S311: the processor of the receiving end device sends first indication information to the network card, where the first indication information is used to instruct stopping the LRO function of the target receive queue.
S312: the network card of the receiving end device stops the LRO function of the target receive queue according to the first indication information.
After S312 is executed, the procedure for stopping the LRO function of the target receive queue ends.
S313: the network card of the receiving end device generates CQEs for one or more objects in the target receive queue.
The CQE of the current object may include hash value information of the current object.
In one implementation, the hash value information of the current object may be the hash value of the current object itself. In another implementation, it may indicate whether the hash value of the current object has changed relative to the hash value of the previous object.
S314: after the network card of the receiving end device has generated CQEs for a preset number of objects, it sends a notification message to the processor of the receiving end device.
The specific value of the preset number may be predefined, for example, by a protocol.
S315: after receiving the notification message, the processor of the receiving end device reads the CQEs of the preset number of objects from the network card of the receiving end device.
S316: the processor of the receiving end device parses the CQEs of the objects read within the second preset time period, and counts, from the parsed information, the probability that the data stream to which the objects in the receive queue belong is a multi-stream. If the probability is less than or equal to the fifth threshold, the processor of the receiving end device determines to start the LRO function of the target receive queue. The probability is obtained by dividing the number of times, counted within the second preset time period, that the data stream to which the objects in the target receive queue belong is a multi-stream by the number of objects in the target receive queue within that period.
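A minimal C sketch of this statistic follows: a multi-stream event is counted whenever two adjacent objects carry different hash values, and the count is divided by the number of objects observed within the period. The function and variable names are hypothetical.

```c
/* The multi-stream statistic of S316. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

static double multistream_probability(const uint32_t *hashes, size_t n)
{
    if (n == 0)
        return 0.0;
    size_t changes = 0;
    for (size_t i = 1; i < n; i++)
        if (hashes[i] != hashes[i - 1]) /* adjacent objects differ */
            changes++;
    return (double)changes / (double)n;
}

int main(void)
{
    uint32_t hashes[6] = { 5, 5, 5, 9, 9, 5 }; /* two changes */
    printf("multi-stream probability: %.2f\n",
           multistream_probability(hashes, 6)); /* 2/6 = 0.33 */
    return 0;
}
```

The result would then be compared with the fifth threshold to decide whether to start the LRO function of the target receive queue.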
S317: the processor of the receiving end device sends second indication information to the network card of the receiving end device, where the second indication information is used to instruct starting the LRO function of the target receive queue.
S318: the network card of the receiving end device starts the LRO function of the target receive queue according to the second indication information.
After S318 is executed, the procedure for starting the LRO function of the target receive queue ends.
It should be noted that, in a specific implementation, the steps executed by the processor of the receiving end device in S303 to S318 may all instead be executed by the network card of the receiving end device; in that case, the interaction steps between the network card and the processor in S303 to S318 need not be executed, which yields a new embodiment. For brevity, the details are not repeated here.
The solutions provided in the embodiments of this application have been introduced above mainly from the perspective of the method. To implement the above functions, the device includes corresponding hardware structures and/or software modules for performing the respective functions. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functions differently for each particular application, but such implementations should not be considered to go beyond the scope of this application.
In the embodiments of this application, the setting apparatus of the LRO function may be divided into functional modules according to the above method; for example, each functional module may correspond to one function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is schematic and is only a logical function division; other division manners are possible in actual implementation.
The method for setting the LRO function provided by the embodiment of the present application is described in detail above with reference to fig. 5 to 7, and the setting device, the network card, and the server for the LRO function provided by the embodiment of the present application are described below with reference to fig. 8 to 10.
Fig. 8 is a schematic structural diagram of an LRO function setting device 80 according to an embodiment of the present application. The apparatus 80 may be used to perform the setting method of the LRO function shown in any one of fig. 5 to 7. The apparatus 80 may include a determining unit 801 and a setting unit 802. The determining unit 801 is configured to determine start-stop information of an LRO function of a receive queue, where the start-stop information is used to instruct starting or stopping the LRO function of the receive queue. The setting unit 802 is configured to set the LRO function of the receive queue according to the start-stop information. For example, in conjunction with fig. 5, the determining unit 801 may be used to perform S101 to S102, and the setting unit 802 may be used to perform S103. For another example, in conjunction with fig. 6, the determining unit 801 may be used to perform S201 to S202, and the setting unit 802 may be used to perform S203.
In a possible implementation manner, the setting unit 802 is specifically configured to start the LRO function of the receive queue according to the start-stop information, where starting the LRO function of the receive queue comprises starting to aggregate a plurality of message slices belonging to the same data flow in the receive queue.
In a possible implementation manner, the determining unit 801 is specifically configured to: count the probability of interruption of the aggregation process of the receive queue while the LRO function of the receive queue is on, where, among the target objects obtained after the LRO function is executed for the receive queue, if a target object is a message slice, is not the last message slice in the message to which it belongs, and has a length less than or equal to a first threshold, the aggregation process of the receive queue is interrupted once; and, when the probability of interruption of the aggregation process of the receive queue is greater than or equal to a second threshold, determine start-stop information used to instruct stopping the LRO function of the receive queue. For example, in conjunction with fig. 5, the determining unit 801 may be used to perform S101 to S102.
In a possible implementation manner, the determining unit 801 is specifically configured to: counting the times of interruption of the aggregation process of the receiving queue in a first preset time period and the number of target objects obtained after the LRO function is executed aiming at the receiving queue in the first preset time period; and determining the probability of interruption of the aggregation process of the receiving queue according to the times of interruption of the aggregation process of the receiving queue in a first preset time period and the number of target objects obtained after the LRO function is executed on the receiving queue in the first preset time period. For example, in conjunction with fig. 5, the determination unit 801 may be configured to perform S101.
In a possible implementation manner, the apparatus 80 further includes a selecting unit 803 configured to: selecting a set of threshold values from a plurality of sets of threshold values, each set of threshold values comprising a third threshold value and a fourth threshold value; the third threshold included in the selected set of thresholds is taken as the first threshold and the fourth threshold included in the selected set of thresholds is taken as the second threshold.
In a possible implementation manner, the determining unit 801 is specifically configured to: under the condition that the LRO function of the receiving queue is stopped, counting the probability that the data stream to which the object in the receiving queue belongs is multi-stream, wherein the object comprises a message and/or a message slice; and if the counted probability that the data stream to which the object in the receiving queue belongs is a multi-stream is smaller than or equal to a fifth threshold, determining start-stop information for indicating to start the LRO function of the receiving queue. For example, in conjunction with fig. 6, the determination unit 801 may be used to perform S201 to S202.
In a possible implementation manner, the determining unit 801 is specifically configured to: count, within a second preset time period, the number of times that the data stream to which the objects in the receive queue belong is a multi-stream and the number of objects in the receive queue within that period, where, if the hash values of two adjacent objects in the receive queue are different, the number of multi-stream occurrences is increased by 1; and determine the probability that the data stream to which the objects in the receive queue belong is a multi-stream according to the number of multi-stream occurrences within the second preset time period and the number of objects in the receive queue within the second preset time period. As another example, in conjunction with fig. 6, the determining unit 801 may be used to perform S201.
In a possible implementation manner, the apparatus 80 further includes a selecting unit 803, configured to select one threshold from the multiple thresholds, and use the selected threshold as a fifth threshold.
It should be understood that the apparatus 80 of the embodiments of this application may be implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), where the PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. When the methods shown in fig. 5 and fig. 6 are implemented by software, the apparatus 80 and its modules may also be software modules.
For explanations of related content and descriptions of beneficial effects in this embodiment, refer to the above method embodiments.
Fig. 9 is a schematic diagram of a hardware structure of a network card 90 according to an embodiment of the present application. As shown, the network card 90 includes at least one processor 901, communication lines 902, a memory 903, and a communication interface 904. The communication lines 902 may include a path for transferring information between the at least one processor 901, the memory 903, and the communication interface 904. The communication interface 904 is used for the network card 90 to communicate with other devices or apparatuses, and may include a wired transceiver or a wireless transceiver. The wireless transceiver may include a communication chip; the at least one processor 901 and the communication chip may be integrated together or provided separately. The memory 903 is used to store the computer-executable instructions for implementing the solutions of this application, and their execution is controlled by the processor 901. The processor 901 is configured to execute the computer-executable instructions stored in the memory 903, so as to implement the setting method of the LRO function provided by the above embodiments of this application. For explanations of related content and descriptions of beneficial effects, refer to the above method embodiments.
Fig. 10 is a schematic structural diagram of a server 1000 according to an embodiment of the present application. As shown, server 1000 includes at least one processor 1001, communication lines 1002, memory 1003, network card 1004, and communication interface 1005. Communication interface 1005 may include a wired transceiver or a wireless transceiver. The wireless transceiver may include a communication chip. At least one processor 1001 and the communication chip may be integrated together or may be provided separately.
The processor 1001 may be a general-purpose CPU, or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. A general-purpose processor may be a microprocessor or any conventional processor. The processor 1001 may also be a graphics processing unit (GPU), a neural network processing unit (NPU), a microprocessor, or one or more integrated circuits for controlling the execution of the programs of this application.
The communication lines 1002 may include a path for transferring information between the aforementioned components, such as the processor 1001, the memory 1003, the network card 1004, and the communication interface 1005.
The memory 1003 may be, but is not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer. The memory 1003 may be separate and coupled to the processor 1001 via the communication lines 1002, or may be integrated with the processor 1001. The memory 1003 provided in the embodiments of this application is generally non-volatile. The memory 1003 is used to store the computer-executable instructions for implementing the solutions of this application, and their execution is controlled by the processor 1001. The processor 1001 is configured to execute the computer-executable instructions stored in the memory 1003, so as to implement the setting method of the LRO function provided by the above embodiments of this application.
The structure of the network card 1004 can refer to fig. 9 described above, and will not be described here.
Communication interface 1005, which may be any transceiver or the like, is used for the server 1000 to communicate with other devices.
Alternatively, the computer-executable instructions in the embodiments of the present application may also be referred to as application program code.
As one example, processor 1001 may include one or more CPUs. As one example, the server 1000 may include multiple processors. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
The server 1000 may be a general-purpose device or a special-purpose device. For example, the server 1000 may be an X86-based or ARM-based server, or another dedicated server such as a policy control and charging (PCC) server. The type of the server 1000 is not limited in the embodiments of this application. ARM is an abbreviation of advanced RISC machines, where RISC is an abbreviation of reduced instruction set computer.
An embodiment of this application further provides a communication system, which may include the server 1000 serving as a receiving end server. The communication system further includes a sending end server, configured to send objects to the receiving end server so that the receiving end server performs the above setting method of the LRO function.
In the above embodiments, the implementation may be realized wholly or partly by software, hardware, firmware, or any combination thereof. When software is used, the implementation may take the form, wholly or partly, of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)).
The foregoing is only illustrative of the present application. Those skilled in the art can conceive of changes or substitutions based on the specific embodiments provided in the present application, and all such changes or substitutions are intended to be included within the scope of the present application.

Claims (21)

1. A method for setting a Large Receive Offload (LRO) function, the method comprising:
determining start-stop information of an LRO function of the network card according to an LRO starting condition, wherein the LRO starting condition is used for indicating a rule for starting or closing the LRO function of a receiving queue in the network card;
and setting the LRO function of the network card according to the start-stop information.
2. The method of claim 1, wherein the network card comprises a plurality of receiving queues, and each receiving queue can set the LRO turning-on condition.
3. The method according to claim 1 or 2, wherein the start-stop information includes an LRO on function and an LRO off function, the LRO on function is configured to instruct to start aggregation of multiple message slices belonging to the same data flow in the first queue, and the LRO off function is configured to instruct to close aggregation of multiple message slices belonging to the same data flow.
4. The method according to any one of claims 1 to 3, wherein the determining start-stop information of the LRO function of the network card according to the LRO start-up condition includes:
obtaining the interruption probability of the aggregation process of a first receiving queue in the network card, wherein the first receiving queue is any one of the plurality of receiving queues;
and determining start-stop information of the first receiving queue according to the LRO starting condition and the interruption probability.
5. The method of claim 4, wherein determining start-stop information of the first receive queue according to the LRO turn-on condition and the outage probability comprises:
under the condition that the LRO function of the receiving queue is opened, counting the probability of interruption of the aggregation process of the receiving queue; wherein, among target objects obtained after the LRO function is executed for the receiving queue, if one target object is a message slice, is not the last message slice in the message to which the target object belongs, and has a length less than or equal to a first threshold value, the aggregation process of the receiving queue is interrupted once;
and when the probability of interruption of the aggregation process of the receiving queue is greater than or equal to a second threshold value, determining that the start-stop information is used for indicating to stop the LRO function of the receiving queue.
6. The method of claim 5, wherein the counting the probability of interruption of the aggregation process of the receive queue comprises:
counting the times of interruption of the aggregation process of the receiving queues in a first preset time period and the number of target objects obtained after the LRO function is executed on the receiving queues in the first preset time period;
and determining the probability of interruption of the aggregation process of the receiving queue according to the times of interruption of the aggregation process of the receiving queue in the first preset time period and the number of target objects obtained after the LRO function is executed on the receiving queue in the first preset time period.
7. The method of claim 5 or 6, further comprising:
selecting a set of threshold values from a plurality of sets of threshold values, each set of threshold values comprising a third threshold value and a fourth threshold value;
the third threshold value included in the selected threshold value group is taken as the first threshold value, and the fourth threshold value included in the selected threshold value group is taken as the second threshold value.
8. The method of claim 4, wherein determining start-stop information of the first receive queue according to the LRO turn-on condition and the outage probability comprises:
under the condition that the LRO function of the receiving queue is stopped, counting the probability that the data stream to which the object in the receiving queue belongs is a multi-stream, wherein the object comprises a message and/or a message slice;
and if the counted probability that the data stream to which the object in the receiving queue belongs is a multi-stream is smaller than or equal to a fifth threshold, determining that the start-stop information is used for indicating to start the LRO function of the receiving queue.
9. The method of claim 8, wherein the counting the probability that the data stream to which the object in the receive queue belongs is a multi-stream comprises:
counting the number of times that the data stream to which the objects in the receiving queue belong is a multi-stream within the second preset time period and the number of the objects in the receiving queue within the second preset time period; wherein, if the hash values of two adjacent objects in the receiving queue are different, the number of times that the data stream to which the objects in the receiving queue belong is a multi-stream is increased by 1;
determining the probability that the data stream to which the object in the receive queue belongs is a multi-stream according to the number of times that the data stream to which the object in the receive queue belongs is a multi-stream in the second preset time period and the number of the objects in the receive queue in the second preset time period.
10. The method according to claim 8 or 9, characterized in that the method further comprises:
selecting one threshold value from a plurality of threshold values, and taking the selected threshold value as the fifth threshold value.
11. A setup device for large receive offload LRO functionality, the device comprising:
a determining unit, configured to determine start-stop information of an LRO function of the network card according to an LRO starting condition, wherein the LRO starting condition is used for indicating a rule for starting or closing the LRO function of a receiving queue in the network card;
and the setting unit is used for setting the LRO function of the network card according to the start-stop information.
12. The apparatus of claim 11, wherein the network card comprises a plurality of receive queues, and each receive queue is capable of setting the LRO enable condition.
13. The apparatus of claim 11 or 12, wherein the start-stop information comprises an open LRO function and a close LRO function, the open LRO function is configured to instruct to open aggregation of multiple message slices belonging to the same data flow in the first queue, and the close LRO function is configured to instruct to close aggregation of multiple message slices belonging to the same data flow.
14. The apparatus according to any one of claims 11 to 13,
the setting unit is further configured to obtain an interruption probability of an aggregation process of a first receiving queue in the network card, where the first receiving queue is any one of the plurality of receiving queues; and determining start-stop information of the first receiving queue according to the LRO starting condition and the interruption probability.
15. The apparatus of claim 14,
the determining unit is further configured to count the probability of interruption of the aggregation process of the receive queue when the LRO function of the receive queue is turned on, wherein, among target objects obtained after the LRO function is executed for the receive queue, if one target object is a message slice, is not the last message slice in the message to which the target object belongs, and has a length less than or equal to a first threshold value, the aggregation process of the receive queue is interrupted once; and, when the probability of interruption of the aggregation process of the receive queue is greater than or equal to a second threshold value, determine that the start-stop information is used for indicating to stop the LRO function of the receive queue.
16. The apparatus of claim 15,
the determining unit is further configured to count the number of times of interruption of the aggregation process of the receive queue within a first preset time period, and the number of target objects obtained after an LRO function is executed for the receive queue within the first preset time period; and determining the probability of interruption of the aggregation process of the receiving queue according to the times of interruption of the aggregation process of the receiving queue in the first preset time period and the number of target objects obtained after the LRO function is executed on the receiving queue in the first preset time period.
17. The apparatus of claim 15 or 16,
the determining unit is further configured to select a threshold set from a plurality of threshold sets, each threshold set including a third threshold and a fourth threshold; the third threshold value included in the selected threshold value group is taken as the first threshold value, and the fourth threshold value included in the selected threshold value group is taken as the second threshold value.
18. The apparatus of claim 14,
the determining unit is further configured to, when the LRO function of the receive queue is stopped, count a probability that a data stream to which an object in the receive queue belongs is a multi-stream, where the object includes a packet and/or a packet slice; and if the counted probability that the data stream to which the object in the receiving queue belongs is a multi-stream is smaller than or equal to a fifth threshold, determining that the start-stop information is used for indicating to start the LRO function of the receiving queue.
19. The apparatus of claim 18,
the determining unit is further configured to count the number of times that the data stream to which the objects in the receive queue belong is a multi-stream in the second preset time period and the number of objects in the receive queue in the second preset time period, wherein, if the hash values of two adjacent objects in the receive queue are different, the number of times that the data stream to which the objects in the receive queue belong is a multi-stream is increased by 1; and determine the probability that the data stream to which the objects in the receive queue belong is a multi-stream according to the number of times that the data stream to which the objects in the receive queue belong is a multi-stream in the second preset time period and the number of objects in the receive queue in the second preset time period.
20. The apparatus according to claim 18 or 19, wherein the apparatus further comprises a selecting unit configured to select one threshold value from a plurality of threshold values, and use the selected threshold value as the fifth threshold value.
21. A setting device for a large receive offload (LRO) function, the setting device comprising a memory and a processor, wherein the memory is configured to store computer-executable instructions, and the processor is configured to invoke and execute the computer-executable instructions, so that the setting device implements the operational steps of the method of any one of claims 1 to 8.
CN202110527602.2A 2018-11-14 2018-11-14 Method and device for setting large-scale receiving and unloading functions Active CN113411262B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110527602.2A CN113411262B (en) 2018-11-14 2018-11-14 Method and device for setting large-scale receiving and unloading functions

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811356753.0A CN109688063B (en) 2018-11-14 2018-11-14 Method and device for setting large receiving and unloading function
CN202110527602.2A CN113411262B (en) 2018-11-14 2018-11-14 Method and device for setting large-scale receiving and unloading functions

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201811356753.0A Division CN109688063B (en) 2018-11-14 2018-11-14 Method and device for setting large receiving and unloading function

Publications (2)

Publication Number Publication Date
CN113411262A true CN113411262A (en) 2021-09-17
CN113411262B CN113411262B (en) 2023-09-05

Family

ID=66184666

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110527602.2A Active CN113411262B (en) 2018-11-14 2018-11-14 Method and device for setting large-scale receiving and unloading functions
CN201811356753.0A Active CN109688063B (en) 2018-11-14 2018-11-14 Method and device for setting large receiving and unloading function

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811356753.0A Active CN109688063B (en) 2018-11-14 2018-11-14 Method and device for setting large receiving and unloading function

Country Status (1)

Country Link
CN (2) CN113411262B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111694781A (en) * 2020-04-21 2020-09-22 恒信大友(北京)科技有限公司 ARM main control board based on data acquisition system
CN112214968A (en) * 2020-10-12 2021-01-12 中国民航信息网络股份有限公司 Message conversion method and device and electronic equipment
CN115733897A (en) * 2021-08-27 2023-03-03 华为技术有限公司 Data processing method and device
CN115665073B (en) * 2022-12-06 2023-04-07 江苏为是科技有限公司 Message processing method and device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6529911B1 (en) * 1998-05-27 2003-03-04 Thomas C. Mielenhausen Data processing system and method for organizing, analyzing, recording, storing and reporting research results
US20090232137A1 (en) * 2008-03-12 2009-09-17 Dell Products L.P. System and Method for Enhancing TCP Large Send and Large Receive Offload Performance
CN101841545A (en) * 2010-05-14 2010-09-22 中国科学院计算技术研究所 TCP stream restructuring and/or packetizing method and device
US8386644B1 (en) * 2010-10-11 2013-02-26 Qlogic, Corporation Systems and methods for efficiently processing large data segments
EP2843891A1 (en) * 2013-08-26 2015-03-04 VMWare, Inc. Traffic and load aware dynamic queue management
CN108337188A (en) * 2013-08-26 2018-07-27 Vm维尔股份有限公司 The traffic and the management of Load-aware dynamic queue
US20150263974A1 (en) * 2014-03-11 2015-09-17 Vmware, Inc. Large receive offload for virtual machines
US20150261556A1 (en) * 2014-03-11 2015-09-17 Vmware, Inc. Large receive offload for virtual machines
JP2016012801A (en) * 2014-06-27 2016-01-21 富士通株式会社 Communication apparatus, communication system, and communication apparatus control method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114448916A (en) * 2021-12-24 2022-05-06 锐捷网络股份有限公司 TIPC message processing method, device, equipment and storage medium
CN117354254A (en) * 2023-10-17 2024-01-05 无锡众星微系统技术有限公司 Combined interrupt control method and device based on LRO timeout and interrupt ITR timeout
CN117354254B (en) * 2023-10-17 2024-04-02 无锡众星微系统技术有限公司 Combined interrupt control method and device based on LRO timeout and interrupt ITR timeout

Also Published As

Publication number Publication date
CN113411262B (en) 2023-09-05
CN109688063B (en) 2021-05-18
CN109688063A (en) 2019-04-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211223

Address after: 450046 Floor 9, building 1, Zhengshang Boya Plaza, Longzihu wisdom Island, Zhengdong New Area, Zhengzhou City, Henan Province

Applicant after: Super fusion Digital Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Applicant before: HUAWEI TECHNOLOGIES Co.,Ltd.

GR01 Patent grant