CN110708260A - Data packet transmission method and related device

Info

Publication number: CN110708260A
Application number: CN201911107825.2A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 江勇, 祝轲轲, 李清, 宛考, 李伟超, 吴宇
Applicant/Assignee: Shenzhen International Graduate School of Tsinghua University; Peng Cheng Laboratory
Prior art keywords: data packet, cache region, priority, target attribute
Legal status: Pending

Classifications

    • H04L 47/6275: Queue scheduling characterised by scheduling criteria for service slots or service orders, based on priority (traffic control in data switching networks)
    • G06F 18/24: Pattern recognition; classification techniques
    • G06N 3/045: Neural networks; combinations of networks
    • H04L 67/568: Provisioning of proxy services; storing data temporarily at an intermediate stage, e.g. caching

Abstract

The application provides a data packet transmission method applied to a network device, wherein the network device is provided with a preset first cache region for caching data packets. The method comprises: after receiving a data packet, determining an attribute value of the data packet in terms of a target attribute, wherein the target attribute is an attribute related to time delay; determining the sending priority of the data packet in the first cache region according to the attribute value; storing the data packet into the first cache region according to its sending priority; and sending the data packets in the first cache region out in order of sending priority. In addition, the application also provides a data packet transmission method applied to a terminal device, a network device, a terminal device, a storage medium, and a computer program product.

Description

Data packet transmission method and related device
Technical Field
The present application relates to the field of communications, and in particular, to a data packet transmission method and related apparatus.
Background
With the development of the Internet, networks matter more and more in people's lives, and the importance of the data center as a key site of network information transmission is self-evident. A data center network is the network deployed inside a data center; it handles the information exchange among the servers of the data center and is therefore an essential component of the data center. Optimizing transmission in a data center network is thus of great significance.
Data transmission in a data center network is generally carried over links between switches. To keep transmission delay low, current data center networks usually adopt load balancing: traffic in the data center is spread as evenly as possible over equal-cost multiple paths through different switches, which reduces the load on any single link, avoids packet loss as far as possible, and improves performance. Load balancing methods fall into two schemes, central scheduling and distributed scheduling. A centrally scheduled scheme collects global information from the switches and servers to make a globally optimal load balancing decision. A distributed scheme mostly relies only on local information of the switches and servers; it responds quickly, is easy to deploy, and scales better because it is not constrained by a central structure, but its effect may not match that of the centralized scheme.
The core of the load balancing method lies in controlling the transmission path of the traffic; when the link is fixed, it cannot meet the low-delay transmission requirement of part of the traffic.
Disclosure of Invention
A first aspect of the embodiments of the present application provides a data packet transmission method, which is applied to a network device, where the network device has a preset first cache region, and the first cache region is used for caching a data packet; the method comprises the following steps:
after receiving a data packet, determining an attribute value of the data packet in terms of a target attribute, wherein the target attribute is an attribute related to time delay;
determining the sending priority of the data packet in the first cache region according to the attribute value of the data packet;
storing the data packet into the first cache region according to the sending priority of the data packet, wherein the data packets in the first cache region are sent out in order of sending priority.
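By way of illustration only, the following minimal Python sketch shows the flow of the first aspect: packets enter a priority-ordered first cache region and leave in sending-priority order. The names (PriorityBuffer, on_packet) and the identity mapping from attribute value to priority are assumptions of this sketch, not part of the application.

```python
import heapq
import itertools

class PriorityBuffer:
    """Sketch of the first cache region: packets with a smaller
    sending-priority value leave first; ties keep arrival order."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # preserves FIFO order within a priority

    def store(self, packet, sending_priority):
        heapq.heappush(self._heap, (sending_priority, next(self._counter), packet))

    def send_next(self):
        # Packets in the first cache region are sent out in priority order.
        return heapq.heappop(self._heap)[2] if self._heap else None

def on_packet(buffer, packet, attribute_value):
    # The delay-related attribute value maps to a sending priority; the
    # identity mapping used here is an assumption of the sketch.
    buffer.store(packet, attribute_value)
```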
Based on the first aspect, an embodiment of the present application further provides a second implementation manner of the first aspect:
the data packet carries an attribute value of the target attribute; the determining an attribute value of the data packet in terms of a target attribute comprises:
and extracting the attribute value of the target attribute carried by the data packet from the data packet.
Based on the first aspect, the embodiments of the present application further provide a third implementation manner of the first aspect:
the target attributes comprise a first target attribute and a second target attribute, and the priority level of the first target attribute is higher than that of the second target attribute; correspondingly, the first cache region comprises a first priority queue corresponding to a preset attribute value of a first target attribute, and the first priority queue comprises a second priority queue corresponding to a preset attribute value of a second target attribute.
Based on the third implementation manner of the first aspect, the embodiments of the present application further provide a fourth implementation manner of the first aspect:
determining the sending priority of the data packet in the first cache region according to the attribute value of the data packet, including:
determining, according to a first attribute value of the data packet in terms of a first target attribute, a target first priority queue corresponding to the first attribute value among the first priority queues of the first cache region;
determining, according to a second attribute value of the data packet in terms of a second target attribute, a target second priority queue corresponding to the second attribute value among the second priority queues included in the target first priority queue;
and determining the priority of the target second priority queue as the sending priority of the data packet in the first cache region.
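Read this way, the two priority levels can be flattened into a single sending priority. The sketch below is one possible reading, with the flattening rule an assumption of this description, not a rule stated by the application:

```python
def sending_priority(first_attr_level, second_attr_level, num_second_levels):
    """Flatten the (first target attribute, second target attribute) pair
    into one sending priority; smaller values are served earlier. Assumes
    every first priority queue holds num_second_levels second priority queues."""
    return first_attr_level * num_second_levels + second_attr_level

# e.g. first level 1 with second level 2, given 4 second levels -> priority 6
```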
Based on the fourth implementation manner of the first aspect, the present application provides a fifth implementation manner of the first aspect:
storing the data packet into the first buffer area according to the sending priority of the data packet, including:
and storing the data packet to the tail of the target second priority queue corresponding to the sending priority in the first buffer area.
Based on the first aspect, an embodiment of the present application further provides a sixth implementation manner of the first aspect:
the target attributes include: a traffic type level and/or a traffic size level.
Based on the first aspect, an embodiment of the present application further provides a seventh implementation manner of the first aspect:
the network equipment also comprises a second cache region; the method further comprises the following steps:
periodically acquiring the storage state of the first cache region;
if the storage state of the first cache region meets a preset full condition, transferring a part of data packets in the first cache region to the second cache region;
and if the storage state of the first cache region meets a preset idle condition, playing back a part of data packets in the second cache region to the first cache region.
Based on the seventh implementation manner of the first aspect, the present application provides an eighth implementation manner of the first aspect:
transferring a part of the data packets in the first cache region to the second cache region, including:
and transferring the data packets in the first cache region to the second cache region according to the sequence of the sending priorities from low to high until the storage state of the first cache region meets a preset loose condition or no data packet with the sending priority lower than a preset priority threshold exists in the first cache region.
Based on the seventh implementation manner of the first aspect, the present application provides a ninth implementation manner of the first aspect:
playing back a part of data packets in the second cache region to the first cache region, including:
and playing back the data packets in the second cache region to the first cache region in order of transfer time from earliest to latest, until the storage state of the first cache region meets a preset full condition or no data packet remains in the second cache region.
Based on the seventh implementation manner of the first aspect, the present application provides a tenth implementation manner of the first aspect:
after receiving a promotion packet sent by a terminal device, determining the target data packet urged by the promotion packet;
if the target data packet exists in the second cache region, the target data packet is played back to the first cache region, and the sending priority of the target data packet in the first cache region is set as the highest sending priority.
Based on the first aspect, the present application provides an eleventh implementation manner of the first aspect:
The first cache region comprises two sub-regions: a first sub-region for storing data packets whose sending priority reaches a preset priority threshold, and a second sub-region for storing data packets whose sending priority does not reach the preset priority threshold.
Based on the eleventh implementation manner of the first aspect, the present application provides a twelfth implementation manner of the first aspect:
before storing the data packet into the first buffer area according to the sending priority of the data packet, the method further includes:
determining a target sub-area to which the data packet belongs in the two sub-areas of the first cache area according to the sending priority of the data packet;
and if the storage state of the target sub-region meets a preset congestion condition, adding an explicit congestion feedback mark to the data packet.
A second aspect of the embodiments of the present application provides a method for transmitting a data packet, where the method includes:
obtaining a data packet to be sent to network equipment;
determining an attribute value of the data packet in terms of a target attribute by using a pre-constructed neural network classification model, wherein the attribute value is used by the network device to determine the sending priority of the data packet in a first cache region and to store the data packet into the first cache region according to that sending priority, the data packets in the first cache region being sent out in order of sending priority;
and after the attribute value of the target attribute is added into the data packet, sending the data packet to the network equipment.
Based on the second aspect, the embodiments of the present application further provide a first implementation manner of the second aspect:
The target attributes include a traffic type level and/or a traffic size level; accordingly, the neural network classification models include a first neural network classification model for determining the traffic type level and/or a second neural network classification model for determining the traffic size level.
Based on the first implementation manner of the second aspect, the present application provides a second implementation manner of the second aspect:
When the target attribute comprises a traffic type level and a traffic size level, the second neural network classification model comprises a plurality of neural network classification models, different models corresponding to different traffic type levels.
The traffic threshold parameters in the second neural network classification model are determined as follows:
determining the target traffic type level corresponding to the second neural network classification model;
obtaining the cumulative traffic distribution map corresponding to the target traffic type level, wherein the abscissa of the map is the size of a data flow transmitted in the data center to which the terminal device belongs, and the ordinate is the cumulative traffic distribution percentage;
calculating the target traffic distribution percentage corresponding to the target traffic type level;
and, in the cumulative traffic distribution map, determining the flow size corresponding to the target traffic distribution percentage, and taking that flow size as a traffic threshold parameter of the second neural network classification model.
Based on the second aspect, the present application provides a third implementation manner of the second aspect:
the neural network classification model is trained by using a preset number of data packets in the data stream sent by the terminal equipment.
Based on the second aspect, the present application provides a fourth implementation manner of the second aspect:
after sending the data packet to the network device, the method further includes:
starting timing from the transmission time point of the data packet;
if the timing duration reaches a preset duration and no acknowledgement of the data packet has been received, sending a promotion packet to the network device, wherein the preset duration is set based on a timeout retransmission duration of the data packet.
A third aspect of the embodiments of the present application provides a network device having the function of implementing the network device in the first aspect. The function can be realized by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the function described above.
A fourth aspect of the embodiments of the present application provides a terminal device having the function of implementing the terminal device in the second aspect. The function can be realized by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the function described above.
A fifth aspect of the embodiments of the present application provides a computer storage medium for storing the computer software instructions used by the above network device or terminal device, including a program designed to execute the method of the network device or the terminal device.
A sixth aspect of the embodiments of the present application provides a computer program product, where the computer program product includes computer software instructions that can be loaded by a processor to implement the flow of the data packet transmission method according to any one of the first aspect or the second aspect.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantage: when the network device obtains a data packet, it determines delay-related information of the packet and orders packets according to that information, so that packets are transmitted in accordance with their delay requirements and the delay of the packet transmission process can be reduced even when the link is fixed.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a network framework diagram of a data center according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a network transmission method according to an embodiment of the present application;
fig. 3 is another schematic flow chart of a network transmission method according to an embodiment of the present application;
fig. 4 is another schematic flow chart of a network transmission method according to an embodiment of the present application;
FIG. 5 is a diagram illustrating a storage manner of data packets according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of a network device in an embodiment of the present application;
fig. 7 is a schematic structural diagram of a terminal device in an embodiment of the present application;
fig. 8 is a schematic structural diagram of a network device in the embodiment of the present application;
fig. 9 is another schematic structural diagram of the terminal device in the embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The application provides a data packet transmission method which can reduce the time delay of a data packet transmission process under the condition that a link is fixed.
The embodiments of the present application may be applied to a data center network architecture as shown in fig. 1, which comprises switches 101 to 104 and servers 201 to 202.
In this embodiment, the network device is exemplified as a switch. It should be noted that the network device may be any other device capable of transmitting data between terminals, such as a network element or a data hub, which is not limited herein.
In this embodiment, the terminal device is exemplified as a server. It should be noted that the terminal device may be any other device capable of sending data to other devices, such as a mobile phone or a personal computer, which is not limited herein.
In the embodiment of the present application, only four switches 101, 102, 103, and 104 and two servers 201 and 202 are taken as an example for description, and in an actual application, there may be more servers and switches.
Servers may access the switches in different ways: several servers may access one switch, or one server may access one switch.
The switches are connected to one another by links, and the network they form may be a fat-tree or a leaf-spine network; in any case, during data transmission one terminal can always transmit data to another terminal through one or several switches.
The servers 201 and 202 are mainly responsible for generating data and assembling it into data packets, and the switches 101 to 104 are responsible for transmitting data between the servers. The links between them form a data center network, i.e. the network deployed inside the data center; it handles the information exchange between the servers in the data center as well as the data entering and leaving the data center, and is therefore an essential component of the data center.
The following describes a data packet transmission method in the embodiment of the present application with reference to the network framework of fig. 1:
referring to fig. 2, an embodiment of the present application shows a flow of a data packet transmission method applied to a terminal device, which specifically includes steps 201 and 203.
201. A data packet to be sent to the network device is obtained.
When the terminal device is running, it generates data that needs to be exchanged with other terminals, such as search request data and download request data. The terminal device assembles this data into data packets carrying destination address information and sends them to the other terminals through the network device.
202. Determining an attribute value of the data packet in terms of a target attribute.
The terminal device determines the attribute value of the data packet in terms of the target attribute by using a pre-constructed neural network classification model. The neural network is trained on data packets with different target attributes as a training set and is thereby able to determine the target attribute value of a data packet. The target attribute is a delay-related attribute of the data packet; its value is used by the network device to determine the sending priority of the data packet in a first cache region and to store the data packet into the first cache region according to that priority, the data packets in the first cache region being sent out in order of sending priority. The attribute value thus determines the priority with which the packet is sent.
203. And adding the attribute value of the target attribute in a data packet, and sending the data packet to the network equipment.
After the terminal device obtains the attribute value of the target attribute, it appends the value to an appropriate position in the data packet, for example the DSCP field of the packet header. Once the attribute value has been added, the terminal device sends the completed data packet to the network device through which it is scheduled to be transmitted.
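For illustration, here is a short sketch of how two attribute levels might be packed into the 6-bit DSCP field of the ToS/DS byte; the 3-bit/3-bit split is an assumption of this sketch, not something the application specifies.

```python
def encode_levels_to_tos(type_level, size_level):
    """Pack a traffic type level and a traffic size level into the ToS/DS
    byte, using 3 DSCP bits for each level (an illustrative split)."""
    assert 0 <= type_level < 8 and 0 <= size_level < 8
    dscp = (type_level << 3) | size_level
    return dscp << 2        # DSCP occupies the upper 6 bits of the byte

def decode_levels_from_tos(tos_byte):
    dscp = tos_byte >> 2
    return dscp >> 3, dscp & 0b111   # (type_level, size_level)
```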
Referring to fig. 3, an embodiment of the present application illustrates another flow of a data packet transmission method, applied to a network device, which specifically includes steps 301 to 304.
301. A data packet is received.
The network device handles data communication between terminal devices and has the functions of receiving, sending, and caching data packets. It is provided with a preset first cache region for caching received data packets, and in this step it receives the data packet sent by the terminal device.
302. An attribute value of the data packet in terms of the target attribute is determined.
After receiving the data packet sent by the terminal device, the network device reads the data packet, determines the attribute value of the target attribute in the data packet, the attribute value is attached to the data packet in a readable form, and the target attribute is the attribute related to the time delay of the data packet.
303. And determining the transmission priority of the data packet in the first buffer area.
And the network equipment determines the sending priority of the data packet in the first cache region according to the attribute value of the target attribute of the data packet.
304. And storing the data packet to a first cache region.
And storing the data packet into an area corresponding to the sending priority in the first cache area according to the sending priority of the data packet, wherein the data packets in the first cache area are sent out in sequence according to the sending priority.
Referring to fig. 4, another flow of a data packet transmission method is shown in the embodiment of the present application. In this embodiment, the network device is regarded as a switch, and the terminal device is regarded as a server. Specifically, the present embodiment may include the following steps 401-413.
401. The server obtains a data packet to be sent to the switch.
During operation, the server generates data that needs to be exchanged with other servers, such as search request data and download request data. The server assembles this data into a number of data packets, to be sent to other terminals through the network device; each data packet carries destination address information, and together the data packets form a data stream, one data stream expressing a complete piece of information.
402. The server determines the attribute value of the data packet in terms of the target attribute.
Attribute values of the data stream in terms of the target attributes are determined using a pre-constructed neural network classification model. The determined attribute values may be carried within each packet or otherwise bound to the packets. The target attribute may specifically include a traffic type level, a traffic size level, a data flow destination, and the like.
It should be noted that target attributes such as traffic type and traffic size are delay-related attributes that reflect how sensitive a packet is to delay. For example, packets of different traffic types may have different delay requirements: a packet of the instant-messaging type demands low delay, while a packet of the mail-communication type tolerates higher delay. Similarly, the traffic size reflects, to some extent, the packet's delay requirement. The traffic type level may also be replaced by a Retransmission Timeout (RTO) level, which is determined in the same manner as the traffic type level.
The pre-constructed neural network classification models comprise a data flow type prediction network (the first neural network classification model) for predicting the traffic type level, and a data flow size prediction network (the second neural network classification model) for predicting the traffic size level. The models are constructed and used as follows:
1. Building the neural network classification models.
The data flow type prediction network and the data flow size prediction network are built as follows: a certain amount of data flow from the switches and/or the servers is collected as a data set, and this data set is used as the sample set to train a bidirectional recurrent neural network for data packet classification. The bidirectional recurrent neural network employs Gated Recurrent Unit (GRU) units.
The data stream type prediction network uses a bidirectional Recurrent Neural Network (RNN) model because, in this embodiment, a data stream can be regarded as a language in which servers talk to each other, with a sequential structure that an RNN describes well. Each byte in a data stream is treated as a word: a byte consists of 8 bits and can be translated into a number from 0 to 255, so the data stream can be regarded as a language with only 256 words. In this embodiment, the first three data packets of each data stream are converted into numbers from 0 to 255, and every 4 numbers are grouped into a sentence. The sentences are fed into the bidirectional recurrent neural network for training: the network first applies word embedding to each word, then processes the words to output a result for each sentence, then feeds the results of all sentences into the next layer, and finally passes the output of that layer through a fully connected layer with logistic regression (softmax) to output the predicted traffic type level of the data stream.
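A minimal PyTorch sketch of such a two-level bidirectional GRU classifier follows; every hyperparameter (embedding size, hidden size, number of type levels) is an arbitrary illustration, not a value from the application.

```python
import torch
import torch.nn as nn

class FlowTypeClassifier(nn.Module):
    """Two-level bidirectional GRU over byte 'sentences', as described above.
    All hyperparameters are illustrative assumptions."""

    def __init__(self, vocab_size=256, embed_dim=32, hidden_dim=64, num_classes=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)    # each byte value is a "word"
        self.word_gru = nn.GRU(embed_dim, hidden_dim, batch_first=True,
                               bidirectional=True)          # reads one 4-byte "sentence"
        self.sent_gru = nn.GRU(2 * hidden_dim, hidden_dim, batch_first=True,
                               bidirectional=True)          # reads the sentence results
        self.fc = nn.Linear(2 * hidden_dim, num_classes)    # softmax applied via the loss

    def forward(self, x):
        # x: LongTensor (batch, num_sentences, 4) holding byte values 0..255
        # taken from the first three packets of each data stream.
        b, s, w = x.shape
        emb = self.embed(x.view(b * s, w))                  # (b*s, 4, embed_dim)
        _, h = self.word_gru(emb)                           # h: (2, b*s, hidden_dim)
        sents = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
        _, h2 = self.sent_gru(sents)                        # h2: (2, b, hidden_dim)
        return self.fc(torch.cat([h2[0], h2[1]], dim=-1))   # traffic type level logits
```

In training, a cross-entropy loss over these logits performs the softmax of the final logistic-regression step.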
It should be noted that the second neural network classification model, that is, the data flow size prediction network, has traffic threshold parameters, which are determined as follows: determine the target traffic type level corresponding to the second neural network classification model; obtain the cumulative traffic distribution map corresponding to that level, where the abscissa is the size of a data flow transmitted in the data center to which the terminal device belongs and the ordinate is the cumulative traffic distribution percentage; calculate the target traffic distribution percentage corresponding to the target traffic type level; and, in the cumulative traffic distribution map, determine the flow size corresponding to the target traffic distribution percentage and take it as a traffic threshold parameter of the second neural network classification model.
For example, let θi denote the i-th threshold, where θi is a percentage corresponding to a value on the vertical axis of the cumulative distribution function; when θ is 0.5, for instance, the corresponding value on the horizontal axis is the median of all flow sizes in the current traffic type level. Assuming K+1 size levels are set within a traffic type level, K thresholds are required; with the current load denoted p, the K values of θ are solved from these quantities.
After each θ is found, each threshold value can be read off the flow size distribution.
2. Using the neural network classification models.
The bidirectional recurrent neural network for data flow classification can output the traffic type of a data packet. A correspondence between traffic types and traffic type levels is preset; after the network outputs the traffic type of the data packet, the traffic type level is determined from this correspondence. Alternatively, the network can be used to determine the traffic type level directly, i.e. the network itself outputs the traffic type level of the data packet.
A separate bidirectional recurrent neural network is trained to predict the sizes of data streams of each type, because the size difference between data streams of different types may be large; if a single network predicted the sizes of all data streams, the data streams of some type might all fall into one size level, and the differences between them would not be reflected.
The data stream size prediction network is trained in a similar way to the data stream type prediction network, which is not repeated here. The prediction of data stream size does not aim at a precise numerical value; rather, it predicts whether the size of the data stream is larger than certain thresholds, and the output is the interval in which the size of the data stream lies.
403. The server sends the data packet.
The server sends the data packet, carrying a traffic type level identifier and a traffic size level identifier, to the switch.
404. The switch determines an attribute value for the packet in terms of the target attribute.
The switch receives the identified data packet and extracts the specific values of the traffic type level and the traffic size level.
405. The switch determines a transmission priority of the packet in the first buffer area.
The switch takes the traffic type level as the first priority and the traffic size level as the second priority; the priorities determined here are the same as those set by the server. The first cache region of the switch is provided with first priority queues corresponding to the first priority, and each first priority queue comprises second priority queues corresponding to the second priority. The switch determines the first priority queue to which the data packet belongs according to its traffic type level, then determines, within that first priority queue, the second priority queue to which the data packet belongs according to its traffic size level, and takes the second priority of the data packet as its sending priority.
In this step, data packets are assigned to different transmission queues according to two factors, the traffic type level and the traffic size level. The traffic type level is taken as the first priority, and the traffic size level as the second priority below it, because different types of data packets have different transmission requirements, whereas the current switch mechanism treats all data flows equally. Compared with setting the same traffic type for all applications, this embodiment sets different traffic type levels for different types of data streams; these levels reflect how sensitive each data stream is to delay, and using the traffic type as the most important index for assigning priority further safeguards user experience.
The traffic size is likewise an important factor affecting Flow Completion Time (FCT). When a large flow is scheduled ahead of small flows and the large flow times out, the FCT of all the small flows behind it increases dramatically, and they may even time out themselves. Therefore, this embodiment takes the size of the data flow as the second consideration: a traffic type level may contain one or more priority queues, each called a traffic size level. When two data flows have the same traffic type level, their traffic size levels are determined by their sizes, the smaller flow receiving the higher priority. If a traffic type level contains only one priority queue, traffic size is simply not considered when assigning priority within it, since the flows may all be of roughly similar size, as with search traffic. Depending on the services running through the switch, the manager can flexibly adjust the traffic type levels and traffic size levels for optimal performance; for example, search flows may all be relatively small, so their FCT need not be optimized through multiple queues.
In this embodiment, the traffic type level is used as the first priority and the traffic size level as the second priority. In other embodiments, the traffic size level may be used as the first priority and the traffic type level as the second priority, or other attributes, such as the data flow destination or the single-packet size, may serve as the first or second priority, which is not limited herein. A sketch of the nested queue structure follows.
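The sketch below illustrates the nested queue structure of step 405; the queue counts (4 x 4) and method names are illustrative assumptions, not values from the application.

```python
from collections import deque

class HierarchicalBuffer:
    """First cache region of step 405: one first priority queue per traffic
    type level, each containing one FIFO per traffic size level."""

    def __init__(self, type_levels=4, size_levels=4):
        self.queues = [[deque() for _ in range(size_levels)]
                       for _ in range(type_levels)]

    def enqueue(self, packet, type_level, size_level):
        # Steps 405/408: append at the tail of the target second priority queue.
        self.queues[type_level][size_level].append(packet)

    def dequeue(self):
        # Serve the best traffic type level first, then the best size level.
        for per_type in self.queues:
            for fifo in per_type:
                if fifo:
                    return fifo.popleft()
        return None
```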
406. The switch determines the target sub-area to which the packet belongs.
The first cache region of the switch comprises two sub-regions: the first sub-region stores non-cacheable data packets, i.e. data packets whose sending priority reaches a preset priority threshold, and the second sub-region stores cacheable data packets, i.e. data packets whose sending priority does not reach the threshold. The threshold is set in the switch and separates the storage space of cacheable packets from that of non-cacheable packets. Referring to fig. 5, the un-cached area is the first sub-region, the cached area is the second sub-region, and each large traffic type level (called a type level in fig. 5) contains several sequentially arranged traffic size levels.
It should be noted that this step is executed only when the first cache region is divided into two sub-regions by a sending priority threshold. It will be understood that when the first cache region has no such sub-division, this step and the corresponding step 407 are not executed, and step 408 is executed directly. The aim of this step is to make the solution more practical.
407. The switch adds an explicit congestion feedback flag to the packet.
Before the data packet is stored in the cache region, it passes through the explicit congestion marking module. The module reads the state of the cache sub-region where the data packet is to be stored, according to the storage priority of the packet, and adds an Explicit Congestion Notification (ECN) mark to the packet according to the state of that sub-region; whether ECN is added in one sub-region does not affect the other. For non-cacheable packets, the ECN marking threshold is set to a preset proportion of the bandwidth multiplied by the Round-Trip Time (RTT); for cacheable packets, the ECN marking threshold is set to the size of the cache region. The preset proportional value can be any value set according to the actual situation.
The purpose of this step is to set different ECN marking thresholds according to the attributes of the data packets in the different sub-regions, so that packets are handled in a more refined way and the low sending delay of data packets is further guaranteed.
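A sketch of the marking decision of step 407 follows; the ratio 0.5 stands in for the unspecified "preset proportional value", and byte units for the queue occupancy are an assumption of this sketch.

```python
def should_mark_ecn(sub_region, queue_bytes, bandwidth_bps, rtt_s,
                    preset_ratio=0.5, buffer_bytes=0):
    """ECN decision of step 407 for one cache sub-region."""
    if sub_region == "non_cacheable":
        # Threshold: preset ratio of bandwidth * RTT, converted to bytes.
        threshold = preset_ratio * (bandwidth_bps / 8.0) * rtt_s
    else:
        # Cacheable sub-region: threshold equals the size of the cache region.
        threshold = buffer_bytes
    return queue_bytes > threshold
```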
408. The switch stores the data packet to a first cache region.
The switch stores each data packet at the tail of the sending priority queue corresponding to its priority in the first cache region.
This step makes the storage of identified data packets more organized; meanwhile, packets within the same queue are sent out in order, which avoids a packet lingering in a queue for a long time.
409. The switch acquires the storage state of the first cache region.
The switch is provided with a first cache region for caching, a second cache region for sharing caching tasks and a cache access control module (CIODM).
The cache access control module periodically detects the storage state of the first cache region and issues cache-in and cache-out instructions according to that state. The CIODM uses the watermark mechanism of memory management in the Linux kernel, which has three thresholds: a minimum reserved value, an early warning value, and a loose value. The switch sets these values according to the bandwidth and the type of application it serves. When the remaining space in the first cache region is smaller than the minimum reserved value, the cache access control module issues a cache-in instruction and starts moving data packets from the first cache region to the second cache region, stopping when the remaining space in the first cache region is larger than the early warning value. When the remaining space is larger than the loose value, the cache access control module issues a cache-out instruction and sends data packets from the second cache region back to the first cache region, until the remaining space in the first cache region is smaller than the early warning value or no data packet remains in the second cache region.
The purpose of this step is to increase the effective storage capacity of the switch and hence its throughput. The second cache region, together with the multi-priority queues and the watermark mechanism, makes delay-insensitive traffic give way to delay-sensitive traffic, which improves the throughput of the network and eases the tension between low delay and high throughput in the data center network.
The second cache region in the switch may be implemented with Dynamic Random Access Memory (DRAM) or a Solid State Drive (SSD). In practice, some switches are already equipped with such a second cache region. The relationship between the first cache region and the second cache region is analogous to that between memory and SWAP space in a Linux system.
Besides the second cache region, another existing switch function, priority-based packet dropping, can also be used: when the first cache region runs short of space and there is no time to transfer packets to the second cache region, the switch preferentially discards the data packets with the largest traffic type level and the largest traffic size level.
It should be noted that step 409 is executed only when the switch has not only the first cache region for caching but also a second cache region sharing the caching load. It will be understood that if there is no second cache region, this step is not executed, nor is the corresponding step 410; step 411 is executed directly, or the procedure ends. Step 409 may be executed at any time point, as long as the timing meets the requirement of periodically detecting the storage state of the first cache region. The corresponding step 410 is executed only after step 409 has been executed and the corresponding instruction issued; if step 409 is not executed, or is executed without issuing an instruction, step 410 is not executed, and step 411 is executed directly or the procedure ends.
410. And the switch adjusts the data packets of the first cache region and the second cache region according to the state of the first cache region.
If the remaining space of the first cache region falls below the set minimum reserved value, the CIODM issues a cache-in instruction, and the data packets in the first cache region are transferred to the second cache region in order of sending priority from low to high until the remaining space of the first cache region rises above the early warning value.
If the remaining space of the first cache region rises above the set loose value, the CIODM issues a cache-out instruction, and the data packets in the second cache region are sent back to the first cache region in the order in which they were transferred, until the remaining space of the first cache region falls below the early warning value or no data packet remains in the second cache region.
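The cache-in/cache-out logic of steps 409 and 410 might be sketched as follows; the queue objects and their methods are assumed names for this sketch, not a real switch API, and the priority-threshold stopping condition of the seventh implementation manner is omitted for brevity.

```python
def ciodm_tick(first, second, min_reserved, early_warning, loose):
    """One periodic CIODM check against the three Linux-style watermarks."""
    if first.free_space() < min_reserved:
        # Cache-in: move lowest-sending-priority packets to the second cache
        # region until the remaining space clears the early warning value.
        while first.free_space() <= early_warning and len(first) > 0:
            second.append(first.pop_lowest_priority())
    elif first.free_space() > loose:
        # Cache-out: play packets back in the order they were transferred
        # (earliest first), stopping at the early warning value or when empty.
        while len(second) > 0 and first.free_space() > early_warning:
            first.enqueue(second.pop(0))  # second behaves as a FIFO list here
```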
411. The server sends a promotion packet to the switch.
A data packet may stay in the cache for a long time without being sent; in that case, the server sends a promotion packet to the switch. The promotion packet has a payload of only one byte and identifies the data packet being urged. The server may start a timer when it sends a data packet, and it sends a promotion packet for that data packet when the timer reaches the preset duration without any acknowledgement of the packet being received.
It can be understood that the preset duration here is set based on the traffic type of the data packet; that is, different preset durations may be set for data packets of different traffic types. One concrete implementation is to preset different timeout retransmission durations for data packets of different traffic types and to take the preset duration as a preset proportion of the packet's timeout retransmission duration, for example 3/4; of course, the proportion can be any other value set according to the actual situation, which is not limited in this application.
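A sketch of the promotion-packet timer of step 411 follows, assuming a channel object that reports acknowledgements and a hypothetical make_promotion_packet helper; neither is an API defined by the application.

```python
import threading

def send_with_promotion(channel, packet, rto_seconds, ratio=0.75):
    """Send a packet and, if no acknowledgement arrives within
    ratio * RTO (3/4 being the example proportion above), send a
    one-byte promotion packet identifying it."""
    channel.send(packet)

    def urge_if_unacked():
        if not channel.is_acked(packet.seq):             # no confirmation yet
            channel.send(make_promotion_packet(packet))  # hypothetical helper

    # Start timing from the moment the packet is sent.
    threading.Timer(ratio * rto_seconds, urge_if_unacked).start()
```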
The purpose of this step is to ensure, as far as possible, that the data packet is sent with relatively low delay: the promotion packet supervises the data packet so that a timeout is avoided as far as possible.
It should be noted that this step is executed when no response has been received for a long time after the data packet was sent; this step and steps 412 and 413 may be executed at any time after the data packet is sent, which is not limited herein.
412. The switch determines a target data packet urged by the urging packet.
After receiving the promotion packet sent by the terminal device, the switch determines the target data packet urged by it.
If the target data packet exists in the second cache region, it is played back to the first cache region and its sending priority in the first cache region is set to the highest sending priority.
Specifically, after receiving the promotion packet, the switch reads the information in it and searches for the target data packet it urges: if the target data packet exists, step 413 is executed; if it does not, the promotion packet is forwarded onward and is discarded at the last-hop switch before the receiving end.
It should be noted that this step is executed on condition that step 411 has been executed; it may be executed at any time point after the promotion packet is sent, which is not limited herein.
413. The switch plays back the target data packet to the first buffer area, and sets the sending priority of the target data packet in the first buffer area as the highest sending priority.
When the switch determines that a target data packet urged by a promotion packet exists in the second cache region, it moves the packet to the highest priority queue in the first cache region and raises the packet's traffic type level and data flow size level to the highest values, so that the packet is sent out as fast as possible.
It should be noted that this step is executed on condition that step 412 has been executed; it may be executed at any time point after the promotion packet is sent, or not at all, which is not limited herein.
The data packet transmission method in the embodiment of the present application is explained above, and the structures of the network device and the terminal device provided in the embodiment of the present application are described below. It should be noted that the following apparatus embodiments and the above method embodiments may be referred to with each other.
Fig. 6 shows a structure of a network device provided in an embodiment of the present application, which specifically includes: a receiving unit 601, a first determining unit 602, a second determining unit 603 and a storing unit 604.
The receiving unit 601 is configured to receive a data packet.
A first determining unit 602, configured to determine an attribute value of the data packet in terms of a target attribute, where the target attribute is an attribute related to latency.
A second determining unit 603, configured to determine a sending priority of the data packet in the first buffer area according to the attribute value of the data packet.
The storage unit 604 is configured to store the data packet into the first buffer area according to the sending priority of the data packet.
In this embodiment, the flow executed by each unit and module in the network device is similar to the method flow described in the embodiment shown in fig. 3, and is not described again here.
In one implementation, the data packet carries an attribute value of a target attribute; the first determining unit 602 is specifically configured to: and extracting the attribute value of the target attribute carried by the data packet from the data packet.
In one implementation, the target attribute includes a first target attribute and a second target attribute, and a priority level of the first target attribute is higher than a priority level of the second target attribute; correspondingly, the first cache region comprises a first priority queue corresponding to a preset attribute value of a first target attribute, and the first priority queue comprises a second priority queue corresponding to a preset attribute value of a second target attribute.
In an implementation manner, the second determining unit 603 is specifically configured to: determine, according to a first attribute value of the data packet in terms of a first target attribute, a target first priority queue corresponding to the first attribute value among the first priority queues of the first cache region; determine, according to a second attribute value of the data packet in terms of a second target attribute, a target second priority queue corresponding to the second attribute value among the second priority queues included in the target first priority queue; and determine the priority of the target second priority queue as the sending priority of the data packet in the first cache region.
In one implementation, the target attribute includes a traffic type level and/or a traffic size level.
In one implementation, the network device further includes a second cache region; the network device further includes: and a data packet buffer unit. A data packet cache unit, configured to periodically obtain a storage state of the first cache region; if the storage state of the first cache region meets a preset full condition, transferring a part of data packets in the first cache region to the second cache region; and if the storage state of the first cache region meets a preset idle condition, playing back a part of data packets in the second cache region to the first cache region.
Specifically, when the packet cache unit transfers a part of the packets in the first cache region to the second cache region, the packet cache unit is specifically configured to: and transferring the data packets in the first cache region to the second cache region according to the sequence of the sending priorities from low to high until the storage state of the first cache region meets a preset loose condition or no data packet with the sending priority lower than a preset priority threshold exists in the first cache region.
Specifically, when playing back a part of the data packets in the second cache region to the first cache region, the data packet cache unit is specifically configured to: play back the data packets in the second cache region to the first cache region in order of transfer time from earliest to latest, until the storage state of the first cache region meets the preset full condition or no data packet remains in the second cache region.
In one implementation, the network device further includes: and a data packet playback unit.
The data packet playback unit is used for determining a target data packet promoted by a promotion packet after receiving the promotion packet sent by the terminal equipment; if the target data packet exists in the second cache region, the target data packet is played back to the first cache region, and the sending priority of the target data packet in the first cache region is set as the highest sending priority.
In one implementation, the first cache region includes two sub-regions; the first sub-area is used for storing the data packets with the sending priority reaching the preset priority threshold, and the second sub-area is used for storing the data packets with the sending priority not reaching the preset priority threshold.
Fig. 7 shows a structure of a terminal device provided in an embodiment of the present application, which specifically includes: an obtaining unit 701, a determining unit 702, an adding unit 703, and a transmitting unit 704.
An obtaining unit 701 is configured to obtain a data packet to be sent to a network device.
A determining unit 702, configured to determine an attribute value of the data packet in terms of a target attribute by using a pre-constructed neural network classification model; the attribute value is used for the network device to determine the sending priority of the data packet in the first cache region according to the attribute value of the data packet and store the data packet into the first cache region according to the sending priority of the data packet, wherein the data packet in the first cache region is sent out in sequence according to the sending priority of the data packet.
An adding unit 703 is configured to add an attribute value of the target attribute to the data packet.
A sending unit 704, configured to send the data packet to a network device.
In this embodiment, the flow executed by each unit and module in the terminal device is similar to the method flow described in the embodiment shown in fig. 2, and is not described again here.
In one implementation, the target attributes include: a traffic type level and/or a traffic size level; accordingly, the neural network classification model comprises: a first neural classification network model for determining a traffic type class and/or a second neural classification network model for determining a traffic size class.
In one implementation, the neural network classification model is trained by using a preset number of data packets in a data stream transmitted by the terminal device.
In one implementation, the second neural network classification model has traffic threshold parameters determined as follows: determining the target traffic type level corresponding to the second neural network classification model; obtaining the cumulative traffic distribution map corresponding to the target traffic type level, wherein the abscissa is the size of a data flow transmitted in the data center to which the terminal device belongs and the ordinate is the cumulative traffic distribution percentage; calculating the target traffic distribution percentage corresponding to the target traffic type level; and, in the cumulative traffic distribution map, determining the flow size corresponding to the target traffic distribution percentage and taking that flow size as a traffic threshold parameter of the second neural network classification model.
In one implementation, the terminal device may further include a promotion unit. The promotion unit is configured to start timing from the time point at which the data packet is sent; and, if the timed duration reaches a preset duration and no confirmation of the data packet has been received, send a promotion packet to the network device, where the preset duration is set based on the traffic type of the data packet.
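A sketch of the promotion unit's timing behavior follows; the per-type timeout values are assumptions, since the embodiment only states that the preset duration depends on the packet's traffic type.

    import time

    # Assumed per-traffic-type preset durations, in seconds.
    TIMEOUT_BY_TYPE = {"realtime": 0.02, "interactive": 0.1, "bulk": 1.0}

    class PromotionUnit:
        def __init__(self, send_promotion_packet):
            self.pending = {}  # packet id -> deadline (monotonic time)
            self.send_promotion_packet = send_promotion_packet

        def on_packet_sent(self, packet_id, traffic_type):
            """Start timing from the packet's sending time point."""
            self.pending[packet_id] = time.monotonic() + TIMEOUT_BY_TYPE[traffic_type]

        def on_confirmation(self, packet_id):
            self.pending.pop(packet_id, None)  # confirmation received in time

        def poll(self):
            """If a packet's preset duration elapsed without confirmation,
            send a promotion packet to the network device."""
            now = time.monotonic()
            for packet_id, deadline in list(self.pending.items()):
                if now >= deadline:
                    self.send_promotion_packet(packet_id)
                    del self.pending[packet_id]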
In this embodiment, the process executed by each unit in the terminal device is similar to the process executed by the server in the embodiment shown in fig. 4, and is not described again here.
Fig. 8 is a schematic structural diagram of a network device provided in this embodiment. The network device 800 may include one or more central processing units (CPUs) 801 and a memory 805, and the memory 805 stores one or more application programs or data.
In this embodiment, the functional modules within the central processing unit 801 may be divided in a manner similar to that of the receiving unit, the first determining unit, the second determining unit, the third determining unit, the fourth determining unit, the adding unit, the storing unit, the obtaining unit, the transferring unit, the playback unit, the setting unit, and the like described for fig. 6, and the details are not repeated here.
The memory 805 may be volatile storage or persistent storage. The program stored in the memory 805 may include one or more modules, and each module may include a series of instruction operations for the network device. Further, the central processing unit 801 may be configured to communicate with the memory 805 and execute the series of instruction operations in the memory 805 on the network device 800.
The network device 800 may also include one or more power supplies 802, one or more wired or wireless network interfaces 803, one or more input/output interfaces 804, and/or one or more operating systems, such as Windows Server, Mac OS X, Unix, Linux, and FreeBSD.
The central processing unit 801 may perform the operations performed by the switch in the embodiment shown in fig. 4, which are not described herein again.
Fig. 9 is a schematic structural diagram of a terminal device provided in this embodiment. The terminal device 900 may include one or more central processing units (CPUs) 901 and a memory 905, and the memory 905 stores one or more application programs or data.
In this embodiment, the functional modules within the central processing unit 901 may be divided in a manner similar to that of the obtaining unit, the determining unit, the adding unit, the sending unit, the timing unit, and the like described for fig. 7, and the details are not repeated here.
The memory 905 may be volatile storage or persistent storage. The program stored in the memory 905 may include one or more modules, and each module may include a series of instruction operations for the terminal device. Further, the central processing unit 901 may be configured to communicate with the memory 905 and execute the series of instruction operations in the memory 905 on the terminal device 900.
The terminal device 900 may also include one or more power supplies 902, one or more wired or wireless network interfaces 903, one or more input/output interfaces 904, and/or one or more operating systems, such as Windows Server, Mac OS X, Unix, Linux, and FreeBSD.
The central processing unit 901 may perform the operations performed by the server in the embodiment shown in fig. 4, and the details are not repeated here.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium is used to store computer software instructions for the network device or the terminal device described above, including a program designed to execute the operations of the network device or the terminal device.
The network device may be the network device described in the foregoing fig. 3 and fig. 4.
The terminal device may be the terminal device described in the foregoing fig. 2 and fig. 4.
An embodiment of the present application further provides a computer program product, where the computer program product includes computer software instructions, and the computer software instructions may be loaded by a processor to implement the flow of the data packet transmission method in any one of fig. 2 to fig. 4.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
It is further noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between those entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises that element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

1. A data packet transmission method, applied to a network device, wherein the network device is provided with a preset first cache region, and the first cache region is used for caching data packets; the method comprises the following steps:
after receiving a data packet, determining an attribute value of the data packet in the aspect of a target attribute, wherein the target attribute is an attribute related to time delay;
determining the sending priority of the data packet in the first cache region according to the attribute value of the data packet;
storing the data packet into the first cache region according to the sending priority of the data packet, wherein the data packets in the first cache region are sequentially sent out according to their sending priorities.
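By way of illustration only, the following minimal Python sketch mirrors this claimed flow; the direct mapping from the delay-related attribute value to the sending priority and the packet field name are assumptions, not part of the claim.

    import heapq
    from itertools import count

    class FirstCacheRegion:
        """Packets are stored by sending priority and sent out in that
        order; a counter preserves arrival order among equal priorities."""
        def __init__(self):
            self._heap, self._arrival = [], count()

        def on_receive(self, packet):
            # Assumed: the delay-related attribute value maps directly
            # to the sending priority (lower value = sent earlier).
            priority = packet["target_attribute_value"]
            heapq.heappush(self._heap, (priority, next(self._arrival), packet))

        def send_next(self):
            return heapq.heappop(self._heap)[2] if self._heap else None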
2. The method according to claim 1, wherein the data packet carries an attribute value of a target attribute; the determining an attribute value of the data packet in terms of a target attribute comprises:
and extracting the attribute value of the target attribute carried by the data packet from the data packet.
3. The method according to claim 1, wherein the target attribute comprises a first target attribute and a second target attribute, and the priority level of the first target attribute is higher than that of the second target attribute; correspondingly, the first cache region comprises first priority queues corresponding to preset attribute values of the first target attribute, and each first priority queue comprises second priority queues corresponding to preset attribute values of the second target attribute.
4. The method according to claim 3, wherein the determining the sending priority of the data packet in the first cache region according to the attribute value of the data packet comprises:
determining, according to a first attribute value of the data packet in terms of the first target attribute, a target first priority queue corresponding to the first attribute value among the first priority queues of the first cache region;
determining, according to a second attribute value of the data packet in terms of the second target attribute, a target second priority queue corresponding to the second attribute value among the second priority queues included in the target first priority queue; and
determining the priority of the target second priority queue as the sending priority of the data packet in the first cache region.
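Illustratively, the two-level queue lookup of claims 3 and 4 could be arranged as below; the preset attribute values and field names are invented for the sketch.

    from collections import deque

    FIRST_LEVELS = (0, 1)      # assumed preset values of the first target attribute
    SECOND_LEVELS = (0, 1, 2)  # assumed preset values of the second target attribute

    # First cache region: one first priority queue per first-attribute value,
    # each containing one second priority queue per second-attribute value.
    first_cache = {a1: {a2: deque() for a2 in SECOND_LEVELS} for a1 in FIRST_LEVELS}

    def store(packet):
        # Locate the target first priority queue, then the target second
        # priority queue, whose priority becomes the sending priority.
        first_cache[packet["attr1"]][packet["attr2"]].append(packet)

    def send_next():
        # The first target attribute dominates: drain in (attr1, attr2) order.
        for a1 in FIRST_LEVELS:
            for a2 in SECOND_LEVELS:
                if first_cache[a1][a2]:
                    return first_cache[a1][a2].popleft()
        return None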
5. The method of claim 1, wherein the target attribute comprises: a traffic type level and/or a traffic size level.
6. The method according to claim 1, wherein the network device is further provided with a second cache region; the method further comprises the following steps:
periodically acquiring the storage state of the first cache region;
if the storage state of the first cache region meets a preset full condition, transferring a part of data packets in the first cache region to the second cache region;
and if the storage state of the first cache region meets a preset idle condition, playing back a part of data packets in the second cache region to the first cache region.
7. The method according to claim 6, wherein the transferring a part of the data packets in the first cache region to the second cache region comprises:
transferring the data packets in the first cache region to the second cache region in order of sending priority, lowest first, until the storage state of the first cache region meets a preset loose condition or no data packet whose sending priority is lower than a preset priority threshold remains in the first cache region.
8. The method according to claim 6, wherein the playing back a part of the data packets in the second cache region to the first cache region comprises:
playing back the data packets in the second cache region to the first cache region in order of transfer time, earliest first, until the storage state of the first cache region meets a preset full condition or no data packet remains in the second cache region.
9. The method according to claim 6, further comprising:
after receiving a promotion packet sent by a terminal device, determining a target data packet promoted by the promotion packet;
if the target data packet exists in the second cache region, the target data packet is played back to the first cache region, and the sending priority of the target data packet in the first cache region is set as the highest sending priority.
10. The method according to claim 1, wherein the first cache region comprises two sub-regions; the first sub-region is used for storing data packets whose sending priority reaches a preset priority threshold, and the second sub-region is used for storing data packets whose sending priority does not reach the preset priority threshold.
11. A data packet transmission method, applied to a terminal device, the method comprising:
obtaining a data packet to be sent to network equipment;
determining an attribute value of the data packet in the aspect of target attributes by using a pre-constructed neural network classification model; the attribute value is used for the network device to determine the sending priority of the data packet in the first cache region according to the attribute value of the data packet and store the data packet into the first cache region according to the sending priority of the data packet, wherein the data packet in the first cache region is sent out in sequence according to the sending priority of the data packet;
and after the attribute value of the target attribute is added into the data packet, sending the data packet to the network equipment.
12. The method of claim 11, wherein the target attribute comprises a traffic type level and/or a traffic size level; accordingly, the neural network classification model comprises a first neural network classification model for determining the traffic type level and/or a second neural network classification model for determining the traffic size level.
13. The method according to claim 11, wherein the neural network classification model is trained using a preset number of data packets in a data stream transmitted by the terminal device.
14. The method according to claim 11, further comprising, after sending the data packet to the network device:
starting timing from the transmission time point of the data packet;
if the timing duration reaches the preset duration and the confirmation information of the data packet is not received, sending a promotion packet to the network equipment; wherein the preset duration is set based on a traffic type of the data packet.
15. A network device, wherein the network device is provided with a preset first cache region, and the first cache region is used for caching data packets; the network device comprises:
a receiving unit for receiving a data packet;
a first determining unit, configured to determine an attribute value of the data packet in terms of a target attribute, where the target attribute is an attribute related to latency;
a second determining unit, configured to determine, according to the attribute value of the data packet, a sending priority of the data packet in the first cache region;
and the storage unit is used for storing the data packet to the first cache region according to the sending priority of the data packet.
16. A terminal device, comprising:
an obtaining unit, configured to obtain a data packet to be sent to a network device;
the determining unit is used for determining the attribute value of the data packet in the aspect of the target attribute by using a pre-constructed neural network classification model; the attribute value is used for the network device to determine the sending priority of the data packet in the first cache region according to the attribute value of the data packet and store the data packet into the first cache region according to the sending priority of the data packet, wherein the data packet in the first cache region is sent out in sequence according to the sending priority of the data packet;
an adding unit, configured to add an attribute value of the target attribute to the data packet;
a sending unit, configured to send the data packet to the network device.
17. A network device, comprising:
the system comprises a central processing unit, a memory, an input/output interface, a wired or wireless network interface and a power supply;
the memory is a transient memory or a persistent memory;
the central processor is configured to communicate with the memory, the instructions in the memory being executable on the network device to perform the method of any of claims 1 to 10.
18. A terminal device, comprising:
the system comprises a central processing unit, a memory, an input/output interface, a wired or wireless network interface and a power supply;
the memory is a transient memory or a persistent memory;
the central processor is configured to communicate with the memory, the instructions in the memory being executable on the terminal device to perform the method of any one of claims 11 to 14.
19. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 14.
20. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 14.
CN201911107825.2A 2019-11-13 2019-11-13 Data packet transmission method and related device Pending CN110708260A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911107825.2A CN110708260A (en) 2019-11-13 2019-11-13 Data packet transmission method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911107825.2A CN110708260A (en) 2019-11-13 2019-11-13 Data packet transmission method and related device

Publications (1)

Publication Number Publication Date
CN110708260A true CN110708260A (en) 2020-01-17

Family

ID=69205345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911107825.2A Pending CN110708260A (en) 2019-11-13 2019-11-13 Data packet transmission method and related device

Country Status (1)

Country Link
CN (1) CN110708260A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103534997A (en) * 2011-04-29 2014-01-22 华为技术有限公司 Port and priority based flow control mechanism for lossless ethernet
WO2016090539A1 (en) * 2014-12-08 2016-06-16 华为技术有限公司 Data transmission method and device
CN110352584A (en) * 2016-12-28 2019-10-18 谷歌有限责任公司 Across the automatic priority ranking of the equipment flow of local network
CN108881028A (en) * 2018-06-06 2018-11-23 北京邮电大学 The SDN network resource regulating method of application perception is realized based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
S. Bashir: "Handling Elephant Flows in a Multi-tenant Data Center Network", Ph.D. Dissertation *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021169304A1 (en) * 2020-02-29 2021-09-02 华为技术有限公司 Network device, data processing method, apparatus and system, and readable storage medium
CN111770027A (en) * 2020-05-28 2020-10-13 南方科技大学 Differentiated transmission method based on in-network cache
WO2021238764A1 (en) * 2020-05-28 2021-12-02 南方科技大学 Intra-network cache-based differentiated transmission method
CN111770027B (en) * 2020-05-28 2022-03-08 南方科技大学 Differentiated transmission method based on in-network cache
CN111835658A (en) * 2020-06-23 2020-10-27 武汉菲奥达物联科技有限公司 Data priority response method and device based on LPWAN
CN111835658B (en) * 2020-06-23 2022-06-10 武汉菲奥达物联科技有限公司 Data priority response method and device based on LPWAN
CN112702433A (en) * 2020-12-23 2021-04-23 南方电网电力科技股份有限公司 Data scheduling method and device for intelligent electric meter, intelligent electric meter and storage medium
CN112702433B (en) * 2020-12-23 2022-08-02 南方电网电力科技股份有限公司 Data scheduling method and device for intelligent electric meter, intelligent electric meter and storage medium
CN113347112A (en) * 2021-06-08 2021-09-03 北京邮电大学 Data packet forwarding method and device based on multi-level cache
CN113347112B (en) * 2021-06-08 2022-06-07 北京邮电大学 Data packet forwarding method and device based on multi-level cache
CN113783798A (en) * 2021-09-24 2021-12-10 上海明胜品智人工智能科技有限公司 Data transmission method and system and edge service equipment
CN114884902A (en) * 2022-05-09 2022-08-09 中国联合网络通信集团有限公司 Data stream transmission method, device, network equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110708260A (en) Data packet transmission method and related device
US20180351869A1 (en) Learning Or Emulation Approach to Traffic Engineering in Information-Centric Networks
CN102170396B (en) QoS control method of cloud storage system based on differentiated service
US8806142B2 (en) Anticipatory response pre-caching
US6490615B1 (en) Scalable cache
CN109104373B (en) Method, device and system for processing network congestion
CN112737823A (en) Resource slice allocation method and device and computer equipment
WO2021052162A1 (en) Network parameter configuration method and apparatus, computer device, and storage medium
CN104158755A (en) Method, device and system used for transmitting messages
Banaie et al. Load-balancing algorithm for multiple gateways in Fog-based Internet of Things
CN110177055B (en) Pre-allocation method of edge domain resources in edge computing scene
CN112822050A (en) Method and apparatus for deploying network slices
CN109151070B (en) Block chain-based service scheduling method and electronic device for point-to-point CDN (content delivery network)
Li et al. HQTimer: a hybrid ${Q} $-Learning-Based timeout mechanism in software-defined networks
KR101055548B1 (en) Semantic Computing-based Dynamic Job Scheduling System for Distributed Processing
Banaie et al. Performance analysis of multithreaded IoT gateway
CN113573320B (en) SFC deployment method based on improved actor-critter algorithm in edge network
Liebeherr et al. Rate allocation and buffer management for differentiated services
CN116896511B (en) Special line cloud service speed limiting method, device, equipment and storage medium
Li et al. Efficient cooperative cache management for latency-aware data intelligent processing in edge environment
CN110191362B (en) Data transmission method and device, storage medium and electronic equipment
CN105227665A (en) A kind of caching replacement method for cache node
CN115086307B (en) Network target range data transmission method and system
CN110677352A (en) Single-switch single-controller transmission control method in software defined network
US11606418B2 (en) Apparatus and method for establishing connection and CLAT aware affinity (CAA)-based scheduling in multi-core processor

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200117