CN107786371A - Data acceleration method, device and storage medium - Google Patents


Info

Publication number
CN107786371A
CN107786371A (application CN201710931628.7A)
Authority
CN
China
Prior art keywords
network node
target network
bandwidth
acceleration
tcp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710931628.7A
Other languages
Chinese (zh)
Other versions
CN107786371B
Inventor
袁松翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
MIGU Video Technology Co Ltd
Original Assignee
MIGU Video Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MIGU Video Technology Co Ltd filed Critical MIGU Video Technology Co Ltd
Priority to CN201710931628.7A
Publication of CN107786371A
Application granted
Publication of CN107786371B
Legal status: Active (current)
Anticipated expiration


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08: Configuration management of networks or network elements
    • H04L 41/0803: Configuration setting
    • H04L 41/0823: Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L 41/083: Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability, for increasing network speed
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00: Arrangements for monitoring or testing data switching networks
    • H04L 43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L 43/0876: Network utilisation, e.g. volume of load or congestion level

Abstract

The invention discloses a data acceleration method. The method includes: determining the bandwidth full load rate of a target network node; and, when the bandwidth full load rate reaches a preset load threshold of the target network node, sending a notification message for adjusting the acceleration strategy to the target network node, the notification message at least comprising an acceleration mode. The invention further discloses a data acceleration apparatus and a storage medium.

Description

Data acceleration method and device and storage medium
Technical Field
The invention relates to data processing technology, and in particular to a data acceleration method, a data acceleration apparatus, and a storage medium.
Background
Nowadays, the Internet mainly uses the Transmission Control Protocol (TCP) and/or the Internet Protocol (IP) for network transmission. Since these two protocols were designed long ago (1983), their flow control and congestion control algorithms no longer match the current network environment, which results in wasted network resources.
TCP acceleration technologies fall into two categories: loss-based TCP acceleration and delay-based TCP acceleration.
Loss-based TCP acceleration follows mainstream TCP in using packet loss to detect congestion and adjust the transmission rate. This approach often causes large numbers of packets to be dropped, which can aggravate node congestion and block transmission.
Delay-based TCP acceleration uses delay variation to estimate the degree of congestion and adjust the transmission speed accordingly. It cannot cope with network paths whose delay varies widely, and therefore reduces the transmission rate unnecessarily.
Therefore, how to improve the transmission efficiency of TCP and increase the utilization rate of network resources is a problem that needs to be solved by the internet.
Disclosure of Invention
In order to solve the existing technical problem, embodiments of the present invention provide a data acceleration method, apparatus, and storage medium, which address the prior-art problem that TCP transmission efficiency cannot be improved when the bandwidth is fully loaded.
The technical scheme of the embodiment of the invention is realized as follows:
according to an aspect of the embodiments of the present invention, there is provided a method for accelerating data, the method including:
determining the bandwidth full load rate of a target network node;
and when the bandwidth full load rate reaches a preset load threshold value of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
In the above scheme, the bandwidth full load rate of the target network node is determined by the following formula:

Rv = (a × Tn) / (V × α)

where Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, a represents the value corresponding to the availability of the target network node, V represents the bandwidth capacity peak of the target network node, and α represents the bandwidth availability factor of the target network node.
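Given the variable definitions above (Tn, V, α, and the Boolean availability a), the full-load computation can be sketched in Python. The function name and the exact grouping Rv = a × Tn / (V × α) are assumptions inferred from those definitions, not text taken verbatim from the patent:

```python
def bandwidth_full_load_rate(tn, v, alpha, a):
    """Sketch of the bandwidth full load rate Rv (grouping of terms inferred).

    tn    -- current bandwidth Tn of the target network node
    v     -- bandwidth capacity peak V of the node
    alpha -- bandwidth availability factor, 0 < alpha < 1
    a     -- availability: 1 when the node is normal, 0 when abnormal
    """
    # a = 0 forces Rv = 0, matching the text's "Rv = 0 indicates node failure"
    return a * tn / (v * alpha)

# Example: 400 Mbps of current traffic on a 1000 Mbps node with alpha = 0.8
print(bandwidth_full_load_rate(400, 1000, 0.8, 1))  # 0.5
```

With this grouping, a failed node (a = 0) yields Rv = 0 regardless of its reported traffic, which is consistent with how Rv = 0 is interpreted later in the description.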
According to another aspect of the embodiments of the present invention, there is provided a method for accelerating data, the method including:
receiving a notification message for adjusting an acceleration strategy, which is sent by a dispatching center, wherein the notification message at least comprises an acceleration mode;
monitoring a first network parameter of a target network node according to the notification message, wherein the first network parameter at least comprises real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node;
and according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node so as to adjust the acceleration strategy.
In the foregoing scheme, the real-time bandwidth data includes at least one of: network card actual throughput flow, network card actual throughput data packet quantity and request number scheduled by load balancing equipment SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the data packet receiving queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP (transmission control protocol) packet and the transmission rate of the current connection of the target network node;
correspondingly, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node includes:
calculating a first acceleration factor for parameter configuration of the target network node, according to the actual network card throughput traffic, the actual network card throughput packet count, the number of requests scheduled by the load balancing device (SLB), the number of TCP connections, the number of TCP half-open connections, and the length of the packet receiving queue, by the following formula:
first acceleration factor = [pat-value, bw-value, ct-value]
where pat-value is the value corresponding to the acceleration mode; bw-value = ((1 - network card traffic utilization) × (1 - network card packet throughput rate) × (1 - SLB request count / designed request capacity)) / (network card traffic utilization × network card packet throughput rate × (SLB request count / designed request capacity)); and ct-value = (1 - (TCP connection count + 0.5 × TCP half-open connection count) / connection queue length) / ((TCP connection count + 0.5 × TCP half-open connection count) / connection queue length);
calculating a second acceleration factor for parameter configuration of the target network node according to the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP packet and the transmission rate by using the following matrix formula;
second acceleration factor = [S-win, TCP-ret, Bitrate]^T
where S-win represents the sliding window size, TCP-ret represents the number of retransmitted TCP packets divided by the number of successfully transmitted TCP packets, Bitrate represents the transmission rate, and [·]^T denotes the transpose;
and according to the first acceleration factor and the second acceleration factor, performing parameter configuration on the target network node.
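The two factor computations above can be sketched as follows. All parameter names are illustrative, and the grouping of terms follows the description (idle ratios in the numerator, utilisation ratios in the denominator); each utilisation value is assumed to lie strictly between 0 and 1:

```python
def first_acceleration_factor(pat_value, flow_util, pkt_util, slb_req,
                              req_capacity, tcp_conn, tcp_half_open, queue_len):
    """Sketch of the first acceleration factor [pat-value, bw-value, ct-value]."""
    req_util = slb_req / req_capacity
    # bw-value: product of idle ratios over product of utilisation ratios
    bw_value = ((1 - flow_util) * (1 - pkt_util) * (1 - req_util)
                / (flow_util * pkt_util * req_util))
    # ct-value: idle share of the connection queue over its used share,
    # counting each half-open connection as 0.5 of a connection
    used = (tcp_conn + 0.5 * tcp_half_open) / queue_len
    ct_value = (1 - used) / used
    return [pat_value, bw_value, ct_value]

def second_acceleration_factor(s_win, tcp_ret_pkts, tcp_done_pkts, bitrate):
    """Sketch of the second factor [S-win, TCP-ret, Bitrate]^T as a column vector."""
    return [[s_win], [tcp_ret_pkts / tcp_done_pkts], [bitrate]]
```

With every utilisation at exactly 50%, both bw-value and ct-value come out to 1, i.e. the node is neither under- nor over-loaded on that dimension.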
In the foregoing solution, performing parameter configuration on the target network node according to the first network parameter and the acceleration mode further includes:
comparing the size of the sliding window with a first preset threshold value to generate a first comparison result; determining the second network parameter for increasing or decreasing the size of the sliding window according to the first comparison result;
or, comparing the TCP packet retransmission ratio data with a second preset threshold value to generate a second comparison result; determining the second network parameter for adjusting the transmission ratio of the TCP packet according to the second comparison result;
or, detecting whether the transmission rate reaches a preset target transmission rate; generating a detection result; determining the second network parameter for adjusting the transmission rate when determining that the transmission rate reaches a preset target transmission rate according to the detection result;
and performing parameter configuration on the target network node according to the second network parameter.
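The three threshold checks just described can be sketched as below. The threshold values and the returned adjustment labels are placeholders for illustration, not taken from the patent:

```python
def derive_second_network_param(s_win, ret_ratio, rate,
                                win_threshold, ret_threshold, target_rate):
    """Hypothetical sketch of deriving the second network parameter.

    Returns a dict of adjustment directives; the concrete directive strings
    are assumptions.
    """
    params = {}
    # first comparison: sliding window size vs. the first preset threshold
    params["sliding_window"] = "increase" if s_win < win_threshold else "decrease"
    # second comparison: TCP packet retransmission ratio vs. the second threshold
    if ret_ratio > ret_threshold:
        params["retransmission"] = "reduce_sending_rate"
    # detection: has the preset target transmission rate been reached?
    if rate >= target_rate:
        params["transmission_rate"] = "hold"
    return params
```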
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for accelerating data, the apparatus including:
the determining unit is used for determining the bandwidth full load rate of the target network node;
a sending unit, configured to send a notification message for adjusting an acceleration policy to the target network node when the bandwidth full load rate reaches a preset load threshold of the target network node, where the notification message at least includes an acceleration mode.
In the foregoing solution, the determining unit determines the bandwidth full load rate of the target network node by the following formula:

Rv = (a × Tn) / (V × α)

where Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, a represents the value corresponding to the availability of the target network node, V represents the bandwidth capacity peak of the target network node, and α represents the bandwidth availability factor of the target network node.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for accelerating data, the apparatus including:
a receiving unit, configured to receive a notification message sent by a scheduling center for adjusting an acceleration policy, where the notification message at least includes an acceleration mode;
a monitoring unit, configured to monitor a first network parameter of a target network node according to the notification message, where the first network parameter at least includes real-time bandwidth data, network resource utilization data, and TCP connection data of the target network node;
a configuration unit, configured to perform parameter configuration on the target network node according to the first network parameter and the acceleration mode, so as to adjust the acceleration policy.
According to another aspect of the embodiments of the present invention, there is also provided an apparatus for accelerating data, the apparatus including: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor, when executing the computer program, is adapted to perform the steps of the method of any of claims 1 to 5.
According to another aspect of the embodiments of the present invention, there is also provided a computer-readable storage medium on which a computer program is stored, which, when executed by a processor, implements the steps of the method of any one of claims 1 to 5.
According to the data acceleration method, apparatus, and storage medium provided by the embodiments of the present invention, for a schedulable CDN node the scheduling system monitors the node's traffic changes and adjusts the TCP acceleration strategy and algorithm rules according to the monitoring result, thereby improving TCP transmission efficiency and increasing network resource utilization.
Drawings
FIG. 1 is a flow chart illustrating a data acceleration method according to an embodiment of the present invention;
FIG. 2 is a flow chart illustrating another data acceleration method according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a data acceleration apparatus according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of another data accelerator according to an embodiment of the invention;
fig. 5 is a schematic structural diagram of a data acceleration apparatus according to another embodiment of the present invention.
Detailed Description
The following describes in detail embodiments of the present invention with reference to the drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
FIG. 1 is a flow chart illustrating a data acceleration method according to an embodiment of the present invention; as shown in fig. 1, the method includes:
step 101, determining the bandwidth full load rate of a target network node;
in the embodiment of the invention, the method is mainly applied to a scheduling center deployed in a Content Delivery Network (CDN) system architecture, the scheduling center maintains a CDN whole-Network flow load condition table and a scheduling information table, and updates the flow load condition table and the scheduling information table in real time according to flow conditions reported by Network nodes and scheduling conditions of the scheduling center.
Specifically, the scheduling center is configured in advance with combined conditions of two or more level thresholds. Whenever the contents of the traffic load table or the scheduling information table change, the scheduling center checks whether they reach the thresholds of a combined condition; if so, it sends, through a message interface, a notification message for adjusting the acceleration strategy to the target network node whose data rate needs adjusting. The notification message at least includes an acceleration mode.
Specifically, the traffic load table includes:
1. Network node ID: an identification code that uniquely identifies the target network node, for example a Network Identification (NID).
2. Capacity peak (V) of each network node: the bandwidth the node is able to provide, typically the smaller of the node's egress bandwidth and the servers' capacity bandwidth.
3. Bandwidth availability factor (α) of each network node: 0 < α < 1, a parameter determined by factors such as the price of the node's bandwidth and the quality of its bandwidth egress.
4. Current bandwidth (Tn) of each network node: a dynamic value computed by collecting the current bandwidth of each server network card in the node.
5. Availability (a) of each network node: Boolean; 1 when normal, 0 when abnormal.
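Collecting the five fields above, one row of the traffic load table might be modelled as follows; the class and field names are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class TrafficLoadEntry:
    """One row of the CDN-wide traffic load table (field names assumed)."""
    node_id: str    # unique network node ID, e.g. an NID
    v_peak: float   # capacity peak V: min(egress bandwidth, server capacity)
    alpha: float    # bandwidth availability factor, 0 < alpha < 1
    tn: float       # current bandwidth Tn, aggregated from server NICs
    available: int  # availability a: 1 normal, 0 abnormal
```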
In the embodiment of the present invention, the information in the scheduling information table specifically includes:
1. Area ID: identifies the area that this row of the scheduling information table describes, analogous to a process ID (PID) in an operating system.
2. Predicted bandwidth peak (Ve) of the area: the most recent possible bandwidth peak of the area, calculated from its historical data over a certain number of days.
3. Operator ID: identifies the operator to which the area belongs; also called a Group ID (GID), a unique identifier for a user group.
Here, users of the same type are placed in the same group. For example, all system administrators can be put into an admin group, which simplifies assigning permissions: important files can be made readable and writable by all members of the admin group.
Each user has a unique user ID and each user group has a unique group ID.
4. Scheduling node ID (MIDi): the ID(s) of the node(s) scheduled for the area; there may be several.
5. Backup scheduling node ID (BIDj): the backup scheduling node used when the main scheduling node fails.
6. Scheduling weight (Eij): the proportion of traffic planned to be dispatched to node MIDi or BIDj, updated periodically according to parameters such as Ve and Rv.
7. Actual scheduling weight (Pij): the proportion of traffic currently dispatched to node MIDi or BIDj.
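Likewise, one row of the scheduling information table can be sketched as below; the field names are assumptions, and the per-node weights Eij/Pij are kept as lists parallel to the MIDi/BIDj node lists:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SchedulingEntry:
    """One row of the scheduling information table (field names assumed)."""
    area_id: str                 # area this row describes
    ve: float                    # predicted bandwidth peak Ve for the area
    operator_id: str             # operator (GID) the area belongs to
    scheduling_nodes: List[str]  # MIDi: node IDs scheduled for this area
    backup_nodes: List[str]      # BIDj: backups used when a main node fails
    e_weights: List[float]       # Eij: planned scheduling weights
    p_weights: List[float]       # Pij: actual scheduling weights
```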
In the embodiment of the present invention, the scheduling center determines the bandwidth full load rate of the target network node by the following formula:

Rv = (a × Tn) / (V × α)

where Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, a represents the value corresponding to the availability of the target network node, V represents the bandwidth capacity peak of the target network node, and α represents the bandwidth availability factor of the target network node.
And 102, when the bandwidth full load rate reaches a preset load threshold value of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
In the embodiment of the present invention, when traversing the area IDs, the scheduling center performs bandwidth detection for every area whose scheduling strategy involves node N. When the detection result shows Rv = 0 for node N, the node has failed: the scheduling information table is traversed and the node's Pij is set to 0. The actual scheduling weight of node N is then assigned to the other scheduling nodes of the area according to a chosen algorithm (e.g. even assignment); if the Rv value of another scheduling node is already greater than or equal to 1, its share is assigned to the backup scheduling node instead. The adjusted Tc value of each receiving node is incremented by: (weight scheduled to this node) × Ve.
Similarly, when the detection result shows Rv >= 1 for node N, the node is full: the scheduling information table is traversed and the node's Pij is reduced by 10%. The weight removed from node N is distributed to the other scheduling nodes of the area according to a chosen algorithm (e.g. even distribution); if the Rv value of another scheduling node is greater than or equal to 1, its share goes to the backup scheduling node instead. The adjusted Tc value of each receiving node is incremented by: (weight scheduled to this node) × Ve.
In the embodiment of the present invention, load thresholds are defined for the target network node, for example 0.5 for light load and 0.8 for high load. When the Rv value of node N rises or falls across a load threshold, the Pij value of the target network node is adjusted according to the corresponding load condition, the scheduling strategies of the node and its area are adjusted, and the Tc value is increased or decreased by: (weight scheduled to the node) × Ve.
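The redistribution rules above (Pij set to 0 on failure, a 10% reduction when full, and even reassignment to non-full peers) can be sketched as follows; the handling of backup nodes and the Tc bookkeeping are omitted for brevity, and the function name is an assumption:

```python
def adjust_weights(rv, weights, node, rv_values):
    """Sketch of the Pij redistribution for one area.

    weights   -- dict of node_id -> Pij for the area (mutated in place)
    rv_values -- dict of node_id -> current Rv for those nodes
    """
    if rv == 0:                    # node failed: move all of its weight away
        freed = weights[node]
        weights[node] = 0.0
    elif rv >= 1:                  # node full: shed 10% of its weight
        freed = 0.1 * weights[node]
        weights[node] -= freed
    else:
        return weights             # no threshold condition hit
    # redistribute evenly over the other, non-full nodes of the area
    others = [n for n in weights if n != node and rv_values.get(n, 0) < 1]
    for n in others:
        weights[n] += freed / len(others)
    return weights
```

For example, with two nodes at Pij = 0.5 each, a failure of the first moves its entire 0.5 onto the second.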
After the scheduling center has calculated the Rv values of all network nodes, it adjusts the scheduling mode of the target network node. Specifically, after the Tc value is set, an acceleration adjustment parameter Tu = Tc / V / α is defined for each network node; it expresses the degree to which the scheduling adjustment affects the node's bandwidth. An acceleration strategy mode is issued according to the values of Rv and Tu, as shown in Table 1:
TABLE 1

  Rv range          Tu condition       Acceleration mode
  0 <= Rv < 0.5     Tu <= 0.5          aggressive acceleration
  0 <= Rv < 0.5     Tu > 0.5           normal acceleration
  0.5 <= Rv < 0.8   Tu < 0             rapid acceleration
  0.5 <= Rv < 0.8   0 <= Tu <= 0.3     normal acceleration
  0.5 <= Rv < 0.8   Tu > 0.3           congestion prevention
  0.8 <= Rv < 1     Tu < -0.3          aggressive acceleration
  0.8 <= Rv < 1     -0.3 <= Tu < 0     normal acceleration
  0.8 <= Rv < 1     Tu > 0             congestion prevention
Specifically, when the bandwidth full load rate of the target network node satisfies 0 <= Rv < 0.5 and the influence degree on its bandwidth satisfies Tu <= 0.5, the notification message the scheduling center sends to the target network node specifies the aggressive acceleration mode; when Tu > 0.5, the notification message specifies the normal acceleration mode.
When 0.5 <= Rv < 0.8 and Tu < 0, the notification message specifies the rapid acceleration mode; when 0 <= Tu <= 0.3, it specifies the normal acceleration mode; and when Tu > 0.3, it specifies the congestion prevention mode.
When 0.8 <= Rv < 1 and Tu < -0.3, the notification message specifies the aggressive acceleration mode; when -0.3 <= Tu < 0, it specifies the normal acceleration mode; and when Tu > 0, it specifies the congestion prevention mode.
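The Rv/Tu bands described above can be condensed into a small selection function. The treatment of the exact band boundaries and of Rv >= 1 is an assumption, since the source only specifies the open intervals:

```python
def acceleration_mode(rv, tu):
    """Sketch of the Rv/Tu -> acceleration-mode mapping described above."""
    if rv < 0.5:
        return "aggressive" if tu <= 0.5 else "normal"
    if rv < 0.8:
        if tu < 0:
            return "rapid"
        return "normal" if tu <= 0.3 else "congestion_prevention"
    # 0.8 <= Rv (the source describes 0.8 <= Rv < 1)
    if tu < -0.3:
        return "aggressive"
    return "normal" if tu < 0 else "congestion_prevention"
```

A lightly loaded node whose bandwidth is barely affected by scheduling (low Rv, low Tu) is pushed toward aggressive acceleration, while a nearly full node whose bandwidth is about to grow (high Rv, positive Tu) is switched to congestion prevention.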
Fig. 2 is a schematic flow chart of another data acceleration method in an embodiment of the present invention, where the method includes:
step 201, receiving a notification message for adjusting an acceleration strategy sent by a scheduling center, where the notification message at least includes an acceleration mode.
In the embodiment of the present invention, the method is mainly applied to edge network nodes deployed in a CDN system architecture; each edge network node contains one or more servers. The CDN system publishes website content to the edge nodes closest to users, so that users obtain the required content nearby. This improves the response speed and success rate of user access, avoids as far as possible the bottlenecks and links on the Internet that may affect transmission speed and stability, and makes content delivery faster and more stable.
In the embodiment of the present invention, the CDN system architecture further includes a scheduling center, which likewise maintains a CDN-wide traffic load table and a scheduling information table and updates both in real time according to the traffic conditions reported by each edge network node and its own scheduling decisions.
Specifically, the scheduling center is configured in advance with combined conditions of two or more level thresholds. Whenever the contents of the traffic load table or the scheduling information table change, the scheduling center checks whether they reach the thresholds of a combined condition; if so, it sends, through a message interface, a notification message for adjusting the acceleration strategy to the edge network node whose data rate needs adjusting. The notification message at least includes an acceleration mode.
Specifically, the traffic load table includes:
1. Network node ID: an identification code that uniquely identifies the target network node, for example an NID.
2. Capacity peak (V) of each network node: the bandwidth the node is able to provide, typically the smaller of the node's egress bandwidth and the servers' capacity bandwidth.
3. Bandwidth availability factor (α) of each network node: 0 < α < 1, a parameter determined by factors such as the price of the node's bandwidth and the quality of its bandwidth egress.
4. Current bandwidth (Tn) of each network node: a dynamic value computed by collecting the current bandwidth of each server network card in the node.
5. Availability (a) of each network node: Boolean; 1 when normal, 0 when abnormal.
In the embodiment of the present invention, the information in the scheduling information table specifically includes:
1. Area ID: identifies the area that this row of the scheduling information table describes, analogous to a PID in an operating system.
2. Predicted bandwidth peak (Ve) of the area: the most recent possible bandwidth peak of the area, calculated from its historical data over a certain number of days.
3. Operator ID: identifies the operator to which the area belongs; also called a GID, a unique identifier for a user group.
Here, users of the same type are placed in the same group. For example, all system administrators can be put into an admin group, which simplifies assigning permissions: important files can be made readable and writable by all members of the admin group.
Each user has a unique user ID and each user group has a unique group ID.
4. Scheduling node ID (MIDi): the ID(s) of the node(s) scheduled for the area; there may be several.
5. Backup scheduling node ID (BIDj): the backup scheduling node used when the main scheduling node fails.
6. Scheduling weight (Eij): the proportion of traffic planned to be dispatched to node MIDi or BIDj, updated periodically according to parameters such as Ve and Rv.
7. Actual scheduling weight (Pij): the proportion of traffic currently dispatched to node MIDi or BIDj.
In the embodiment of the present invention, receiving a notification message for adjusting an acceleration policy sent by a scheduling center includes:
and when the dispatching center determines that the bandwidth full load rate of the target network node reaches a preset load threshold value of the target network node, receiving the notification message sent by the dispatching center.
In the embodiment of the present invention, the full load rate Rv of each node is calculated every minute from the information in the traffic load table by the following formula:

Rv = (a × Tn) / (V × α)

where Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, a represents the value corresponding to the availability of the target network node, V represents the bandwidth capacity peak of the target network node, and α represents the bandwidth availability factor of the target network node.
In the embodiment of the present invention, when traversing the area IDs, the scheduling center performs bandwidth detection for every area whose scheduling strategy involves node N. When the detection result shows Rv = 0 for node N, the node has failed: the scheduling information table is traversed and the node's Pij is set to 0. The actual scheduling weight of node N is then assigned to the other scheduling nodes of the area according to a chosen algorithm (e.g. even assignment); if the Rv value of another scheduling node is already greater than or equal to 1, its share is assigned to the backup scheduling node instead. The adjusted Tc value of each receiving node is incremented by: (weight scheduled to this node) × Ve.
Similarly, when the detection result shows Rv >= 1 for node N, the node is full: the scheduling information table is traversed and the node's Pij is reduced by 10%. The weight removed from node N is distributed to the other scheduling nodes of the area according to a chosen algorithm (e.g. even distribution); if the Rv value of another scheduling node is greater than or equal to 1, its share goes to the backup scheduling node instead. The adjusted Tc value of each receiving node is incremented by: (weight scheduled to this node) × Ve.
In the embodiment of the present invention, load thresholds are defined for the target network node, for example 0.5 for light load and 0.8 for high load. When the Rv value of node N rises or falls across a load threshold, the Pij value of the target network node is adjusted according to the corresponding load condition, the scheduling strategies of the node and its area are adjusted, and the Tc value is increased or decreased by: (weight scheduled to the node) × Ve.
After the scheduling center calculates the Rv values of all the network nodes, the scheduling mode of the network nodes is adjusted. Specifically, after the Tc value is set, an acceleration adjustment parameter Tu = Tc / V / α is defined for each network node, denoting the degree of influence of the scheduling adjustment on the node bandwidth. An acceleration policy mode is then issued according to the values of Rv and Tu, as specifically shown in Table 1.
Specifically, when the bandwidth full load rate Rv of the target network node satisfies 0 ≤ Rv < 0.5 and the influence degree Tu on the bandwidth of the target network node satisfies Tu < 0.5, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the aggressive acceleration mode; when Tu > 0.5, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the normal acceleration mode.
When the bandwidth full load rate Rv of the target network node satisfies 0.5 ≤ Rv < 0.8 and the influence degree Tu satisfies Tu < 0, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the aggressive acceleration mode; when 0 ≤ Tu ≤ 0.3, the notification message indicates the normal acceleration mode; and when Tu > 0.3, the notification message indicates the congestion prevention mode.
When the bandwidth full load rate Rv of the target network node satisfies 0.8 ≤ Rv < 1 and the influence degree Tu satisfies Tu < -0.3, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the aggressive acceleration mode; when -0.3 ≤ Tu < 0, the notification message indicates the normal acceleration mode; and when Tu > 0, the notification message indicates the congestion prevention mode.
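The threshold table above can be summarized in a small selector. This is a sketch of Table 1 as described in the text; the shortened mode names are paraphrases, and boundary cases the text leaves open (e.g., Tu exactly 0.5, or Rv ≥ 1) are resolved here by assumption.

```python
def acceleration_mode(rv, tu):
    """Map (Rv, Tu) to an acceleration mode per the thresholds above."""
    if 0 <= rv < 0.5:
        return "aggressive" if tu < 0.5 else "normal"
    if 0.5 <= rv < 0.8:
        if tu < 0:
            return "aggressive"
        if tu <= 0.3:
            return "normal"
        return "congestion-prevention"
    if 0.8 <= rv < 1:
        if tu < -0.3:
            return "aggressive"
        if tu < 0:
            return "normal"
        return "congestion-prevention"
    # Rv >= 1: node already full; assume the most conservative mode
    return "congestion-prevention"
```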
Step 202, according to the notification message, monitoring a first network parameter of the target network node, where the first network parameter includes at least one of: real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node.
In this embodiment of the present invention, the real-time bandwidth data includes at least one of the following: the actual throughput traffic of the network card, the actual throughput packet quantity of the network card, and the number of requests scheduled by the load balancing device (SLB) of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the data packet receiving queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP packet and the transmission rate of the current connection of the target network node.
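For illustration, the three groups of monitored values above can be gathered into one record; the class and field names below are assumptions for the sketch, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class FirstNetworkParameter:
    # real-time bandwidth data
    nic_throughput_bps: float      # network card actual throughput traffic
    nic_packets_per_sec: float     # network card actual throughput packets
    slb_requests: int              # number of requests scheduled by the SLB
    # network resource utilization data
    tcp_connections: int
    tcp_half_open: int             # half-open (SYN-received) connections
    recv_queue_len: int            # packet receive queue length
    # TCP connection data (for the current connection)
    sliding_window: int
    congestion_window: int
    retransmit_ratio: float        # retransmitted / completed TCP packets
    bitrate: float                 # current transmission rate
```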
Step 203, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node to adjust the network acceleration policy.
Specifically, each edge node deployed in the CDN system architecture that needs TCP acceleration maintains a self-learning state machine; for each user's TCP connection, there is a state machine that controls its TCP acceleration policy. When the target network node monitors the first network parameter of the target network node according to the notification message, it determines a first acceleration factor and a second acceleration factor according to the first network parameter, and then performs parameter configuration on the target network node according to the first acceleration factor and the second acceleration factor to adjust the acceleration policy.
Specifically, according to the actual throughput traffic of the network card, the actual throughput packet quantity of the network card, the number of requests scheduled by the load balancing device (SLB), the number of TCP connections, the number of TCP half-open connections, and the packet receive queue length, a first acceleration factor for performing parameter configuration on the target network node is calculated by using the following matrix formula:
a first acceleration factor = [ pat-value, bw-value, ct-value ];
wherein pat-value denotes the value corresponding to the acceleration mode; bw-value denotes [(1 − network card traffic utilization) × (1 − network card packet throughput rate) × (1 − SLB request number / designed request capacity)] / [network card traffic utilization × network card packet throughput rate × (SLB request number / designed request capacity)]; and ct-value denotes [1 − (TCP connection number + 0.5 × TCP half-open connection number) / connection queue length] / [(TCP connection number + 0.5 × TCP half-open connection number) / connection queue length];
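A sketch of the bw-value and ct-value terms, assuming the utilization inputs are fractions in (0, 1) so both ratios are well defined; the function and argument names are illustrative.

```python
def first_acceleration_factor(pat_value, nic_util, pkt_util, slb_ratio,
                              tcp_conn, tcp_half_open, queue_len):
    """Row vector [pat-value, bw-value, ct-value].
    nic_util, pkt_util, slb_ratio: utilization fractions in (0, 1).
    bw-value: idle capacity over used capacity across the three resources.
    ct-value: free connection-queue share over occupied share."""
    idle = (1 - nic_util) * (1 - pkt_util) * (1 - slb_ratio)
    busy = nic_util * pkt_util * slb_ratio
    bw_value = idle / busy
    occ = (tcp_conn + 0.5 * tcp_half_open) / queue_len   # queue occupancy
    ct_value = (1 - occ) / occ
    return [pat_value, bw_value, ct_value]
```

At 50% utilization on every resource both ratios equal 1, which matches the intuition that bw-value and ct-value measure headroom relative to load.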
specifically, according to the size of a sliding window, the size of a congestion window, the retransmission ratio of a TCP packet and the transmission rate, a second acceleration factor for parameter configuration of the target network node is calculated through the following matrix formula;
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
wherein S-win denotes the sliding window size, TCP-ret denotes the number of retransmitted TCP packets divided by the number of TCP packets whose transmission completed, Bitrate denotes the transmission rate, and [·]^T denotes transposition.
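A minimal sketch of the second acceleration factor. A plain Python list is used here, since row-versus-column orientation only matters once a concrete matrix product is chosen; the function name is illustrative.

```python
def second_acceleration_factor(s_win, tcp_retrans, tcp_done, bitrate):
    """Column vector [S-win, TCP-ret, Bitrate]^T.
    TCP-ret is retransmitted packets divided by completed packets."""
    return [s_win, tcp_retrans / tcp_done, bitrate]
```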
In the embodiment of the present invention, performing parameter configuration on the target network node according to the first network parameter and the acceleration mode further includes:
comparing the size of the sliding window with a first preset threshold value to generate a first comparison result; determining the second network parameter for increasing or decreasing the size of the sliding window according to the first comparison result; or, comparing the TCP packet retransmission ratio data with a second preset threshold value to generate a second comparison result; determining the second network parameter for adjusting the transmission ratio of the TCP packet according to the second comparison result; or detecting whether the transmission rate reaches a preset target transmission rate; generating a detection result; determining the second network parameter for adjusting the transmission rate when determining that the transmission rate reaches a preset target transmission rate according to the detection result; and performing parameter configuration on the target network node according to the second network parameter.
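One possible reading of the three comparisons above, with illustrative thresholds and step size; the patent does not fix these values or names, so everything below is an assumption for the sketch.

```python
def derive_second_network_parameter(s_win, win_threshold,
                                    ret_ratio, ret_threshold,
                                    bitrate, target_bitrate, step=1024):
    """Return the second network parameter as an adjustment dict.
    Implements the three branches described above: window comparison,
    retransmission-ratio comparison, and target-rate detection."""
    adjust = {}
    if bitrate >= target_bitrate:
        adjust["bitrate"] = bitrate      # target reached: hold rate, stop tuning
        return adjust
    # grow the window while below the threshold, shrink it once above
    adjust["sliding_window"] = s_win + step if s_win < win_threshold else s_win - step
    if ret_ratio > ret_threshold:
        adjust["retransmit_ratio"] = ret_threshold   # cap the retransmission share
    return adjust
```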
For example, when the first acceleration factor indicates that the network is lightly loaded and traffic will not increase significantly in the short term, and the second acceleration factor indicates that the current TCP connection is transmitting normally, the sliding window of the current connection can be enlarged to its maximum, the congestion window can be enlarged to its maximum, the TCP packet retransmission ratio can be raised to its maximum, and so on.
Then, an adjustment parameter C = A × B = [S-win-g, TCP-ret-g, Bitrate-g] is calculated. The sliding window is gradually increased or decreased according to S-win-g (the sliding window size, compared with a set threshold); the TCP packet sending proportion is gradually adjusted according to TCP-ret-g (compared with a set threshold); and Bitrate-g is the target transmission rate; once that rate is reached, the adjustment stops and the next round of acceleration logic is awaited.
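The product C = A × B is left underspecified above: a 1×3 row vector times a 3×1 column vector would yield a scalar, yet C is named with three components [S-win-g, TCP-ret-g, Bitrate-g]. The sketch below therefore assumes an element-wise (Hadamard) product, which is one reading consistent with the three named components.

```python
def adjustment_parameter(first_factor, second_factor):
    """C = A x B, read as an element-wise product (an assumption; the
    patent does not specify the matrix operation)."""
    return [a * b for a, b in zip(first_factor, second_factor)]
```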
In the embodiment of the invention, TCP is a sliding-window protocol: how much data the sending end of a TCP connection may send at a given moment is controlled by a sliding window. The size of the sliding window is determined jointly by two windows. One is the advertised window of the receiving end. This window value is carried in the TCP header and is sent to the sending end along with the ACK packets for the data; it indicates how much space remains in the TCP buffer of the receiving end. The sending end must ensure that the data it sends does not exceed this remaining space, so as to avoid buffer overflow; the advertised window is thus the receiving end's means of flow limitation. During transmission, the size of the advertised window depends on how quickly the receiving process takes data out of the buffer. The other window is the congestion window of the sending end, which the sending end maintains itself; it is not carried in the protocol header, and can therefore be regarded as the window the sending end uses for flow control. The size of the sliding window is the smaller of the advertised window and the congestion window. The right edge of the sliding window moving to the right is called opening the window, and occurs when the receiving process takes data out of the protocol buffer of the receiving end. As the sending end keeps receiving ACK packets for transmitted data, the sliding window keeps closing and opening according to the acknowledgment sequence number and the advertised window size in the ACK packets, which produces the forward sliding of the window.
If the receiving process never takes out data, a zero-window condition occurs: the left and right edges of the sliding window coincide, the window size is 0, and no more data can be sent.
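The rule described above, that the effective send window is the smaller of the advertised window and the congestion window, reduces to a one-line computation:

```python
def effective_send_window(advertised_window, congestion_window):
    """The sender may transmit at most min(advertised, congestion) bytes;
    an advertised window of 0 (zero-window condition) stalls sending entirely."""
    return min(advertised_window, congestion_window)
```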
Here, the bandwidth refers to the "highest data rate" that can be passed from the transmitting end to the receiving end per unit time, and is a hardware limitation. The data transmission rate of the TCP sender and receiver cannot exceed the bandwidth limit between the two points.
By placing node servers at various positions in the network, a layer of intelligent virtual network is formed on top of the existing Internet. The CDN system can redirect a user's request in real time to the service node closest to the user, based on comprehensive information such as network traffic, the connections and load condition of each node, the distance to the user, and the response time. The user thus obtains the required content nearby, congestion on the Internet is relieved, and the response speed of users' access to the website is improved.
Fig. 3 is a schematic structural component diagram of a data acceleration device in an embodiment of the present invention, and as shown in fig. 3, the device includes:
a determining unit 301, configured to determine a bandwidth full rate of a target network node;
a sending unit 302, configured to send a notification message for adjusting an acceleration policy to the target network node when the bandwidth full load reaches a preset load threshold of the target network node, where the notification message at least includes an acceleration mode.
In the embodiment of the present invention, the device may specifically be a scheduling center in a CDN system architecture, where the scheduling center maintains a table of traffic load conditions of the CDN whole network and a table of scheduling information. And updating the traffic load condition table and the scheduling information table in real time according to the traffic condition reported by each network node and the scheduling condition of the scheduling center.
Specifically, in the scheduling center, a combination condition of two or more level thresholds needs to be set in advance, when contents in the traffic load condition table and the scheduling information table change, whether the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition is checked, and when the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition, the sending unit 302 is triggered to send a notification message for adjusting an acceleration policy to a target network node that needs to adjust a data rate in a message interface manner, where the notification message at least includes an acceleration mode.
Specifically, the information in the traffic load table specifically includes:
1. a network node ID, the network node ID being an identification code that uniquely identifies the target network node information. For example, it may be a Network Identification (NID).
2. Capacity peak (V) of each network node: the bandwidth that the network node is capable of providing is identified, typically taking the smaller of the bandwidth of the network node egress and the server capacity bandwidth.
3. Bandwidth availability factor (α) for each network node: 0 < α < 1; a parameter determined by factors such as the bandwidth price and egress bandwidth quality of the network node.
4. Current bandwidth per network node (Tn): and the dynamic value is used for calculating the current bandwidth value of the network node by collecting the current bandwidth value of each server network card in the network node.
5. Availability of each network node (a): boolean type, normal is 1 and abnormal is 0.
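The five fields above can be sketched as a single traffic-load-table record; the class and field names are illustrative, not identifiers from the patent.

```python
from dataclasses import dataclass

@dataclass
class TrafficLoadEntry:
    node_id: str      # NID uniquely identifying the network node
    v_peak: float     # V: capacity peak, min(egress bandwidth, server capacity)
    alpha: float      # bandwidth availability factor, 0 < alpha < 1
    tn: float         # Tn: current bandwidth, aggregated from server NICs
    available: bool   # A: True (1) normal, False (0) abnormal
```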
In the embodiment of the present invention, the information in the scheduling information table specifically includes:
1. Area ID: identifies the area to which the piece of data in the scheduling information table belongs; analogous to a process ID (PID) in an operating system.
2. Bandwidth predicted peak (Ve) for this region: and calculating the latest possible bandwidth peak value of the area according to historical data of the area within a certain number of days.
3. Operator ID: identifies the operator to which the area belongs; also known as a Group ID (GID), a unique identifier used to identify a subscriber group.
Here, users of the same type are placed in the same group; for example, all system administrators can be placed in an admin group, which makes it convenient to assign permissions, such as marking certain important files as readable and writable by all users of the admin group.
Each user has a unique user id and each user group has a unique group id.
4. Scheduling node ID (MIDi): the node ID(s) scheduled for that region may be multiple.
5. Backup scheduling node ID (BIDj): and the backup scheduling node ID when the main scheduling node fails.
6. Scheduling weight (Eij): and identifying the proportion of the flow to be dispatched to the MIDi or BIDj node, and updating periodically according to parameters such as Ve and Rv.
7. Actual scheduling weight (Pij): identifying what proportion of traffic is currently being dispatched to the MIDi or BIDj nodes.
In the embodiment of the present invention, the scheduling center triggers the determining unit 301 to determine the bandwidth full load rate of the target network node according to the following formula:
Rv = (A × Tn) / (V × α)
wherein Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, A represents the value corresponding to the availability of the target network node, V represents the bandwidth capacity peak of the target network node, and α represents the bandwidth availability factor of the target network node.
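A sketch of the full-load-rate computation. Note that the exact formula is inferred here from the symbol definitions (Rv = A × Tn / (V × α), consistent with Rv = 0 flagging a failed node and Rv ≥ 1 a fully loaded one), so it should be read as an assumption.

```python
def bandwidth_full_load_rate(tn, v_peak, alpha, available):
    """Rv = (A * Tn) / (V * alpha).
    available maps to A: 1 when the node is normal, 0 when abnormal,
    so a failed node always yields Rv == 0."""
    a = 1 if available else 0
    return (a * tn) / (v_peak * alpha)
```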
In this embodiment of the present invention, when the determining unit 301 determines that the bandwidth full load rate reaches the preset load threshold of the target network node, the sending unit 302 is triggered to send a notification message for adjusting an acceleration policy to the target network node, where the notification message at least includes an acceleration mode.
Specifically, when the scheduling center traverses the area IDs, bandwidth detection is performed on each area that has a scheduling policy involving node N. When the detection result indicates that Rv = 0 for node N, the determining unit 301 determines that the node is faulty, traverses the scheduling information table, and sets the Pij of the node to 0. The actual scheduling weight of node N is then assigned to the other scheduling nodes in the area according to a certain algorithm (e.g., average assignment), and if the Rv value of another scheduling node is greater than or equal to 1, the weight is assigned to the backup scheduling node instead. The adjusted Tc value of each node (regardless of whether the calculation procedure changes) is incremented by the weight Ve scheduled to that node.
When the scheduling center traverses the area IDs, bandwidth detection is performed on each area that has a scheduling policy involving node N, and when the detection result indicates that Rv >= 1 for node N, the determining unit 301 determines that the node is fully loaded, traverses the scheduling information table, and reduces the Pij of the node by 10%. The weight removed from node N is distributed to the other scheduling nodes in the area according to a certain algorithm (e.g., average distribution), and if the Rv value of another scheduling node is greater than or equal to 1, the weight is assigned to the backup scheduling node instead. The adjusted Tc value of each node is incremented by the weight Ve scheduled to that node.
In the embodiment of the present invention, load thresholds of the target network node are defined, for example, 0.5 for light load and 0.8 for high load. When the Rv value of node N increases or decreases across a load threshold, the Pij value of the target network node is adjusted according to the corresponding load condition, the scheduling policies of the target network node and its corresponding area are adjusted at the same time, and the Tc value is incremented or decremented by the weight Ve scheduled to the node.
The Rv values of all the network nodes are calculated, and the scheduling mode of the network nodes is adjusted. Specifically, after the Tc value is set, an acceleration adjustment parameter Tu = Tc / V / α is defined for each network node, denoting the degree of influence of the scheduling adjustment on the node bandwidth. An acceleration policy mode is then issued according to the values of Rv and Tu, as specifically shown in Table 1.
Specifically, when the bandwidth full load rate Rv of the target network node satisfies 0 ≤ Rv < 0.5 and the influence degree Tu on the bandwidth of the target network node satisfies Tu < 0.5, the notification message for adjusting the acceleration policy sent by the sending unit 302 to the target network node is triggered to indicate the aggressive acceleration mode; when Tu > 0.5, the notification message sent by the sending unit 302 to the target network node is triggered to indicate the normal acceleration mode.
When the bandwidth full load rate Rv of the target network node satisfies 0.5 ≤ Rv < 0.8 and the influence degree Tu satisfies Tu < 0, the notification message for adjusting the acceleration policy sent by the sending unit 302 to the target network node is triggered to indicate the aggressive acceleration mode; when 0 ≤ Tu ≤ 0.3, the notification message is triggered to indicate the normal acceleration mode; and when Tu > 0.3, the notification message is triggered to indicate the congestion prevention mode.
When the bandwidth full load rate Rv of the target network node satisfies 0.8 ≤ Rv < 1 and the influence degree Tu satisfies Tu < -0.3, the notification message for adjusting the acceleration policy sent by the sending unit 302 to the target network node is triggered to indicate the aggressive acceleration mode; when -0.3 ≤ Tu < 0, the notification message is triggered to indicate the normal acceleration mode; and when Tu > 0, the notification message is triggered to indicate the congestion prevention mode.
Fig. 4 is a schematic structural diagram of another data acceleration apparatus according to an embodiment of the present invention, as shown in fig. 4: the device comprises: a receiving unit 401, a monitoring unit 402 and a configuration unit 403;
the receiving unit 401 is configured to receive a notification message sent by a scheduling center for adjusting an acceleration policy, where the notification message at least includes an acceleration mode;
the monitoring unit 402 is configured to monitor a first network parameter of a target network node according to the notification message, where the first network parameter at least includes real-time bandwidth data, network resource utilization data, and TCP connection data of the target network node;
the configuring unit 403 is configured to perform parameter configuration on the target network node according to the first network parameter and the acceleration mode, so as to adjust the acceleration policy.
In the embodiment of the present invention, the device may specifically be an edge network node deployed in a CDN system architecture; each edge network node includes one or more servers. The CDN system distributes website content to the edge nodes closest to users, so that users can obtain the required content nearby; this improves the response speed and success rate of user access, avoids as far as possible the bottlenecks and links on the Internet that may affect data transmission speed and stability, and makes content delivery faster and more stable.
In the embodiment of the present invention, the CDN system architecture further includes a scheduling center, and the scheduling center also maintains a table of traffic load conditions and a table of scheduling information of the CDN entire network, and the scheduling center updates the table of traffic load conditions and the table of scheduling information in real time according to the traffic conditions reported by each edge network node and the scheduling conditions of the scheduling center.
Specifically, in the scheduling center, a combination condition of two or more level thresholds needs to be set in advance, when contents in a traffic load condition table and a scheduling information table change, whether the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition is checked, and when the contents in the traffic load condition table and the scheduling information table reach the threshold of the combination condition, a notification message for adjusting an acceleration policy is sent to an edge network node that needs to adjust a data rate in a message interface manner, where the notification message at least includes an acceleration mode.
The edge network node that needs to adjust the data rate triggers the receiving unit 401 to receive the notification message sent by the dispatch center.
Specifically, the information in the traffic load table specifically includes:
1. a network node ID, the network node ID being an identification code that uniquely identifies the target network node information. For example, it may be referred to as NID.
2. Capacity peak (V) of each network node: the bandwidth that the network node is capable of providing is identified, typically taking the smaller of the bandwidth of the network node egress and the server capacity bandwidth.
3. Bandwidth availability factor (α) for each network node: 0 < α < 1; a parameter determined by factors such as the bandwidth price and egress bandwidth quality of the network node.
4. Current bandwidth per network node (Tn): and the dynamic value is used for calculating the current bandwidth value of the network node by collecting the current bandwidth value of each server network card in the network node.
5. Availability of each network node (a): boolean type, normal is 1 and abnormal is 0.
In the embodiment of the present invention, the information in the scheduling information table specifically includes:
1. Area ID: identifies the area to which the piece of data in the scheduling information table belongs; analogous to a process ID (PID) in an operating system.
2. Bandwidth predicted peak (Ve) for this region: and calculating the latest possible bandwidth peak value of the area according to historical data of the area within a certain number of days.
3. Operator ID: a unique identifier identifying the operator to which the area belongs; also known as a GID, identifying the user group.
Here, users of the same type are placed in the same group; for example, all system administrators can be placed in an admin group, which makes it convenient to assign permissions, such as marking certain important files as readable and writable by all users of the admin group.
Each user has a unique user id and each user group has a unique group id.
4. Scheduling node ID (MIDi): the node ID(s) for this regional schedule.
5. Backup scheduling node ID (BIDj): and the backup scheduling node ID when the main scheduling node fails.
6. Scheduling weight (Eij): and identifying the proportion of the flow to be dispatched to the MIDi or BIDj node, and updating periodically according to parameters such as Ve and Rv.
7. Actual scheduling weight (Pij): identifying what proportion of traffic is currently being scheduled to the MIDi or BIDj nodes.
Specifically, the receiving unit 401 receives the notification message sent by the scheduling center when the scheduling center determines that the bandwidth full load rate of the target network node reaches the preset load threshold of the target network node.
In the embodiment of the invention, the full load rate Rv of each node is calculated every minute from the information in the traffic load table by the following formula:
Rv = (A × Tn) / (V × α)
wherein Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, A represents the value corresponding to the availability of the target network node, V represents the bandwidth capacity peak of the target network node, and α represents the bandwidth availability factor of the target network node.
Specifically, when the scheduling center traverses the area IDs, bandwidth detection is performed on each area that has a scheduling policy involving node N. When the detection result indicates that Rv = 0 for node N, the node is faulty; the scheduling information table is traversed and the Pij of the node is set to 0. The actual scheduling weight of node N is then assigned to the other scheduling nodes in the area according to a certain algorithm (e.g., average assignment), and if the Rv value of another scheduling node is greater than or equal to 1, the weight is assigned to the backup scheduling node instead. The adjusted Tc value of each node (regardless of whether the calculation procedure changes) is incremented by the weight Ve scheduled to that node.
When the scheduling center traverses the area IDs, bandwidth detection is performed on each area that has a scheduling policy involving node N, and when the detection result indicates that Rv >= 1 for node N, the node is fully loaded; the scheduling information table is traversed and the Pij of the node is reduced by 10%. The weight removed from node N is distributed to the other scheduling nodes in the area according to a certain algorithm (e.g., average distribution), and if the Rv value of another scheduling node is greater than or equal to 1, the weight is assigned to the backup scheduling node instead. The adjusted Tc value of each node is incremented by the weight Ve scheduled to that node.
In the embodiment of the present invention, load thresholds of the target network node are defined, for example, 0.5 for light load and 0.8 for high load. When the Rv value of node N increases or decreases across a load threshold, the Pij value of the target network node is adjusted according to the corresponding load condition, the scheduling policies of the target network node and its corresponding area are adjusted at the same time, and the Tc value is incremented or decremented by the weight Ve scheduled to this node.
After the scheduling center calculates the Rv values of all the network nodes, the scheduling mode of the target network node is adjusted. Specifically, after the scheduling center sets the Tc value, an acceleration adjustment parameter Tu = Tc / V / α is defined for each network node, denoting the degree of influence of the scheduling adjustment on the node bandwidth. An acceleration policy mode is then issued according to the values of Rv and Tu, as specifically shown in Table 1.
Specifically, when the bandwidth full load rate Rv of the target network node satisfies 0 ≤ Rv < 0.5 and the influence degree Tu on the bandwidth of the target network node satisfies Tu < 0.5, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the aggressive acceleration mode; when Tu > 0.5, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the normal acceleration mode.
When the bandwidth full load rate Rv of the target network node satisfies 0.5 ≤ Rv < 0.8 and the influence degree Tu satisfies Tu < 0, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the aggressive acceleration mode; when 0 ≤ Tu ≤ 0.3, the notification message indicates the normal acceleration mode; and when Tu > 0.3, the notification message indicates the congestion prevention mode.
When the bandwidth full load rate Rv of the target network node satisfies 0.8 ≤ Rv < 1 and the influence degree Tu satisfies Tu < -0.3, the notification message for adjusting the acceleration policy sent by the scheduling center to the target network node indicates the aggressive acceleration mode; when -0.3 ≤ Tu < 0, the notification message indicates the normal acceleration mode; and when Tu > 0, the notification message indicates the congestion prevention mode.
In this embodiment of the present invention, the real-time bandwidth data includes at least one of the following: the network card actual throughput flow, the network card actual throughput data packet quantity, and the number of requests scheduled by the load balancing device SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the data packet receiving queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP packet and the transmission rate of the current connection of the target network node.
The configuration unit 403 performs parameter configuration on the target network node according to the first network parameter and the acceleration mode so as to adjust the acceleration policy of the network, specifically in the following manner.
Specifically, an edge node that needs TCP acceleration, deployed in the CDN system architecture, maintains a self-learning state machine. For each user's TCP connection, there is a state machine that controls its TCP acceleration policy. When the monitoring unit 402 monitors a first network parameter of the target network node according to the notification message, it triggers the target network node to determine a first acceleration factor and a second acceleration factor according to the first network parameter, and the configuration unit 403 then performs parameter configuration on the target network node according to the first acceleration factor and the second acceleration factor.
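As a rough illustration of the per-connection arrangement just described, the following sketch keeps one state object per TCP connection on the edge node. The state names and the transition trigger are assumptions; the text only states that each connection's acceleration policy is controlled by its own self-learning state machine.

```python
class ConnectionStateMachine:
    """One state machine per user TCP connection (illustrative sketch).
    The mode names mirror the acceleration modes used elsewhere in the text."""
    def __init__(self):
        self.mode = "normal"

    def on_notification(self, mode: str) -> None:
        # A notification from the scheduling center switches the policy.
        self.mode = mode


class EdgeNode:
    """Edge node holding a state machine for each active connection."""
    def __init__(self):
        self.machines = {}

    def machine_for(self, conn_id) -> ConnectionStateMachine:
        # Lazily create one state machine per connection.
        return self.machines.setdefault(conn_id, ConnectionStateMachine())
```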
Specifically, the target network node calculates, according to the network card actual throughput flow, the network card actual throughput data packet quantity, the number of requests scheduled by the load balancing device SLB, the number of TCP connections, the number of TCP half-open connections, and the data packet receiving queue length, a first acceleration factor for performing parameter configuration on the target network node by using the following matrix formula:
first acceleration factor = [pat-value, bw-value, ct-value];
where pat-value denotes the value corresponding to the acceleration mode; bw-value denotes (1 - network card traffic utilization rate) × (1 - network card data packet throughput rate) × (1 - SLB request number / request-number design capacity) / (network card traffic utilization rate × network card data packet throughput rate × (SLB request number / request-number design capacity)); and ct-value denotes (1 - (TCP connection number + TCP half-open connection number × 0.5) / connection-queue length) / ((TCP connection number + TCP half-open connection number × 0.5) / connection-queue length);
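Under the stated formulas, bw-value and ct-value can be computed as below. The argument names are paraphrases of the quantities in the text, with utilization rates and the SLB request ratio expressed as fractions of design capacity.

```python
def bw_value(traffic_util: float, pkt_rate: float, slb_ratio: float) -> float:
    """(1-u)(1-p)(1-s) / (u*p*s), per the bw-value formula above.
    u: network-card traffic utilization rate, p: packet throughput rate,
    s: SLB request number / request-number design capacity."""
    return ((1 - traffic_util) * (1 - pkt_rate) * (1 - slb_ratio)
            / (traffic_util * pkt_rate * slb_ratio))


def ct_value(tcp_conns: int, half_open: int, queue_len: int) -> float:
    """(1 - load) / load, where load = (conns + 0.5 * half-open) / queue length."""
    load = (tcp_conns + 0.5 * half_open) / queue_len
    return (1 - load) / load


def first_acceleration_factor(pat_value, traffic_util, pkt_rate, slb_ratio,
                              tcp_conns, half_open, queue_len):
    # First acceleration factor = [pat-value, bw-value, ct-value]
    return [pat_value,
            bw_value(traffic_util, pkt_rate, slb_ratio),
            ct_value(tcp_conns, half_open, queue_len)]
```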
the target network node calculates a second acceleration factor for parameter configuration of the target network node according to the sliding-window size, the congestion-window size, the TCP packet retransmission proportion, and the transmission rate by using the following matrix formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
where S-win denotes the sliding-window size, TCP-ret denotes the number of retransmitted TCP packets / the number of successfully transmitted TCP packets, Bitrate denotes the transmission rate, and [·]^T denotes the transpose.
In this embodiment of the present invention, performing, by the configuration unit 403, parameter configuration on the target network node according to the first network parameter and the acceleration mode further includes:
the target network node compares the size of the sliding window with a first preset threshold value to generate a first comparison result; determining the second network parameter for increasing or decreasing the size of the sliding window according to the first comparison result; or, comparing the TCP packet retransmission ratio data with a second preset threshold value to generate a second comparison result; determining the second network parameter for adjusting the transmission ratio of the TCP packet according to the second comparison result; or, detecting whether the transmission rate reaches a preset target transmission rate; generating a detection result; determining the second network parameter for adjusting the transmission rate when determining that the transmission rate reaches a preset target transmission rate according to the detection result; and then trigger the configuration unit 403 to perform parameter configuration on the target network node according to the second network parameter.
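The three comparisons above can be sketched as follows. The step directions and the exact meaning of the thresholds are assumptions; the text specifies only that comparisons against preset thresholds determine the second network parameter.

```python
def second_network_parameter(s_win: float, ret_ratio: float, bitrate: float,
                             win_threshold: float, ret_threshold: float,
                             target_bitrate: float) -> dict:
    """Derive second-network-parameter adjustments (illustrative sketch)."""
    return {
        # Grow the sliding window while below the threshold, else shrink it.
        "window_step": "increase" if s_win < win_threshold else "decrease",
        # Throttle the TCP packet sending proportion when retransmissions
        # exceed the threshold, otherwise raise it.
        "ratio_step": "decrease" if ret_ratio > ret_threshold else "increase",
        # Once the target transmission rate is reached, stop adjusting.
        "stop": bitrate >= target_bitrate,
    }
```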
For example, when the first acceleration factor indicates that the network is lightly loaded and traffic will not increase greatly in the short term, and the second acceleration factor indicates that the current TCP connection is in good condition, the sliding-window size of the current connection may be adjusted to the maximum, the congestion-window size may be adjusted to the maximum, the TCP packet retransmission proportion may be adjusted to the maximum, and so on.
Then, an adjustment parameter C is calculated as C = A × B = [S-win-g, TCP-ret-g, Bitrate-g], where A is the first acceleration factor and B is the second acceleration factor. The sliding window is gradually increased or decreased according to S-win-g (the target sliding-window size, compared with a set threshold); the TCP packet sending proportion is gradually adjusted according to TCP-ret-g (compared with a set threshold); and Bitrate-g is the target transmission rate. After that rate is reached, adjustment stops and the next round of acceleration logic is awaited.
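Note that a strict matrix product of a 1×3 row (the first acceleration factor) and a 3×1 column (the second acceleration factor) would yield a scalar, not the 3-vector [S-win-g, TCP-ret-g, Bitrate-g] the text gives for C. One plausible reading, taken here as an assumption, is an element-wise product; the sample values are illustrative only.

```python
# First acceleration factor A and second acceleration factor B
# (illustrative values, not from the text).
A = [0.8, 1.0, 1.0]          # [pat-value, bw-value, ct-value]
B = [65535.0, 0.02, 10.0]    # [S-win, TCP-ret, Bitrate]

# Element-wise product: one plausible reading of C = A x B that
# yields the 3-vector [S-win-g, TCP-ret-g, Bitrate-g] (assumption).
C = [a * b for a, b in zip(A, B)]
s_win_g, tcp_ret_g, bitrate_g = C
```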
In the embodiments of the present invention, TCP is a sliding-window protocol: how much data the sending end of a TCP connection can send at a given moment is controlled by a sliding window. The sliding-window size is in fact determined jointly by two windows. One is the advertised window of the receiving end. This window value is carried in the TCP header and is sent to the sending end along with the ACK packets for the data; it indicates how much free space remains in the receiving end's TCP buffer. The sending end must ensure that the data it sends does not exceed this remaining space, so as to avoid buffer overflow. This window is therefore used by the receiving end for flow limitation, and during transmission the advertised-window size is related to how quickly the receiving process takes data out of the buffer. The other window is the congestion window of the sending end. The sending end maintains this value, and it does not appear in the protocol header. The sliding-window size is the smaller of the advertised window and the congestion window, so the congestion window can also be regarded as the window used by the sending end for flow control. Moving the right edge of the sliding window to the right is called opening the window, and it occurs when the receiving process takes data out of the receiving end's protocol buffer. As the sending end continuously receives ACK packets for the transmitted data, the sliding window keeps closing and opening according to the acknowledgment sequence number and the advertised-window size in the ACK packets, which forms the forward sliding of the window. If the receiving process never takes out data, a zero-window condition occurs: the left and right edges of the sliding window coincide, the window size is 0, and no more data can be sent.
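The key relationship above, that the usable send window is the smaller of the receiver's advertised window and the sender's congestion window, can be stated directly:

```python
def effective_send_window(advertised: int, cwnd: int) -> int:
    """The sender may have at most min(advertised, cwnd) bytes in flight:
    the advertised window is the receiver's flow limit, and the congestion
    window is the sender's own flow-control limit."""
    return min(advertised, cwnd)


def can_send(advertised: int, cwnd: int) -> bool:
    # A zero window (advertised window of 0) stops transmission entirely.
    return effective_send_window(advertised, cwnd) > 0
```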
Here, the bandwidth refers to the highest data rate that can pass from the sending end to the receiving end per unit time, and is a hardware limitation. The data transmission rate between the TCP sender and receiver cannot exceed the bandwidth limit between the two points.
By placing node servers at various positions in the network to form a layer of intelligent virtual network on top of the existing Internet, the CDN system can redirect a user's request in real time to the service node closest to the user, according to comprehensive information such as network traffic, the connection and load conditions of each node, the distance to the user, and response time. In this way the user obtains the required content nearby, Internet congestion is alleviated, and the response speed of the user's access to the website is improved.
According to another embodiment of the present invention, there is also provided an apparatus for accelerating data, the apparatus including: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor is configured to, when running the computer program, determine the bandwidth full load rate of the target network node;
and when the bandwidth full load rate reaches a preset load threshold value of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
The processor is used for determining the bandwidth full load rate of the target network node by the following formula when the computer program is run;
where Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, V represents the peak bandwidth capacity of the target network node, and a represents the value corresponding to the bandwidth availability factor of the target network node.
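The formula referred to above is not reproduced in this text. From the variable definitions, a natural reconstruction (an assumption) is Rv = Tn / (a × V), i.e. current bandwidth over availability-discounted peak capacity, sketched here together with the threshold check that triggers the notification:

```python
def bandwidth_full_load_rate(tn: float, v: float, a: float) -> float:
    """Rv = Tn / (a * V). Reconstructed from the variable definitions;
    the original formula image is not reproduced in the text."""
    return tn / (a * v)


def should_adjust_policy(tn: float, v: float, a: float,
                         load_threshold: float) -> bool:
    # The scheduling center sends the acceleration-policy notification
    # once the full-load rate reaches the node's preset load threshold.
    return bandwidth_full_load_rate(tn, v, a) >= load_threshold
```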
According to another embodiment of the present invention, there is also provided an apparatus for accelerating data, the apparatus including: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor is configured to, when running the computer program, receive a notification message for adjusting an acceleration policy sent by a scheduling center, where the notification message at least includes an acceleration mode;
monitoring a first network parameter of a target network node according to the notification message, wherein the first network parameter at least comprises real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node;
and according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node so as to adjust the acceleration strategy.
When running the computer program, the processor is further configured such that the real-time bandwidth data includes at least one of the following: the network card actual throughput flow, the network card actual throughput data packet quantity, and the number of requests scheduled by the load balancing device SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the data packet receiving queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP (transmission control protocol) packet and the transmission rate of the current connection of the target network node;
correspondingly, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node includes:
calculating a first acceleration factor for parameter configuration of the target network node according to the network card actual throughput flow, the network card actual throughput data packet quantity, the number of requests scheduled by the load balancing device SLB, the number of TCP connections, the number of TCP half-open connections and the length of the data packet receiving queue by using the following matrix formula:
a first acceleration factor = [ pat-value, bw-value, ct-value ];
where pat-value denotes the value corresponding to the acceleration mode; bw-value denotes (1 - network card traffic utilization rate) × (1 - network card data packet throughput rate) × (1 - SLB request number / request-number design capacity) / (network card traffic utilization rate × network card data packet throughput rate × (SLB request number / request-number design capacity)); and ct-value denotes (1 - (TCP connection number + TCP half-open connection number × 0.5) / connection-queue length) / ((TCP connection number + TCP half-open connection number × 0.5) / connection-queue length);
calculating a second acceleration factor for parameter configuration of the target network node according to the sliding-window size, the congestion-window size, the TCP packet retransmission proportion, and the transmission rate by using the following matrix formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
where S-win denotes the sliding-window size, TCP-ret denotes the number of retransmitted TCP packets / the number of successfully transmitted TCP packets, Bitrate denotes the transmission rate, and [·]^T denotes the transpose;
and performing parameter configuration on the target network node according to the first acceleration factor and the second acceleration factor.
The processor is configured to, when the computer program is run, further perform comparison between the size of the sliding window and a first preset threshold, and generate a first comparison result; determining the second network parameter for increasing or decreasing the size of the sliding window according to the first comparison result;
or, comparing the TCP packet retransmission ratio data with a second preset threshold value to generate a second comparison result; determining the second network parameter for adjusting the transmission ratio of the TCP packet according to the second comparison result;
or, detecting whether the transmission rate reaches a preset target transmission rate; generating a detection result; determining the second network parameter for adjusting the transmission rate when determining that the transmission rate reaches a preset target transmission rate according to the detection result;
and performing parameter configuration on the target network node according to the second network parameter.
Fig. 5 is a schematic structural diagram of a data acceleration apparatus according to another embodiment of the present invention, and the data acceleration apparatus 500 may be a mobile phone, a computer, a digital broadcast terminal, an information transceiver device, a game console, a tablet device, a personal digital assistant, an information push server, a content server, and the like. The data acceleration apparatus 500 shown in fig. 5 includes: at least one processor 501, memory 502, at least one network interface 504, and a user interface 503. The various components of the data acceleration device 500 are coupled together by a bus system 505. It is understood that the bus system 505 is used to enable connection communications between these components. The bus system 505 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 505 in FIG. 5.
The user interface 503 may include a display, a keyboard, a mouse, a trackball, a click wheel, a key, a button, a touch pad, a touch screen, or the like, among others.
It will be appreciated that the memory 502 can be either volatile memory or non-volatile memory, and can include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Ferroelectric Random Access Memory (FRAM), a flash memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk memory or tape memory. The volatile memory may be a Random Access Memory (RAM), which serves as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 502 described in the embodiments of the present invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 502 in the embodiments of the present invention is used to store various types of data to support the operation of the data acceleration apparatus 500. Examples of such data include: any computer programs for operating on the data acceleration apparatus 500, such as an operating system 5021 and application programs 5022; music data; animation data; book information; video, and the like. The operating system 5021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application programs 5022 may contain various application programs, such as a Media Player, a Browser, and the like, for implementing various application services. A program implementing the method of the embodiments of the present invention may be included in the application programs 5022.
The method disclosed by the above-mentioned embodiments of the present invention may be applied to the processor 501, or implemented by the processor 501. The processor 501 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or instructions in the form of software in the processor 501. The Processor 501 may be a general purpose Processor, a Digital Signal Processor (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc. Processor 501 may implement or perform the methods, steps, and logic blocks disclosed in embodiments of the present invention. A general purpose processor may be a microprocessor or any conventional processor or the like. The steps of the method disclosed by the embodiment of the invention can be directly implemented by a hardware decoding processor, or can be implemented by combining hardware and software modules in the decoding processor. The software modules may be located in a storage medium that is located in the memory 502, and the processor 501 reads the information in the memory 502 and performs the steps of the aforementioned methods in combination with its hardware.
In an exemplary embodiment, the data acceleration apparatus 500 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), DSPs, Programmable Logic Devices (PLDs), Complex Programmable Logic Devices (CPLDs), Field-Programmable Gate Arrays (FPGAs), general-purpose processors, controllers, Micro-Controller Units (MCUs), microprocessors, or other electronic components, for performing the aforementioned methods.
In an exemplary embodiment, the present invention further provides a computer readable storage medium, such as a memory 502 including a computer program, which can be executed by a processor 501 of a data acceleration apparatus 500 to perform the steps of the foregoing method. The computer readable storage medium can be Memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash Memory, magnetic surface Memory, optical disk, or CD-ROM; or may be a variety of devices including one or any combination of the above memories, such as a mobile phone, computer, tablet device, personal digital assistant, etc.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, performs: determining the bandwidth full load rate of the target network node;
and when the bandwidth full load rate reaches a preset load threshold value of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
The computer program, when executed by the processor, further performs determining a bandwidth full rate of the target network node by:
where Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, V represents the peak bandwidth capacity of the target network node, and a represents the value corresponding to the bandwidth availability factor of the target network node.
According to another embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs: receiving a notification message for adjusting an acceleration policy sent by a scheduling center, where the notification message at least includes an acceleration mode;
monitoring a first network parameter of a target network node according to the notification message, wherein the first network parameter at least comprises real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node;
and according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node to adjust the acceleration strategy.
The computer program, when executed by the processor, further performs processing in which the real-time bandwidth data includes at least one of: the network card actual throughput flow, the network card actual throughput data packet quantity, and the number of requests scheduled by the load balancing device SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the data packet receiving queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP packet and the transmission rate of the current connection of the target network node;
correspondingly, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node includes:
calculating a first acceleration factor for parameter configuration of the target network node according to the network card actual throughput flow, the network card actual throughput data packet quantity, the number of requests scheduled by the load balancing device SLB, the number of TCP connections, the number of TCP half-open connections and the length of the data packet receiving queue by using the following matrix formula:
a first acceleration factor = [ pat-value, bw-value, ct-value ];
where pat-value denotes the value corresponding to the acceleration mode; bw-value denotes (1 - network card traffic utilization rate) × (1 - network card data packet throughput rate) × (1 - SLB request number / request-number design capacity) / (network card traffic utilization rate × network card data packet throughput rate × (SLB request number / request-number design capacity)); and ct-value denotes (1 - (TCP connection number + TCP half-open connection number × 0.5) / connection-queue length) / ((TCP connection number + TCP half-open connection number × 0.5) / connection-queue length);
calculating a second acceleration factor for parameter configuration of the target network node according to the sliding-window size, the congestion-window size, the TCP packet retransmission proportion, and the transmission rate by using the following matrix formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
where S-win denotes the sliding-window size, TCP-ret denotes the number of retransmitted TCP packets / the number of successfully transmitted TCP packets, Bitrate denotes the transmission rate, and [·]^T denotes the transpose;
and according to the first acceleration factor and the second acceleration factor, performing parameter configuration on the target network node.
When the computer program is run by the processor, the size of the sliding window is compared with a first preset threshold value to generate a first comparison result; determining the second network parameter for increasing or decreasing the size of the sliding window according to the first comparison result;
or, comparing the TCP packet retransmission ratio data with a second preset threshold value to generate a second comparison result; determining the second network parameter for adjusting the transmission ratio of the TCP packet according to the second comparison result;
or, detecting whether the transmission rate reaches a preset target transmission rate; generating a detection result; determining the second network parameter for adjusting the transmission rate when determining that the transmission rate reaches a preset target transmission rate according to the detection result;
and performing parameter configuration on the target network node according to the second network parameter.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention.

Claims (10)

1. A method for accelerating data, the method comprising:
determining the bandwidth full load rate of a target network node;
and when the bandwidth full load rate reaches a preset load threshold value of the target network node, sending a notification message for adjusting an acceleration strategy to the target network node, wherein the notification message at least comprises an acceleration mode.
2. The method of claim 1, wherein the bandwidth full rate of the target network node is determined by the following formula:
where Rv represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, V represents the peak bandwidth capacity of the target network node, and a represents the value corresponding to the bandwidth availability factor of the target network node.
3. A method for accelerating data, the method comprising:
receiving a notification message for adjusting an acceleration strategy, which is sent by a dispatching center, wherein the notification message at least comprises an acceleration mode;
monitoring a first network parameter of a target network node according to the notification message, wherein the first network parameter at least comprises real-time bandwidth data, network resource utilization data and Transmission Control Protocol (TCP) connection data of the target network node;
and according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node so as to adjust the acceleration strategy.
4. The method of claim 3, wherein the real-time bandwidth data comprises at least one of: network card actual throughput flow, network card actual throughput data packet quantity and request number scheduled by load balancing equipment SLB of the target network node;
the network resource utilization data includes at least one of: the TCP connection number, the TCP half-open connection number and the data packet receiving queue length of the target network node;
the TCP connection data includes at least one of: the size of a sliding window, the size of a congestion window, the retransmission proportion of a TCP (transmission control protocol) packet and the transmission rate of the current connection of the target network node;
correspondingly, according to the first network parameter and the acceleration mode, performing parameter configuration on the target network node includes:
calculating a first acceleration factor for parameter configuration of the target network node according to the network card actual throughput flow, the network card actual throughput data packet quantity, the number of requests scheduled by the load balancing device SLB, the number of TCP connections, the number of TCP half-open connections and the length of the data packet receiving queue by using the following matrix formula:
a first acceleration factor = [ pat-value, bw-value, ct-value ];
where pat-value denotes the value corresponding to the acceleration mode; bw-value denotes (1 - network card traffic utilization rate) × (1 - network card data packet throughput rate) × (1 - SLB request number / request-number design capacity) / (network card traffic utilization rate × network card data packet throughput rate × (SLB request number / request-number design capacity)); and ct-value denotes (1 - (TCP connection number + TCP half-open connection number × 0.5) / connection-queue length) / ((TCP connection number + TCP half-open connection number × 0.5) / connection-queue length);
calculating a second acceleration factor for parameter configuration of the target network node from the sliding window size, the congestion window size, the TCP packet retransmission proportion and the transmission rate, using the following matrix formula:
second acceleration factor = [S-win, TCP-ret, Bitrate]^T;
wherein S-win represents the sliding window size, TCP-ret represents the number of retransmitted TCP packets divided by the number of TCP packets transmitted successfully, Bitrate represents the transmission rate, and [·]^T denotes transposition;
and according to the first acceleration factor and the second acceleration factor, performing parameter configuration on the target network node.
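The two factor computations above can be sketched in code. The following is a minimal illustrative sketch, not the patented implementation; all function and parameter names are hypothetical, and the utilization and throughput-rate inputs are assumed to be normalized, non-zero ratios in (0, 1]:

```python
def first_acceleration_factor(pat_value, nic_util, pkt_rate, slb_reqs,
                              req_capacity, tcp_conns, half_open, queue_len):
    # Sketch of the claim-4 first acceleration factor [pat, bw, ct].
    # nic_util and pkt_rate are assumed normalized to (0, 1]; slb_reqs is
    # assumed below req_capacity, so no division by zero occurs.
    load = slb_reqs / req_capacity
    bw = ((1 - nic_util) * (1 - pkt_rate) * (1 - load)) / (nic_util * pkt_rate * load)
    # Half-open connections are weighted at 0.5 against the connection queue.
    occ = (tcp_conns + half_open * 0.5) / queue_len
    ct = (1 - occ) / occ
    return [pat_value, bw, ct]

def second_acceleration_factor(s_win, tcp_retrans, tcp_done, bitrate):
    # Claim-4 second acceleration factor [S-win, TCP-ret, Bitrate]^T,
    # represented as a column (list of one-element rows).
    return [[s_win], [tcp_retrans / tcp_done], [bitrate]]
```

Both factors are headroom-style ratios: bw-value and ct-value grow large when the node is lightly loaded and approach zero as it saturates.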
5. The method of claim 4, wherein performing parameter configuration on the target network node according to the first network parameter and the acceleration mode further comprises:
comparing the size of the sliding window with a first preset threshold value to generate a first comparison result; determining the second network parameter for increasing or decreasing the size of the sliding window according to the first comparison result;
or, comparing the TCP packet retransmission ratio data with a second preset threshold value to generate a second comparison result; determining the second network parameter for adjusting the sending proportion of the TCP packets according to the second comparison result;
or, detecting whether the transmission rate reaches a preset target transmission rate to generate a detection result; and determining, when the detection result indicates that the transmission rate reaches the preset target transmission rate, the second network parameter for adjusting the transmission rate;
and performing parameter configuration on the target network node according to the second network parameter.
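The threshold comparisons in claim 5 amount to a simple step adjustment around a preset threshold. A minimal sketch of the sliding-window branch, with hypothetical threshold and step values (the patent does not specify them):

```python
def adjust_sliding_window(s_win, threshold, step=1024):
    # Grow the window when below the preset threshold, shrink it when above;
    # threshold and step are illustrative values, not from the patent.
    return s_win + step if s_win < threshold else s_win - step
```

The retransmission-proportion and transmission-rate branches would follow the same compare-then-adjust shape against their own preset thresholds.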
6. An apparatus for accelerating data, the apparatus comprising:
a determining unit, configured to determine the bandwidth full load rate of a target network node;
a sending unit, configured to send a notification message for adjusting an acceleration policy to the target network node when the bandwidth full load rate reaches a preset load threshold of the target network node, where the notification message at least includes an acceleration mode.
7. The apparatus according to claim 6, wherein the determining unit determines the bandwidth full load rate of the target network node by using the following formula:
RV = Tn / (V × a)
wherein RV represents the bandwidth full load rate, Tn represents the current bandwidth of the target network node, V represents the bandwidth capacity peak value of the target network node, and a represents the value corresponding to the bandwidth availability factor of the target network node.
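From the variable definitions in claim 7, the bandwidth full load rate is the current bandwidth divided by the peak capacity scaled by the availability factor. A one-line sketch (the formula shape is inferred from the stated variables, as the original formula image is not reproduced in the text):

```python
def bandwidth_full_load_rate(tn, v, a):
    # RV = Tn / (V * a): current bandwidth over effective capacity
    # (capacity peak V scaled by the availability factor a).
    return tn / (v * a)
```

Per claim 6, when this rate reaches the node's preset load threshold, the notification message for adjusting the acceleration policy is sent.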
8. An apparatus for accelerating data, the apparatus comprising:
a receiving unit, configured to receive a notification message sent by a scheduling center for adjusting an acceleration policy, where the notification message at least includes an acceleration mode;
a monitoring unit, configured to monitor a first network parameter of a target network node according to the notification message, where the first network parameter at least includes real-time bandwidth data, network resource utilization data, and TCP connection data of the target network node;
a configuration unit, configured to perform parameter configuration on the target network node according to the first network parameter and the acceleration mode, so as to adjust the acceleration policy.
9. An apparatus for accelerating data, the apparatus comprising: a memory and a processor;
wherein the memory is to store a computer program operable on the processor;
the processor, when executing the computer program, is adapted to perform the steps of the method of any of claims 1 to 5.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
CN201710931628.7A 2017-10-09 2017-10-09 Data acceleration method and device and storage medium Active CN107786371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710931628.7A CN107786371B (en) 2017-10-09 2017-10-09 Data acceleration method and device and storage medium


Publications (2)

Publication Number Publication Date
CN107786371A true CN107786371A (en) 2018-03-09
CN107786371B CN107786371B (en) 2021-06-29

Family

ID=61434218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710931628.7A Active CN107786371B (en) 2017-10-09 2017-10-09 Data acceleration method and device and storage medium

Country Status (1)

Country Link
CN (1) CN107786371B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101668005A (en) * 2009-09-25 2010-03-10 东南大学 Data transmission accelerating engine method based on multiple access passages of transmitting end
CN102104908A (en) * 2011-01-18 2011-06-22 华为技术有限公司 Data transmission control method and equipment
CN102546832A (en) * 2012-02-29 2012-07-04 北京快网科技有限公司 Message transmission method based on transmission control protocol (TCP)
CN102891804A (en) * 2012-10-16 2013-01-23 中兴通讯股份有限公司南京分公司 Control strategy adjustment method and system
CN103391585A (en) * 2012-05-07 2013-11-13 中兴通讯股份有限公司 Bandwidth regulating method and device
US20170163542A1 (en) * 2011-09-28 2017-06-08 Open Text Sa Ulc System and method for data transfer, including protocols for use in data transfer
CN107172179A (en) * 2017-06-05 2017-09-15 网宿科技股份有限公司 A kind of bilateral acceleration transmission method and system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113098782A (en) * 2021-03-22 2021-07-09 武汉大学 Network congestion control method and computer equipment
CN113391985A (en) * 2021-06-09 2021-09-14 北京猿力未来科技有限公司 Resource allocation method and device
CN114500663A (en) * 2021-12-28 2022-05-13 网宿科技股份有限公司 Scheduling method, device, equipment and storage medium for content distribution network equipment
CN114500663B (en) * 2021-12-28 2024-04-12 网宿科技股份有限公司 Scheduling method, device, equipment and storage medium of content distribution network equipment

Also Published As

Publication number Publication date
CN107786371B (en) 2021-06-29

Similar Documents

Publication Publication Date Title
US7388839B2 (en) Methods, apparatus and computer programs for managing performance and resource utilization within cluster-based systems
EP1892882A1 (en) Method of providing QoS for a flow
EP3264723B1 (en) Method, related apparatus and system for processing service request
CN107786371B (en) Data acceleration method and device and storage medium
WO2009138000A1 (en) Method, device and system for controlling network flow
US11258717B2 (en) Method for sending service packet, network device, and system
KR20120082422A (en) Non-blocking admission control
Abu et al. Interest packets retransmission in lossy CCN networks and its impact on network performance
CN109660467B (en) Method and apparatus for controlling flow
US20100180033A1 (en) Hybrid Server Overload Control Scheme for Maximizing Server Throughput
JP4700290B2 (en) Method for flow control in a communication system
US8069236B2 (en) Flow control of events based on threshold, grace period, and event signature
CN107800574B (en) Storage QOS adjusting method, system, equipment and computer readable memory
US20100008377A1 (en) Queue management based on message age
EP3306866A1 (en) Message processing method, device and system
CN102916906B (en) One realizes the adaptive method of application performance, Apparatus and system
WO2021083160A1 (en) Data transmission method and apparatus
JP2023511918A (en) SERVICE LEVEL ADJUSTMENT METHOD AND APPARATUS, DEVICE AND STORAGE MEDIUM
US8725868B2 (en) Interactive service management
Zhang et al. MoWIE: toward systematic, adaptive network information exposure as an enabling technique for cloud-based applications over 5G and beyond
Zhu et al. A congestion-aware and robust multicast protocol in SDN-based data center networks
CN110858844A (en) Service request processing method, control method, device, system and electronic equipment
WO2020036079A1 (en) Network control device, network control method, and program
US7478158B1 (en) Bandwidth management system
CN113765796A (en) Flow forwarding control method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant