CN112822051B - Service acceleration method based on service perception - Google Patents


Info

Publication number
CN112822051B
CN112822051B (application CN202110010759.8A)
Authority
CN
China
Prior art keywords
acceleration
service
transmission
service acceleration
network
Prior art date
Legal status
Active
Application number
CN202110010759.8A
Other languages
Chinese (zh)
Other versions
CN112822051A (en)
Inventor
张锋
Current Assignee
Guiyang Xunyou Network Technology Co ltd
Original Assignee
Guiyang Xunyou Network Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guiyang Xunyou Network Technology Co ltd filed Critical Guiyang Xunyou Network Technology Co ltd
Priority to CN202110010759.8A priority Critical patent/CN112822051B/en
Publication of CN112822051A publication Critical patent/CN112822051A/en
Application granted granted Critical
Publication of CN112822051B publication Critical patent/CN112822051B/en
Legal status: Active (granted)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L 41/083 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability, for increasing network speed
    • H04L 41/0893 Assignment of logical groups to network elements
    • H04L 41/50 Network service management, e.g. ensuring proper service fulfilment according to agreements
    • H04L 41/5003 Managing SLA; Interaction between SLA and QoS
    • H04L 41/5019 Ensuring fulfilment of SLA
    • H04L 41/5022 Ensuring fulfilment of SLA by giving priorities, e.g. assigning classes of service
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]

Abstract

The invention provides a service acceleration method based on service awareness, which comprises the following steps: S100, receiving a service acceleration request from an enterprise; S200, automatically sensing the type of service for which the enterprise requests acceleration, the service types including video conferencing and high-capacity file transfer; S300, determining and implementing a corresponding acceleration strategy according to the service type, the acceleration strategies comprising a synchronous transmission acceleration strategy and a transit acceleration strategy via acceleration nodes, where video conferencing uses the synchronous transmission acceleration strategy and high-capacity file transfer uses the transit acceleration strategy via acceleration nodes. The invention automatically senses the type of service for which an enterprise requests acceleration and determines and implements a targeted acceleration strategy for that type, which reduces the likelihood of video-conference stutter and/or delay caused by Internet fluctuation, prevents incomplete file transfers caused by the network during high-capacity file transfer, and improves the enterprise's service-processing efficiency.

Description

Service acceleration method based on service perception
Technical Field
The invention relates to the technical field of business auxiliary management of enterprises, in particular to a business acceleration method based on business perception.
Background
In enterprise management there may be a large number of enterprise services, including video conferencing and high-capacity file transfer. These services are carried directly over the Internet, but the Internet suffers from network fluctuation, so video conferences are prone to stutter and delay, and high-capacity file transfers may be left incomplete because of the network.
Disclosure of Invention
In order to solve the technical problem, the invention provides a service acceleration method based on service perception, which comprises the following steps:
s100, receiving a business acceleration request of an enterprise;
s200, automatically sensing the service type of an enterprise requesting acceleration, wherein the service type comprises a video conference and high-capacity file transmission;
s300, according to the service type, determining and implementing a corresponding acceleration strategy, wherein the acceleration strategy comprises a synchronous transmission acceleration strategy and a transit acceleration strategy passing through an acceleration node, the video conference adopts the synchronous transmission acceleration strategy, and the high-capacity file transmission adopts the transit acceleration strategy passing through the acceleration node.
Optionally, in step S300, a priority level is set in the acceleration policy, and the priority level is preset according to a business condition of an enterprise.
Optionally, when there are multiple service acceleration requests of the same service type, classifying the service acceleration requests according to priority levels from high to low, and then sequencing the service acceleration requests of each priority level according to the sequence of application time;
the order of the acceleration strategy is: first, service acceleration is performed, in the above order, on the service acceleration requests of the highest priority level; after all service acceleration requests of the highest priority level have been processed, service acceleration is performed, in order, on the requests of the next priority level, and so on for each subsequent priority level;
if a high-priority service acceleration request is received in the process of executing the acceleration of a low-priority service acceleration request, the low-priority service acceleration is immediately suspended, and the service acceleration is executed on the newly received high-priority service acceleration request.
Optionally, a service acceleration manager is set in a switch connected between an internal network of an enterprise and an external network, the service acceleration manager is internally provided with a plurality of service acceleration priority blocks, all connection terminals inside the enterprise are brought into the corresponding service acceleration priority blocks according to priority, each service acceleration priority block is internally provided with a sorting table, the connection terminals sending service acceleration requests are arranged in front of the sorting table according to the sequence of application time, and the connection terminals not sending service acceleration requests are arranged behind the sorting table.
Optionally, in step S100, a service acceleration request from the enterprise client is received by using a virtual private network VPN;
in the step S200, after identifying the service type requested to be accelerated from the service acceleration request, the service type is imported to a service acceleration server and connected to a plurality of preset parallel network transmission lines;
in step S300, querying a transmission line matching the service acceleration request by using a preset service server database, including: searching target service server information of a service acceleration request in a service server database; determining a transmission line matched with a service acceleration request according to the target service server information;
the service server database stores server information of various network services and corresponding transmission line information;
and establishing a data connection path from the enterprise client to a target service server of the service acceleration request through the inquired transmission line so as to transmit the service data of the service acceleration request.
Optionally, in step S300, the acceleration strategy is implemented by a convolutional neural network, a layer of the convolutional neural network is split into at least two subtasks, and each subtask is matched with a convolutional kernel; the convolution kernels are connected in series, so that the corresponding service of the service acceleration request is transmitted in series between the convolution kernels;
performing a first predetermined number of vector dot product operations in parallel based on each convolution kernel, each vector dot product operation comprising a second predetermined number of multiplication operations; the product of the first predetermined number and the second predetermined number is the number of multiplier-adders in a convolution kernel;
and outputting the vector dot product operation result of each convolution kernel according to the output priority sequence.
Optionally, in step S300, the synchronous transmission acceleration policy includes:
the method comprises the steps that an enterprise client and a network server construct a bilateral accelerated transmission protocol based on a user datagram protocol, and the bilateral accelerated transmission protocol is provided with different initial transmission windows according to the network type of the enterprise client;
the enterprise client acquires a universal unique identification code from the network server through registration;
the network server sets an initial transmission window and other protocol parameters according to the network type of the enterprise client;
the enterprise client and the network server perform data transmission based on the bilateral acceleration transmission protocol, including:
the enterprise client sends a service acceleration request to the network server, wherein the service acceleration request comprises the universal unique identification code and feeds back information to the network server in the service acceleration process;
and the network server receives the service acceleration request containing the universal unique identification code, and performs service acceleration containing the universal unique identification code with the enterprise client through an initial transmission window and other protocol parameters.
Optionally, in step S300, the transit acceleration policy passing through the acceleration node includes optimal path selection, transmission data optimization, and a private protocol;
the optimal path selection comprises the steps of sorting according to the path length from short to long, calculating the average performance of each transfer node in the first five sorted paths, and selecting the path with the highest average performance as the optimal path;
the transmission data optimization comprises pruning the service data of the service acceleration request and compressing and transmitting the pruned service data;
the private protocol comprises a private protocol formed by optimizing a general network protocol, and the private protocol is adopted at each transit node of the optimal path.
Optionally, each node in the transit acceleration strategy performs service acceleration through zero copy, file caching and digest checking, wherein:
zero copy includes loading a high-capacity file from the kernel-space disk buffer directly into the kernel-space Socket buffer, or loading it from the kernel-space Socket buffer into the kernel-space disk buffer, without copying it through user space;
the file caching comprises the steps of establishing caching according to file information of a high-capacity file, wherein the caching comprises one or more sequence blocks, each sequence block comprises a sequence linked list, the sequence linked lists are used for storing copies, the copies are arranged according to the weight sequence of the high-capacity file, and the weights are obtained through the following formulas:
[Weight formula — rendered only as an image in the original]
where D denotes the weight of the high-capacity file; K1, K2 and K3 are coefficients, with K1 and K2 ranging from 1.5 to 2 and K3 from 2.5 to 3; t1 is the most recent write time of the file on the sequential linked list, t3 is its most recent read time, t2 is its initial creation time, and G is the size of the file on the sequential linked list;
the summary verification comprises the steps of extracting a characteristic section of the high-capacity file, wherein the characteristic section comprises a head section, a tail section and one or more randomly selected sections, performing MD5 verification on the characteristic section, and performing XOR summation on the verified characteristic section to obtain an information summary;
when the high-capacity file is read and written, the information abstract of the high-capacity file is obtained through abstract verification and is compared with the information abstract of the file in the cache; if the high-capacity file has the same copy in the cache, updating cache hit information of the copy, refreshing modification time of the copy, updating the weight of the copy, adjusting the link position of the copy in the sequence linked list according to the weight of the updated copy, and transmitting the copy through zero copy; if the high-capacity file does not have the same copy in the cache, establishing the copy of the high-capacity file in the cache, updating the weight of the copy, and transmitting the copy through zero copy.
Optionally, the network server remotely obtains, according to the service acceleration request, the configuration data of the participating clients corresponding to the request, where the configuration data include the I/O ports, disk, CPU, memory and network switch, and assigns a value and a weight to each item of configuration data;
the network server establishes, according to the service acceleration request, a transmission channel between each pair of participating clients that exchange data, determines the ideal transmission speed of each transmission channel, and calculates the execution transmission speed of each transmission channel with the following correction formula:
[Correction formula for the execution transmission speed — rendered only as an image in the original]
where V′τ denotes the execution transmission speed of the τ-th transmission channel; ω denotes the number of items of client configuration data; γk denotes the weight of the k-th item of configuration data; Wτk1 and Wτk2 denote the values assigned to the k-th item of configuration data of the participating clients at the two ends of the τ-th transmission channel, each assigned in the range 0–1, with the best-configured client assigned 1 for that item and the others decreasing according to their configuration; and Vτ denotes the ideal transmission speed of the τ-th transmission channel;
and performing corresponding service acceleration processing according to the execution transmission speed obtained by calculation.
The service acceleration method based on service awareness of the invention automatically senses the type of service for which an enterprise requests acceleration and determines and implements a targeted acceleration strategy according to that type, reducing the likelihood of video-conference stutter and/or delay caused by Internet fluctuation, preventing incomplete file transfers caused by the network during high-capacity file transfer, and improving the enterprise's service-processing efficiency.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a schematic flow diagram of a service acceleration method based on service awareness in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
As shown in fig. 1, an embodiment of the present invention provides a service acceleration method based on service awareness, including the following steps:
s100, receiving a business acceleration request of an enterprise;
s200, automatically sensing the service type of the enterprise requesting acceleration, wherein the service type comprises a video conference and high-capacity file transmission;
s300, according to the service type, determining and implementing a corresponding acceleration strategy, wherein the acceleration strategy comprises a synchronous transmission acceleration strategy and a transit acceleration strategy passing through an acceleration node, the video conference adopts the synchronous transmission acceleration strategy, and the high-capacity file transmission adopts the transit acceleration strategy passing through the acceleration node.
The working principle and beneficial effects of this technical solution are as follows: the solution automatically senses the type of service for which the enterprise requests acceleration and determines and implements a corresponding acceleration strategy according to that type, which reduces the likelihood of video-conference stutter and/or delay caused by Internet fluctuation, prevents incomplete file transfers caused by the network during high-capacity file transfer, and improves the enterprise's service-processing efficiency.
In one embodiment, in step S300, the acceleration policy includes a priority level, and the priority level is preset according to a business situation of the enterprise.
The working principle and beneficial effects of this technical solution are as follows: the priority levels for the enterprise's service acceleration are divided according to the business situation of the enterprise. For example, acceleration levels can be set in the order boss's services, managers' services, ordinary salespeople's services, so that the acceleration strategy has clear emphasis and hierarchy: in a typical enterprise the boss's services are more important than anyone else's, a manager's services are more important than an ordinary salesperson's, and an ordinary salesperson means an ordinary employee who handles a specific related service. In this way, important services are guaranteed first.
In one embodiment, when a plurality of service acceleration requests of the same service type exist, the service acceleration requests are classified from high to low according to priority levels, and then the service acceleration requests of each priority level are respectively sequenced according to the sequence of application time;
the order of the acceleration strategy is: first, service acceleration is performed, in the above order, on the service acceleration requests of the highest priority level; after all service acceleration requests of the highest priority level have been processed, service acceleration is performed, in order, on the requests of the next priority level, and so on for each subsequent priority level;
if a high-priority service acceleration request is received in the process of executing the acceleration of a low-priority service acceleration request, the low-priority service acceleration is immediately suspended, and the service acceleration is executed on the newly received high-priority service acceleration request.
The working principle and beneficial effects of this technical solution are as follows: this solution handles the case where there are multiple service acceleration requests of the same service type. The requests are first classified by priority level from high to low, and within each level they are ordered by application time; higher-priority requests are accelerated first, and requests of equal priority are accelerated in the order in which they were made. This resolves conflicts among multiple accelerations of the same type and ensures that every service is handled effectively.
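As an illustrative sketch only (not part of the patent), the following Python models this scheduling behaviour: requests are queued by (priority level, application time), and a newly received higher-priority request suspends and requeues the lower-priority acceleration currently in progress. All class and method names are hypothetical.

```python
import heapq
import itertools
from dataclasses import dataclass, field

@dataclass(order=True)
class AccelRequest:
    priority: int                        # lower number = higher priority level
    apply_time: float                    # earlier requests of equal priority run first
    seq: int = 0                         # tie-breaker for identical times
    name: str = field(compare=False, default="")

class AccelScheduler:
    """Serve requests by priority, then application time; preempt on arrival of higher priority."""

    def __init__(self):
        self._queue = []
        self._counter = itertools.count()
        self.running = None

    def submit(self, name, priority, apply_time):
        req = AccelRequest(priority, apply_time, next(self._counter), name)
        # A newly received higher-priority request suspends the running lower-priority one.
        if self.running is not None and req.priority < self.running.priority:
            print(f"suspending {self.running.name} for {name}")
            heapq.heappush(self._queue, self.running)
            self.running = None
        heapq.heappush(self._queue, req)

    def step(self):
        """Start (or resume) the highest-priority, earliest-submitted pending request."""
        if self.running is None and self._queue:
            self.running = heapq.heappop(self._queue)
            print(f"accelerating {self.running.name}")
        return self.running

sched = AccelScheduler()
sched.submit("salesman file transfer", priority=2, apply_time=1.0)
sched.step()
sched.submit("boss video conference", priority=0, apply_time=2.0)   # preempts the running request
sched.step()
```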
In one embodiment, a service acceleration manager is arranged in a switch connected between an internal network of an enterprise and an external network, the service acceleration manager is internally provided with a plurality of service acceleration priority level blocks, all connection terminals in the enterprise are brought into the corresponding service acceleration priority level blocks according to priority levels, each service acceleration priority level block is internally provided with a sorting table, the connection terminals sending service acceleration requests are arranged in front of the sorting table according to the sequence of application time, and the connection terminals not sending service acceleration requests are arranged behind the sorting table.
The working principle and beneficial effects of this technical solution are as follows: a service acceleration manager is arranged in the switch connecting the internal and external networks to manage the enterprise's service acceleration in an orderly way. Several service acceleration priority blocks are set inside the manager, and the priority level of every connection terminal in the enterprise is defined in advance, so that a separate priority-level decision is not needed for every service acceleration request from the same terminal; this reduces the processing load and speeds up processing.
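A minimal sketch of such a manager, assuming one sorting table per priority block in which requesting terminals sit in front (ordered by application time) and idle terminals behind; the data layout and names are illustrative assumptions, not the patent's implementation.

```python
from collections import defaultdict

class ServiceAccelerationManager:
    """In-switch manager sketch: one sorting table per priority block; terminals
    that have issued acceleration requests sit at the front of their block's
    table (ordered by application time), idle terminals sit behind."""

    def __init__(self, terminal_priority):
        # terminal_priority: mapping terminal id -> priority block id
        self.blocks = defaultdict(lambda: {"requesting": [], "idle": []})
        for terminal, prio in terminal_priority.items():
            self.blocks[prio]["idle"].append(terminal)

    def request_acceleration(self, terminal, apply_time):
        prio = self._block_of(terminal)
        block = self.blocks[prio]
        if terminal in block["idle"]:
            block["idle"].remove(terminal)
        block["requesting"].append((apply_time, terminal))
        block["requesting"].sort()                  # earlier applications first

    def sorting_table(self, prio):
        block = self.blocks[prio]
        return [t for _, t in block["requesting"]] + block["idle"]

    def _block_of(self, terminal):
        for prio, block in self.blocks.items():
            if terminal in block["idle"] or any(t == terminal for _, t in block["requesting"]):
                return prio
        raise KeyError(terminal)

mgr = ServiceAccelerationManager({"boss-pc": 0, "manager-pc": 1, "sales-pc-1": 2, "sales-pc-2": 2})
mgr.request_acceleration("sales-pc-2", apply_time=10.0)
mgr.request_acceleration("sales-pc-1", apply_time=12.0)
print(mgr.sorting_table(2))      # ['sales-pc-2', 'sales-pc-1']
```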
In one embodiment, in step S100, a service acceleration request is received from an enterprise client using a virtual private network VPN;
in the step S200, after identifying the service type requested to be accelerated from the service acceleration request, the service type is imported to a service acceleration server and connected to a plurality of preset parallel network transmission lines;
in step S300, querying a transmission line matching the service acceleration request by using a preset service server database, including: searching target service server information of a service acceleration request in a service server database; determining a transmission line matched with a service acceleration request according to the target service server information;
the service server database stores server information of various network services and corresponding transmission line information;
and establishing a data connection path from the enterprise client to a target service server of the service acceleration request through the inquired transmission line so as to transmit the service data of the service acceleration request.
The working principle and beneficial effects of this technical solution are as follows: the service acceleration server in this solution is a VPN server. The service acceleration request is received through the VPN server and its service type is identified; a matching transmission line for carrying the request's service data is then determined among the several preset network transmission lines, and the destination of the data transmission is determined and verified. This safeguards the security of the request's service data and avoids transmission errors and leaks.
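The line-matching step can be pictured with the hypothetical lookup below: a preset service server database maps a service type and target server to one of the parallel transmission lines, and the matched line is used to build the client-to-target connection path. The database contents, host names and function names are invented for illustration.

```python
# Hypothetical service server database: it maps (service type, target server)
# to one of the preset parallel transmission lines.
SERVICE_SERVER_DB = {
    ("video_conference", "conf.example.com"):  "line-A",
    ("file_transfer",    "files.example.com"): "line-B",
}

def match_transmission_line(service_type, target_server):
    """Return the preset transmission line recorded for this target service server."""
    try:
        return SERVICE_SERVER_DB[(service_type, target_server)]
    except KeyError:
        raise LookupError(f"no transmission line registered for {target_server}") from None

def establish_path(client_id, service_type, target_server):
    """Build the client -> acceleration server -> target service server connection record."""
    line = match_transmission_line(service_type, target_server)
    return {"client": client_id, "line": line, "target": target_server}

# A video-conference acceleration request arriving over the VPN:
print(establish_path("client-42", "video_conference", "conf.example.com"))
```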
In one embodiment, in step S300, the acceleration strategy is implemented by a convolutional neural network, and a layer of the convolutional neural network is split into at least two subtasks, where each subtask matches a convolutional kernel; the convolution kernels are connected in series, so that the corresponding service of the service acceleration request is transmitted in series between the convolution kernels;
performing a first predetermined number of vector dot product operations in parallel based on each convolution kernel, each vector dot product operation comprising a second predetermined number of multiplication operations; the product of the first preset number and the second preset number is the number of the multiply-add devices in the convolution kernel;
and outputting the vector dot product operation result of each convolution kernel according to the output priority sequence.
The working principle and beneficial effects of this technical solution are as follows: the method further specifies that the acceleration strategy uses a convolutional neural network. Several convolution kernels are determined and connected in series so that data can be transmitted serially between them; each kernel performs vector dot-product operations in parallel; the number of multiplier-adders in a kernel is the product of the first predetermined number and the second predetermined number; and the dot-product results are output in priority order. Service acceleration is thus carried out through the convolutional neural network, the acceleration being achieved by the kernels' vector dot-product operations.
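A rough numerical sketch of the kernel arrangement described above, assuming (purely for illustration) a first predetermined number of 4 dot products per kernel and a second predetermined number of 8 multiplications per dot product, i.e. 32 multiply-adders per kernel; the kernels are applied one after another to mimic the serial chaining.

```python
import numpy as np

# Assumed sizes (not specified in the patent): each "kernel" performs
# FIRST_N vector dot products, each using SECOND_N multiplications,
# i.e. FIRST_N * SECOND_N multiply-adders per kernel.
FIRST_N, SECOND_N = 4, 8

def kernel_pass(chunk, weights):
    """One sub-task: FIRST_N parallel dot products of length SECOND_N."""
    assert chunk.shape == (FIRST_N, SECOND_N) and weights.shape == (SECOND_N,)
    return chunk @ weights            # shape (FIRST_N,)

def serial_kernels(chunk, weight_sets):
    """Run the kernels one after another (mimicking the serial chaining) and
    collect each kernel's output in its output order."""
    outputs = []
    for weights in weight_sets:       # kernels applied in series
        outputs.append(kernel_pass(chunk, weights))
    return outputs

chunk = np.random.rand(FIRST_N, SECOND_N)          # a slice of the layer's workload
weight_sets = [np.random.rand(SECOND_N) for _ in range(3)]
for i, out in enumerate(serial_kernels(chunk, weight_sets)):
    print(f"kernel {i}: {out}")
```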
In one embodiment, in step S300, the synchronous transmission acceleration strategy is as follows:
the method comprises the steps that an enterprise client and a network server construct a bilateral accelerated transmission protocol based on a user datagram protocol, and the bilateral accelerated transmission protocol is provided with different initial transmission windows according to the network type of the enterprise client;
the enterprise client acquires a universal unique identification code from the network server through registration;
the network server sets an initial transmission window and other protocol parameters according to the network type of the enterprise client;
the enterprise client and the network server perform data transmission based on the bilateral acceleration transmission protocol, including:
the enterprise client sends a service acceleration request to the network server, wherein the service acceleration request comprises the universal unique identification code and feeds back information to the network server in the service acceleration process;
and the network server receives the service acceleration request containing the universal unique identification code, and performs service acceleration containing the universal unique identification code with the enterprise client through an initial transmission window and other protocol parameters.
The working principle and beneficial effects of this technical solution are as follows: this solution gives a concrete form of the synchronous transmission acceleration strategy. The enterprise client must obtain a universally unique identifier through registration, and the service data must carry that identifier whenever service acceleration is performed, which makes it easy to distinguish services during transmission and avoids transmission confusion; it also provides a degree of protection against forgery and leakage.
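A hedged client-side sketch of such a bilateral UDP-based protocol: the client registers to obtain a universally unique identifier, chooses an initial window by network type, and includes the identifier in every request and feedback message. The window sizes, server address and message format are assumptions made for illustration; in the patented scheme the identifier would be issued by the network server.

```python
import json
import socket
import uuid

# Assumed values: window sizes per network type and the acceleration server
# address are illustrative, not taken from the patent.
INITIAL_WINDOWS = {"fiber": 64, "wifi": 32, "4g": 16}     # packets per window
SERVER_ADDR = ("127.0.0.1", 9000)                          # hypothetical server

class BilateralUdpClient:
    def __init__(self, network_type):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.window = INITIAL_WINDOWS.get(network_type, 16)
        self.uuid = None

    def register(self):
        """Obtain the universally unique identifier (in the patented scheme the
        network server issues it; generated locally here only for illustration)."""
        self.uuid = str(uuid.uuid4())
        return self.uuid

    def send_acceleration_request(self, service_type):
        # Every request carries the identifier and the chosen initial window.
        msg = {"uuid": self.uuid, "service": service_type, "window": self.window}
        self.sock.sendto(json.dumps(msg).encode("utf-8"), SERVER_ADDR)

    def feedback(self, stats):
        """Feed transfer statistics back to the server during acceleration."""
        self.sock.sendto(json.dumps({"uuid": self.uuid, "feedback": stats}).encode("utf-8"), SERVER_ADDR)

client = BilateralUdpClient("wifi")
client.register()
client.send_acceleration_request("video_conference")
client.feedback({"lost_packets": 0, "rtt_ms": 35})
```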
In one embodiment, in step S300, the transit acceleration policy of the acceleration node includes optimal path selection, transmission data optimization, and a proprietary protocol;
the optimal path selection comprises the steps of sorting according to the path length from short to long, calculating the average performance of each transfer node in the first five sorted paths, and selecting the path with the highest average performance as the optimal path;
the transmission data optimization comprises pruning the service data of the service acceleration request and compressing and transmitting the pruned service data;
the private protocol comprises a private protocol formed by optimizing a general network protocol, and the private protocol is adopted at each transit node of the optimal path.
The working principle and beneficial effects of this technical solution are as follows: this solution provides a transit acceleration strategy via acceleration nodes. Transmission efficiency is improved by selecting an optimal path and optimizing the data, and transmission security is ensured by establishing a proprietary protocol. Transmission-data optimization consists of pruning and compressed transmission: pruning removes useless data from the service data to reduce the data volume, compression further reduces the amount of data transmitted, and the data is decompressed and restored for use after reception.
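The pruning-plus-compression step might look like the sketch below, where the set of fields treated as useless is an assumption made only for this example.

```python
import json
import zlib

# Which fields count as "useless" is an assumption made only for this example.
USELESS_FIELDS = {"debug_trace", "padding", "client_comment"}

def prune(record):
    """Drop fields that do not need to cross the transit nodes."""
    return {k: v for k, v in record.items() if k not in USELESS_FIELDS}

def compress_for_transit(record):
    return zlib.compress(json.dumps(prune(record)).encode("utf-8"))

def restore_at_receiver(blob):
    return json.loads(zlib.decompress(blob).decode("utf-8"))

payload = {"file_id": 7, "chunk": "A" * 4096, "padding": "x" * 512, "debug_trace": "..."}
wire = compress_for_transit(payload)
print(len(json.dumps(payload)), "->", len(wire), "bytes on the wire")
assert restore_at_receiver(wire)["file_id"] == 7
```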
In one embodiment, each node in the transit acceleration strategy performs service acceleration through zero copy, file caching and digest checking, wherein:
zero copy includes loading a high-capacity file from the kernel-space disk buffer directly into the kernel-space Socket buffer, or loading it from the kernel-space Socket buffer into the kernel-space disk buffer, without copying it through user space;
the file caching comprises the steps of establishing caching according to file information of a high-capacity file, wherein the caching comprises one or more sequence blocks, each sequence block comprises a sequence linked list, the sequence linked lists are used for storing copies, the copies are arranged according to the weight sequence of the high-capacity file, and the weights are obtained through the following formulas:
[Weight formula — rendered only as an image in the original]
where D denotes the weight of the high-capacity file; K1, K2 and K3 are coefficients, with K1 and K2 ranging from 1.5 to 2 and K3 from 2.5 to 3; t1 is the most recent write time of the file on the sequential linked list, t3 is its most recent read time, t2 is its initial creation time, and G is the size of the file on the sequential linked list;
the summary verification comprises the steps of extracting a characteristic section of the high-capacity file, wherein the characteristic section comprises a head section, a tail section and one or more randomly selected sections, performing MD5 verification on the characteristic section, and performing XOR summation on the verified characteristic section to obtain an information summary;
when the high-capacity file is read and written, the information abstract of the high-capacity file is obtained through abstract verification and is compared with the information abstract of the file in the cache; if the high-capacity file has the same copy in the cache, updating cache hit information of the copy, refreshing modification time of the copy, updating the weight of the copy, adjusting the link position of the copy in the sequence linked list according to the weight of the updated copy, and transmitting the copy through zero copy; if the high-capacity file does not have the same copy in the cache, establishing the copy of the high-capacity file in the cache, updating the weight of the copy, and transmitting the copy through zero copy.
The working principle and beneficial effects of this technical solution are as follows: this solution specifies the concrete operation of each transit node in the transit acceleration strategy via acceleration nodes. The transit node accelerates the service by means of zero copy, file caching and digest checking: zero copy guarantees transmission speed, file caching helps guarantee the integrity of the file transfer, and digest checking verifies the integrity of the file after transmission, preventing file loss during the transfer.
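A minimal sketch of the digest check and cache lookup described above: MD5 digests of the head, the tail and a few randomly chosen segments are XOR-combined into an information digest, which keys the cache of copies. The segment size, the shared random seed and the cache layout are assumptions made for illustration.

```python
import hashlib
import os
import random

SEGMENT = 64 * 1024    # assumed feature-segment size; the patent does not fix one

def feature_digest(path, random_segments=2, seed=0):
    """MD5 the head, the tail and a few random segments, then XOR the digests
    together to form the information digest."""
    size = os.path.getsize(path)
    rng = random.Random(seed)         # fixed seed so both ends pick the same segments
    offsets = [0, max(size - SEGMENT, 0)]
    offsets += [rng.randrange(0, max(size - SEGMENT, 1)) for _ in range(random_segments)]
    combined = bytes(16)              # 16 zero bytes, the size of an MD5 digest
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            digest = hashlib.md5(f.read(SEGMENT)).digest()
            combined = bytes(a ^ b for a, b in zip(combined, digest))
    return combined

def cached_transfer(path, cache):
    """Look the file up in the copy cache by its information digest; on a hit,
    update the hit count (weight bookkeeping and the zero-copy send not shown)."""
    d = feature_digest(path)
    if d in cache:
        cache[d]["hits"] += 1
    else:
        cache[d] = {"path": path, "hits": 0}
    return d
```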
In a preferred embodiment, the selection of the optimal path comprises the steps of:
step 1: acquiring all paths of enterprise data transmission, and generating a path set:
L = (a1, a2, a3, …, ai)
where L denotes the path set and ai denotes the length of the i-th path;
step 2: based on fuzzy sorting, a sorting model of all paths in the path set is constructed;
[Ranking model formula — rendered only as an image in the original]
where i = 1, 2, 3, …, n; n denotes the total number of paths in the path set; and P(i) denotes the ranking value of the i-th path;
step 3: selecting the five paths whose ranking values are in the top five as preferred paths;
step 4: acquiring the centrality features of each preferred path and constructing an average-performance calculation model for the preferred paths:
[Average-performance calculation model — rendered only as an image in the original]
where kj denotes the betweenness (intermediary) centrality of the j-th node of the preferred path; Jj denotes the closeness (tight) centrality of the j-th node of the preferred path; Gj denotes the degree centrality of the j-th node of the preferred path; j = 1, 2, 3, …, m; m denotes the number of transit nodes of the preferred path; and X denotes the average performance of the transit nodes in the preferred path;
step 5: calculating in turn the average performance of the transit nodes of each of the five preferred paths according to the average-performance calculation model, and determining the path with the largest average-performance value as the optimal path.
In the above technical solution, there is more than one possible route for the enterprise's transmission. For example: from the terminal device to a WiFi network, then to the mobile network, then to another WiFi network; or directly from the terminal device to the mobile network and on to the receiving terminal device. The transmission paths differ as the transit nodes change during transmission. Besides physical path length, the length of the transmission time may also be used as the criterion for sorting. All paths are evaluated with the ranking model constructed in step 2 to determine the ordering, which serves as a preliminary screening for the optimal path, and the five paths with the top five ranking values are selected as preferred paths. In steps 4 and 5 the invention judges which path is optimal by calculating the average performance of the five preferred paths: in path transmission, the average performance of the nodes reflects the baseline performance of transmission over the whole path. In addition, the most important centrality features of the transit nodes, namely betweenness centrality, closeness centrality and degree centrality, are introduced into the average-performance formula, so the average performance of each preferred path can be calculated accurately.
Degree centrality is the number of nodes to which a node is directly connected.
Closeness centrality is the sum, or more commonly the reciprocal of the sum, of the shortest distances from a node to all other nodes.
Betweenness centrality is the number of shortest paths between pairs of nodes in the network that pass through the given node.
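Because the ranking and average-performance formulas appear only as images in the original, the sketch below simply assumes that a path's average performance is the mean, over its transit nodes, of the sum of betweenness, closeness and degree centralities, computed here with networkx; the shortlist of the five shortest candidates follows the text above.

```python
import networkx as nx

def average_performance(graph, path_nodes):
    """Assumed form: the mean, over the path's transit nodes, of the sum of
    betweenness, closeness and degree centralities."""
    bet = nx.betweenness_centrality(graph)
    clo = nx.closeness_centrality(graph)
    deg = nx.degree_centrality(graph)
    scores = [bet[n] + clo[n] + deg[n] for n in path_nodes]
    return sum(scores) / len(scores)

def pick_optimal(graph, candidate_paths):
    """Shortlist the five shortest candidates, then keep the one whose transit
    nodes (endpoints excluded) have the highest average performance."""
    shortlist = sorted(candidate_paths, key=len)[:5]
    return max(shortlist, key=lambda p: average_performance(graph, p[1:-1] or p))

G = nx.path_graph(6)                                   # toy six-node topology
best = pick_optimal(G, [[0, 1, 2, 3, 4, 5], [0, 2, 4, 5], [0, 3, 5]])
print("optimal path:", best)
```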
In one embodiment, the network server remotely obtains, according to the service acceleration request, the configuration data of the participating clients corresponding to the request, where the configuration data include the I/O ports, disk, CPU, memory and network switch, and assigns a value and a weight to each item of configuration data;
the network server establishes, according to the service acceleration request, a transmission channel between each pair of participating clients that exchange data, determines the ideal transmission speed of each transmission channel, and calculates the execution transmission speed of each transmission channel with the following correction formula:
[Correction formula for the execution transmission speed — rendered only as an image in the original]
where V′τ denotes the execution transmission speed of the τ-th transmission channel; ω denotes the number of items of client configuration data; γk denotes the weight of the k-th item of configuration data; Wτk1 and Wτk2 denote the values assigned to the k-th item of configuration data of the participating clients at the two ends of the τ-th transmission channel, each assigned in the range 0–1, with the best-configured client assigned 1 for that item and the others decreasing according to their configuration; and Vτ denotes the ideal transmission speed of the τ-th transmission channel;
and performing corresponding service acceleration processing according to the execution transmission speed obtained by calculation.
The working principle and beneficial effects of this technical solution are as follows: this solution obtains the configuration information of the clients at the two ends of each transmission channel and corrects the transmission speed with the above formula, so that the actually executed transmission speed matches the client configurations. This prevents excessive network resources from being allocated to the data transmission of a particular service purely for the sake of acceleration, avoiding waste of network resources while still accelerating the service and thus saving network resources. The correction formula takes into account the effect on transmission speed of imbalance among the various configuration items of the participating clients at the two ends of the channel.
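The correction formula itself is rendered only as an image, so the sketch below adopts one plausible reading of the variable description: the ideal speed scaled by the weighted, pairwise-limited configuration scores of the two endpoint clients. The specific scaling, weights and configuration scores are assumptions, not the patent's formula.

```python
# Assumed reading of the correction: ideal speed times the normalised, weighted
# minimum of the two ends' configuration scores.
CONFIG_ITEMS = ["io_port", "disk", "cpu", "memory", "switch"]

def execution_speed(ideal_speed, weights, end_a, end_b):
    """weights: gamma_k per item; end_a / end_b: 0-1 scores for the clients at the
    two ends of the channel (1 = best-configured client for that item)."""
    factor = sum(weights[k] * min(end_a[k], end_b[k]) for k in CONFIG_ITEMS)
    return ideal_speed * factor / sum(weights.values())   # never exceeds the ideal speed

weights = {"io_port": 0.15, "disk": 0.2, "cpu": 0.25, "memory": 0.2, "switch": 0.2}
fast = {k: 1.0 for k in CONFIG_ITEMS}                      # fully provisioned client
slow = {"io_port": 0.6, "disk": 0.5, "cpu": 0.7, "memory": 0.8, "switch": 0.9}
print(execution_speed(100.0, weights, fast, slow))         # channel limited by the weaker end
```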
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A service acceleration method based on service perception is characterized by comprising the following steps:
s100, receiving a business acceleration request of an enterprise;
s200, automatically sensing the service type of the enterprise requesting acceleration, wherein the service type comprises a video conference and high-capacity file transmission;
s300, according to the service type, determining and implementing a corresponding acceleration strategy, wherein the acceleration strategy comprises a synchronous transmission acceleration strategy and a transit acceleration strategy passing through an acceleration node, the video conference adopts the synchronous transmission acceleration strategy, and the high-capacity file transmission adopts the transit acceleration strategy passing through the acceleration node;
in the step S100, a service acceleration request from an enterprise client is received by using a virtual private network VPN;
in the step S200, after identifying the service type requested to be accelerated from the service acceleration request, the service type is imported to a service acceleration server and connected to a plurality of preset parallel network transmission lines;
in step S300, querying a transmission line matching the service acceleration request by using a preset service server database, including: searching target service server information of a service acceleration request in a service server database; determining a transmission line matched with a service acceleration request according to the target service server information;
the service server database stores server information of various network services and corresponding transmission line information;
and establishing a data connection path from the enterprise client to a target service server of the service acceleration request through the inquired transmission line so as to transmit the service data of the service acceleration request.
2. The traffic acceleration method based on traffic awareness of claim 1, wherein in step S300, the acceleration policy is set with a priority level, and the priority level is preset according to a traffic situation of an enterprise.
3. The service acceleration method based on service awareness according to claim 2, wherein when there are multiple service acceleration requests of the same service type, the service acceleration requests are classified from high to low according to priority, and then the service acceleration requests of each priority are respectively sorted according to the sequence of application time;
the order of the acceleration strategy is: first, service acceleration is performed, in the above order, on the service acceleration requests of the highest priority level; after all service acceleration requests of the highest priority level have been processed, service acceleration is performed, in order, on the requests of the next priority level, and so on for each subsequent priority level;
if a high-priority service acceleration request is received in the process of executing the acceleration of a low-priority service acceleration request, the low-priority service acceleration is immediately suspended, and the service acceleration is executed on the newly received high-priority service acceleration request.
4. The service acceleration method based on service awareness according to claim 3, wherein a service acceleration manager is provided in a switch connected to an internal network of an enterprise and an external network, the service acceleration manager is provided with a plurality of service acceleration priority blocks, all connection terminals in the enterprise are brought into the corresponding service acceleration priority blocks according to priority, each service acceleration priority block is provided with a sorting table, the connection terminal sending the service acceleration request is arranged in front of the sorting table according to the sequence of application time, and the connection terminal not sending the service acceleration request is arranged behind the sorting table.
5. The traffic acceleration method based on traffic awareness according to claim 1, wherein in step S300, the acceleration strategy is implemented by a convolutional neural network, which splits a layer of the convolutional neural network into no less than two subtasks, and each subtask matches one convolutional kernel; the convolution kernels are connected in series, so that the corresponding service of the service acceleration request is transmitted in series between the convolution kernels;
performing a first predetermined number of vector dot product operations in parallel based on each convolution kernel, each vector dot product operation comprising a second predetermined number of multiplication operations; the product of the first predetermined number and the second predetermined number is the number of multiplier-adders in a convolution kernel;
and outputting the vector dot product operation result of each convolution kernel according to the output priority sequence.
6. The traffic acceleration method based on traffic awareness of claim 1, wherein in step S300, the synchronous transmission acceleration strategy is as follows:
the method comprises the steps that an enterprise client and a network server construct a bilateral accelerated transmission protocol based on a user datagram protocol, and the bilateral accelerated transmission protocol is provided with different initial transmission windows according to the network type of the enterprise client;
the enterprise client acquires a universal unique identification code from the network server through registration;
the network server sets an initial transmission window and other protocol parameters according to the network type of the enterprise client;
the enterprise client and the network server perform data transmission based on the bilateral acceleration transmission protocol, and the method comprises the following steps:
the enterprise client sends a service acceleration request to the network server, wherein the service acceleration request comprises the universal unique identification code and feeds back information to the network server in the service acceleration process;
and the network server receives the service acceleration request containing the universal unique identification code, and performs service acceleration containing the universal unique identification code with the enterprise client through an initial transmission window and other protocol parameters.
7. The traffic acceleration method based on traffic awareness according to claim 1, wherein in step S300, the transit acceleration policy of the accelerating node includes optimal path selection, transmission data optimization and proprietary protocol;
the optimal path selection comprises the steps of sorting according to the path length from short to long, calculating the average performance of each transfer node in the first five sorted paths, and selecting the path with the highest average performance as the optimal path;
the transmission data optimization comprises pruning the service data of the service acceleration request and compressing and transmitting the pruned service data;
the private protocol comprises a private protocol formed by optimizing a general network protocol, and the private protocol is adopted at each transit node of the optimal path.
8. The traffic acceleration method based on traffic awareness of claim 7, wherein each node in the transit acceleration policy performs traffic acceleration through zero copy, file caching and digest checking, wherein
zero copy includes loading a high-capacity file from the kernel-space disk buffer directly into the kernel-space Socket buffer, or loading it from the kernel-space Socket buffer into the kernel-space disk buffer, without copying it through user space;
the file caching comprises the steps of establishing caching according to file information of a high-capacity file, wherein the caching comprises one or more sequence blocks, each sequence block comprises a sequence linked list, the sequence linked lists are used for storing copies, the copies are arranged according to the weight sequence of the high-capacity file, and the weights are obtained through the following formulas:
[Weight formula — rendered only as an image in the original]
where D denotes the weight of the high-capacity file; K1, K2 and K3 are coefficients, with K1 and K2 ranging from 1.5 to 2 and K3 from 2.5 to 3; t1 is the most recent write time of the file on the sequential linked list, t3 is its most recent read time, t2 is its initial creation time, and G is the size of the file on the sequential linked list;
the summary verification comprises the steps of extracting a characteristic section of the high-capacity file, wherein the characteristic section comprises a head section, a tail section and one or more randomly selected sections, performing MD5 verification on the characteristic section, and performing XOR summation on the verified characteristic section to obtain an information summary;
when the high-capacity file is read and written, the information abstract of the high-capacity file is obtained through abstract verification and is compared with the information abstract of the file in the cache; if the high-capacity file has the same copy in the cache, updating cache hit information of the copy, refreshing modification time of the copy, updating the weight of the copy, adjusting the link position of the copy in the sequence linked list according to the updated weight of the copy, and transmitting the copy through zero copy; if the high-capacity file does not have the same copy in the cache, establishing the copy of the high-capacity file in the cache, updating the weight of the copy, and transmitting the copy through zero copy.
9. The service acceleration method based on service awareness according to any one of claims 1 to 8, wherein a network server remotely obtains, according to the service acceleration request, the configuration data of the participating clients corresponding to the request, wherein the configuration data include the I/O ports, disk, CPU, memory and network switch, and assigns a value and a weight to each item of configuration data;
the network server establishes, according to the service acceleration request, a transmission channel between each pair of participating clients that exchange data, determines the ideal transmission speed of each transmission channel, and calculates the execution transmission speed of each transmission channel with the following correction formula:
[Correction formula for the execution transmission speed — rendered only as an image in the original]
where V′τ denotes the execution transmission speed of the τ-th transmission channel; ω denotes the number of items of client configuration data; γk denotes the weight of the k-th item of configuration data; Wτk1 and Wτk2 denote the values assigned to the k-th item of configuration data of the participating clients at the two ends of the τ-th transmission channel, each assigned in the range 0–1, with the best-configured client assigned 1 for that item and the others decreasing according to their configuration; and Vτ denotes the ideal transmission speed of the τ-th transmission channel;
and performing corresponding service acceleration processing according to the calculated execution transmission speed.
CN202110010759.8A 2021-01-06 2021-01-06 Service acceleration method based on service perception Active CN112822051B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110010759.8A CN112822051B (en) 2021-01-06 2021-01-06 Service acceleration method based on service perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110010759.8A CN112822051B (en) 2021-01-06 2021-01-06 Service acceleration method based on service perception

Publications (2)

Publication Number Publication Date
CN112822051A CN112822051A (en) 2021-05-18
CN112822051B true CN112822051B (en) 2022-09-16

Family

ID=75857567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110010759.8A Active CN112822051B (en) 2021-01-06 2021-01-06 Service acceleration method based on service perception

Country Status (1)

Country Link
CN (1) CN112822051B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014509B (en) * 2021-05-26 2021-09-17 腾讯科技(深圳)有限公司 Application program acceleration method and device
CN114928644B (en) * 2022-07-20 2022-11-08 深圳市安科讯实业有限公司 Internet of things network fusion acceleration gateway
CN115174690B (en) * 2022-09-08 2022-12-23 中国人民解放军国防科技大学 System and method for accelerating high-flow service under weak network or broken network condition

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101447856A (en) * 2007-11-28 2009-06-03 新奥特(北京)视频技术有限公司 High-capacity file transmission method
CN101841387A (en) * 2009-03-19 2010-09-22 中国移动通信集团江西有限公司 Wide area network data speed acceleration method, device and system
CN105577801A (en) * 2014-12-31 2016-05-11 华为技术有限公司 Business acceleration method and device
CN105516122A (en) * 2015-12-03 2016-04-20 网宿科技股份有限公司 Method and system for accelerating network transmission of acceleration strategy with hierarchical configuration
CN110391982A (en) * 2018-04-20 2019-10-29 伊姆西Ip控股有限责任公司 Transmit method, equipment and the computer program product of data
CN111541903A (en) * 2020-01-14 2020-08-14 深圳市华曦达科技股份有限公司 Live broadcast acceleration method, source station end, edge node, client and live broadcast system
CN111352735A (en) * 2020-02-27 2020-06-30 上海上大鼎正软件股份有限公司 Data acceleration method, device, storage medium and equipment

Also Published As

Publication number Publication date
CN112822051A (en) 2021-05-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant