CN113127414A - Request processing method and system and edge node - Google Patents


Info

Publication number
CN113127414A
Authority
CN
China
Prior art keywords
target data
edge node
node
server
target
Prior art date
Legal status
Granted
Application number
CN201911413095.9A
Other languages
Chinese (zh)
Other versions
CN113127414B (en)
Inventor
方云麟
童剑
Current Assignee
Guizhou Baishancloud Technology Co Ltd
Original Assignee
Guizhou Baishancloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Baishancloud Technology Co Ltd
Priority: CN201911413095.9A
Published as application CN113127414A; granted and published as CN113127414B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10File systems; File servers
    • G06F16/13File access structures, e.g. distributed indices
    • G06F16/134Distributed indices

Abstract

The present disclosure relates to a request processing method and system, and an edge node. The edge node includes one or at least two servers, at least one of which includes a network front-end module and a first storage back-end module. The network front-end module receives a service request sent by a user terminal, parses the service request, determines target data for responding to the service request, acquires the target data from the first storage back-end module, and responds to the service request with the target data. The first storage back-end module responds to the network front-end module's acquisition request for the target data.

Description

Request processing method and system and edge node
Technical Field
The present disclosure relates to the field of communications, and in particular, to a request processing method and system and an edge node.
Background
A Content Delivery Network (CDN) delivers content from a source station to the node closest to a user, so that the user can obtain the required content nearby, improving the response speed and success rate of user access. This mitigates the access delay caused by network distribution, bandwidth, and server performance, and suits scenarios such as site acceleration, video-on-demand, and live streaming.
In the related art, a user initiates a service request to an edge node through a client. The edge node queries locally whether data corresponding to the service request is stored; if the data is found, the edge node responds to the service request with it; if not, the edge node acquires the data from an upper-level node and then uses the data to respond to the service request.
In the process of responding to the service request, the edge node consumes excessive resources.
Disclosure of Invention
To overcome at least one of the problems in the related art, a request processing method and system and an edge node are provided.
An edge node comprises at least two servers, wherein the storage back-end system comprises only a first storage back-end module, and the first storage back-end modules of the servers use the same cluster system for data storage; wherein the first storage back-end module comprises:
a judging unit, configured to, after target data for responding to a service request is determined, judge whether any server in the edge node stores the target data, obtaining a first judgment result;
a first requesting unit, configured to request the target data from the first storage back-end module of a first target server when the first judgment result indicates that a first target server storing the target data exists among the servers in the edge node.
In an exemplary embodiment, the first storage backend module further comprises:
a selecting unit, configured to select one server from the other available edge nodes or upper node servers as a second target server when the query result indicates that no server in the edge node stores the target data;
and the second request unit is used for controlling the first storage back-end module in the edge node to request the target data from a second target server.
In one exemplary embodiment, the second request unit includes:
a determining subunit, configured to determine whether the target data is stored from other edge nodes in communication with the edge node, to obtain a second determination result;
a first request subunit, configured to, when the second determination result indicates that the target data exists, obtain the target data from the storage back-end module of the other edge node;
and the second request subunit is configured to, when the second determination result is that there is no target data, obtain the target data from the storage back-end module of the upper-level node corresponding to the edge node.
In an exemplary embodiment, when determining that there is at least one edge node storing the target data, the determining subunit determines the target edge node according to three conditions, including:
the bandwidth information of the edge node stored with the target data accords with a preset judgment strategy with abundant bandwidth;
the load information of the edge node stored with the target data accords with a preset light-load judgment strategy;
and the communication distance between the edge node stored with the target data and the edge node accords with a preset close judgment strategy.
In an exemplary embodiment, if at least two edge nodes storing the target data meet the three conditions, a target edge node is randomly determined from the edge nodes meeting the conditions;
and if none of the edge nodes storing the target data meets the three conditions, the target data is acquired from the superior node corresponding to the edge node.
In an exemplary embodiment, the determining subunit determines whether the target data is stored from other edge nodes in communication with the edge node by:
acquiring data index information sent by other edge nodes communicating with the edge node, wherein the data index information is index information established by the other edge nodes on the stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
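The index-based lookup described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the `peer_indices` structure (a mapping from each partner edge node to the set of data keys it has advertised) is an assumed format for the data index information.

```python
def find_target_edge_node(target_key, peer_indices):
    """Search the data index information received from partner edge
    nodes for the target data, returning the first node whose index
    records the key, or None when no partner stores it."""
    for node_id, index in peer_indices.items():
        if target_key in index:  # index: set of keys the node advertised
            return node_id
    return None
```

When the function returns None, the caller proceeds to acquire the target data from the superior node, as described in the next embodiments.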
In an exemplary embodiment, each of the servers further includes:
and the network front-end module is used for analyzing the received service request, determining target data for responding to the service request, acquiring the target data from the first storage back-end module and responding to the service request by utilizing the target data.
In one exemplary embodiment, the network front end module includes:
the acquisition unit is used for acquiring the customized request information carried in the service request after the service request is analyzed;
and the processing unit is used for processing the service request according to the customized request information.
A data processing system, comprising:
an edge node as described in any of the above;
and one or more upper nodes, configured to provide the required target data to the edge nodes.
In one exemplary embodiment, the superordinate node includes at least one server, wherein the at least one server includes a second storage backend module, the second storage backend module including:
and the storage back-end unit is used for providing target data required by the service request.
In an exemplary embodiment, the upper node includes at least one server, wherein the at least one server includes a second storage back-end module, and wherein the second storage back-end module of the upper node and the storage back-end module of the edge node use the same cluster system for data storage.
In an exemplary embodiment, the superordinate node is further configured to, when the target data is not stored locally, determine whether there is a target superordinate node storing the target data from other superordinate nodes communicating with the superordinate node, and obtain a third determination result; when the third judgment result is that a target superior node for storing the target data exists, acquiring the target data from the target superior node; or, when the third determination result is that there is no target upper node storing the target data, the target data is acquired from the source station.
A request processing method is applied to an edge node and comprises the following steps:
after determining target data for responding to a service request, judging whether any server in the edge node is a third target server storing the target data, and obtaining a fourth judgment result;
when a fourth judgment result shows that a server in the edge node has a third target server storing the target data, a first storage back-end module of the server is used for requesting the target data from the third target server; the edge node comprises at least two servers, wherein the first storage back-end modules of the at least two servers use the same cluster system for data storage.
In an exemplary embodiment, after judging whether any server in the edge node is a third target server storing the target data and obtaining the fourth judgment result, the method further includes:
when the fourth judgment result indicates that no server in the edge node is a third target server storing the target data, querying a server of the superior node corresponding to the edge node to judge whether the target data is stored there, and obtaining a fifth judgment result, wherein the second storage back-end module of the server of the superior node and the first storage back-end module of the server of the edge node use the same cluster system for data storage;
when the fifth judgment result is that a superior node for storing the target data exists, acquiring the target data from a second storage back-end module of the superior node;
and when the fifth judgment result indicates that no superior node stores the target data, acquiring the target data from a source station.
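The fallback chain in the steps above (the local edge-node cluster first, then the superior node, then the source station) can be sketched as follows. This is a minimal illustration under assumptions: the three dicts stand in for the storage back-end queries the text describes, and the function names are invented for the example, not taken from the patent.

```python
def fetch_target_data(key, local_cluster, upper_node, source_station):
    """Resolve target data through the fallback chain described above.
    Each argument is a dict standing in for one storage tier."""
    # Step 1: another server in the same edge-node cluster
    # (first storage back-end modules share one cluster system)
    if key in local_cluster:
        return local_cluster[key], "edge-cluster"
    # Step 2: the second storage back-end module of the superior node
    if key in upper_node:
        return upper_node[key], "upper-node"
    # Step 3: fall back to the source station
    return source_station[key], "source"
```

A caller would respond to the service request with the returned data, while the second element indicates which tier actually served it.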
In one exemplary embodiment, when determining at least one edge node storing the target data, determining the target edge node by three conditions includes:
the bandwidth information of the edge node stored with the target data accords with a preset judgment strategy with abundant bandwidth;
the load information of the edge node stored with the target data accords with a preset light load judgment strategy;
and the communication distance between the edge node stored with the target data and the edge node accords with a preset close judgment strategy.
In an exemplary embodiment, if at least two edge nodes storing the target data meet the three conditions, randomly determining a target edge node from the edge nodes meeting the three conditions;
and if none of the edge nodes storing the target data meets the three conditions, acquiring the target data from the superior node corresponding to the edge node.
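The three-condition filter with a random tie-break among qualifying nodes might be sketched as follows. The concrete thresholds standing in for the "abundant bandwidth", "light load", and "close distance" policies are assumptions for illustration; the patent leaves the preset strategies unspecified.

```python
import random

def pick_target_edge_node(nodes, bw_min=100, load_max=0.7, dist_max=50):
    """nodes: list of dicts with 'id', 'bandwidth', 'load', 'distance'.
    Keep only the nodes meeting all three preset policies and choose
    one at random; return None so the caller falls back to the
    superior node when no candidate qualifies."""
    ok = [n for n in nodes
          if n["bandwidth"] >= bw_min      # abundant-bandwidth policy
          and n["load"] <= load_max        # light-load policy
          and n["distance"] <= dist_max]   # close-distance policy
    if not ok:
        return None  # acquire from the superior node instead
    return random.choice(ok)["id"]
```

The random choice spreads load evenly when several partner nodes qualify, which matches the random determination described above.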
In one exemplary embodiment, determining whether the target data is stored from other edge nodes in communication with the edge node by:
acquiring data index information sent by other edge nodes communicating with the edge node, wherein the data index information is index information established by the other edge nodes on the stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
In one exemplary embodiment, the target data for responding to the service request is determined by:
after receiving a service request sent by a user terminal, analyzing the service request by using a network front-end module in a server, and determining target data for responding to the service request.
In an exemplary embodiment, after the parsing the service request by using a network front-end module in the server and determining target data for responding to the service request, the method further includes:
acquiring customized request information carried in the service request;
and processing the service request according to the customized request information.
A computer-readable storage medium having stored thereon a computer program which, when executed, implements the steps of any of the methods described above.
A computer device comprising a processor, a memory and a computer program stored on the memory, wherein the steps of any of the methods above are implemented when the computer program is executed by the processor.
A cloud distribution network system comprising one or more edge nodes and one or more layers of upper level nodes, wherein:
the edge node comprises one or more servers, each server comprises a network front-end module and a first storage back end, and the network front-end module is used for receiving a service request sent by a user terminal;
the superior node comprises one or more servers, and each server comprises a second storage back end; and the first storage back end and the second storage back end perform file index synchronization and are used for responding to an acquisition request of the network front-end module for target data.
According to the scheme provided herein, the first storage back-end modules of the servers are all deployed in the same cluster system, so that the first storage back-end modules of different servers can communicate with each other and share data. This provides data support for the edge node to complete acquisition of the target data internally; when one server does not store the data, the number of operations in which the edge node acquires the target data from other nodes is effectively reduced, which shortens user waiting time and improves response efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the disclosure. In the drawings:
fig. 1 is a schematic structural diagram of a CDN system in the related art.
Fig. 2 is a block diagram illustrating an edge node in accordance with an example embodiment.
FIG. 3 is a block diagram of a data processing system, shown in accordance with an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating a request processing method in accordance with an example embodiment.
FIG. 5 is a flow diagram illustrating a request processing method in accordance with an exemplary embodiment.
FIG. 6 is a block diagram illustrating a computer device according to an example embodiment.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the drawings of the embodiments of the present invention, and it is obvious that the described embodiments are some but not all of the embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments herein without making any creative effort, shall fall within the scope of protection. It should be noted that the embodiments and features of the embodiments may be arbitrarily combined with each other without conflict.
To address the problem of excessive resource consumption when an edge node responds to a service request, the inventors analyzed where the resources are consumed. In a CDN system, a user initiates a service request based on the HTTP/HTTPS protocol, and the resource consumption during processing of the service request includes the following:
1. when data reception starts, a process must be allocated and a TCP/IP port opened to receive the data; allocating the process and opening the TCP/IP port occupies memory resources;
2. while receiving data, the HTTP/HTTPS protocol carried over TCP/IP must be decoded, which involves the following:
a. when receiving data, the network card initiates a large number of soft interrupt signals (softirqs), which occupy CPU computing power; in a CDN scenario this can consume up to 40% of the CPU;
b. decoding the data also consumes CPU computing power and can occupy 40-70% of CPU resources, or even more.
Through the above analysis, the locations of resource consumption can be determined. The inventors then analyzed the communication architecture at those locations in the related art, as follows:
fig. 1 is a schematic structural diagram of a CDN system in the related art. As shown in fig. 1, the CDN system includes at least two nodes, at least one of which is an edge node and at least one of which is a superior node. Each node may include one or at least two servers, and each server includes a network distribution front end and a storage back end; the storage back end itself contains two functional units, namely a network front end and a storage unit. The procedure for responding to a service request is illustrated by the arrows shown in fig. 1.
Under the above framework, the inventors found that at least the following two performance loss points exist in the related art, including:
1. in both the edge node and the superior node, the network distribution front end of the server provides functions based on the http/https protocol, and the network front end built into the storage back end independently provides network services, which likewise include http/https protocol functions. That is, the functions of the network front end inside the storage back end include the functions already realized by the network distribution front end, so the http/https functionality is developed twice, once in the network distribution front end and once in the storage back end. This increases the complexity of the processing flow, causes additional performance loss and wasted resources, and constitutes one performance loss point;
2. the network distribution front end of the superior node is responsible for determining in which server's storage back end the file requested by the service request is stored. Deploying a dedicated network distribution front end just to realize this function causes additional performance loss and wasted resources, and constitutes another performance loss point.
Based on the above analysis, for the performance loss points obtained by the above analysis, the following solutions are proposed:
fig. 2 is a block diagram illustrating an edge node according to an exemplary embodiment. The edge node shown in fig. 2 comprises one or at least two servers, wherein at least one server comprises a network front-end module and a storage back-end system, and the storage back-end system comprises only a first storage back-end module, wherein:
the network front-end module is used for receiving a service request sent by a user terminal, analyzing the service request, determining target data for responding to the service request, acquiring the target data from the first storage back-end module, and responding to the service request by using the target data;
and the first storage back-end module is used for responding to the acquisition request of the network front-end module for the target data.
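The division of labor between the two modules can be sketched as follows. This is a minimal illustration under assumed interfaces (the "GET <path>" request format and the in-memory store are inventions for the example, not the patented implementation); the point is that the front-end module handles request parsing while the storage module only stores and serves data.

```python
class FirstStorageBackend:
    """Storage-only module: answers acquisition requests, nothing else."""
    def __init__(self):
        self._store = {}

    def put(self, key, data):
        self._store[key] = data

    def get(self, key):
        return self._store.get(key)


class NetworkFrontEnd:
    """Parses the service request, determines the target data,
    and responds using data obtained from the storage back end."""
    def __init__(self, backend):
        self.backend = backend

    def handle(self, request_line):
        # Hypothetical request format: "GET <path>"
        method, path = request_line.split(" ", 1)
        data = self.backend.get(path)  # acquisition request
        if data is None:
            return 404, b""
        return 200, data
```

Because the storage module exposes only `put`/`get`, service logic and storage logic stay isolated, mirroring the decoupling the disclosure describes.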
In an exemplary embodiment, the edge node may comprise only one server, i.e. the server comprises one network front-end module and one first storage back-end module; alternatively, the first and second electrodes may be,
the edge node comprises at least two servers, wherein: each server comprises a network front-end module and a first storage back-end module; alternatively, some of the servers (e.g., one or at least two) include a network front-end module and a first storage back-end module, and other servers do not employ the above structure.
Based on the above analysis, since the network front end inside the storage back end in the related art includes the functions of the network distribution front end, the network front end is decoupled from the storage back end and made an independent functional module. The first storage back-end module thus retains only the storage function, realizing the isolation of service logic from storage logic.
After the network front end is decoupled from the first storage back-end module of the related art, the first storage back-end module only needs to realize the storage function; its function is simplified, and an independent division of functions inside the server is realized. Compared with the related art, in which both the network distribution front end and the network front end inside the storage back end consume memory and CPU resources, in the scheme provided herein only the network front-end module consumes the resources required for network interaction, on the premise that the network front-end module and the first storage back-end module work normally; the first storage back-end module no longer consumes such resources. In addition, after the network front end is decoupled from the storage back end, the data interaction between the network distribution front end and the network front end in the related art is eliminated, the interaction flow between the first storage back-end module and the network front-end module is simplified, the data interaction flow between functional modules is optimized, the internal resource consumption of the server is reduced, and the performance of the CDN program is improved. Meanwhile, no corresponding network front end needs to be developed for the first storage back-end module, which avoids extra program development and reduces the development difficulty of the CDN program.
It can be seen from the above analysis that, by removing the network distribution module of the edge node, the "network front end" in the storage back end implements the "network distribution function" and the "network front end" function, so that the performance loss of the edge node is significantly reduced, the service throughput of the edge node is improved, and the "performance loss point 1" of the edge node is removed.
In one exemplary embodiment, the network front end module includes:
the acquisition unit is used for acquiring the customized request information carried in the service request after the service request is analyzed;
and the processing unit is used for processing the service request according to the customized request information.
In an exemplary embodiment, the customized request information describes a user's personalized service customization, such as HTTP-level functions like URL rewriting, modification of HTTP headers, or an anti-hotlinking policy.
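Customization such as URL rewriting or HTTP-header modification, hosted entirely in the network front-end module, could be sketched as a chain of request transforms. The tuple-based rule format here is invented for illustration; the patent does not specify how customization rules are expressed.

```python
def apply_customizations(path, headers, rules):
    """Apply per-customer request customizations in the front-end module.
    rules is a hypothetical list of ("rewrite", old, new) or
    ("set_header", name, value) tuples."""
    headers = dict(headers)  # avoid mutating the caller's headers
    for rule in rules:
        kind = rule[0]
        if kind == "rewrite":       # URL rewriting
            _, old, new = rule
            path = path.replace(old, new)
        elif kind == "set_header":  # HTTP header modification
            _, name, value = rule
            headers[name] = value
    return path, headers
```

Keeping these transforms in the front-end module means the storage back end never sees customer-specific logic, which is the boundary the disclosure argues for.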
In the related art shown in fig. 1, a customization requirement may be implemented either in the network distribution front end or in the storage back end. When the code needs to be modified, it can be difficult to locate where the customization is implemented, and functional conflicts arise easily. In addition, both the network distribution front end and the storage back end have network functions; when the two pieces of software are combined, some service requests are passed through to the storage back-end software and handled by different processing logic, causing unexpected results. In the system shown in fig. 2, users' customization requirements are developed in the network front-end module, while the first storage back-end module is an internal service only, is not exposed to clients, and is dedicated to the storage function.
Compared with the related art, in which a client's customization requirements may be deployed in the storage back end, deploying the customization function in the network front-end module gives the storage function of the edge node a clear boundary. This avoids unclear boundaries and repeated development during the development process, prevents the functions of the first storage back-end module and the network front-end module from covering each other, and effectively controls abnormal situations such as conflicting processing.
After receiving a request for acquiring target data from a network front-end module, a first storage back-end module directly reads the target data if the target data is locally stored, and completes the response to the request for acquiring; if the target data is not stored locally, the target data needs to be acquired from the outside of the server.
The implementation manner of obtaining the target data from the outside of the server specifically includes:
in an exemplary embodiment, the edge node comprises at least two servers, and the first storage back-end modules of the at least two servers use the same cluster system for data storage; wherein the first storage back-end module further comprises:
the judging unit is used for judging whether the target data are stored in the server in the edge node or not to obtain a first judging result;
the first requesting unit is configured to request the first storage backend module of the first target server for the target data when the first determination result indicates that there is the first target server storing the target data in the servers in the edge node.
In an exemplary embodiment, the same edge node may include at least two servers, and the first storage backend module of each server uses the same cluster system to store data, so that data of the first storage backend modules in the servers in the same edge node are intercommunicated, and the purpose of data sharing is achieved.
In an exemplary embodiment, index information may be established for the stored data by the server for the data stored by the server of the same edge node, and the index information is synchronized to other servers in the edge node, so that other servers can conveniently know the storage location of the data, and an operation basis is provided for reading the data across the servers.
Because the first storage back-end modules of the servers are all deployed in the same cluster system, the first storage back-end modules of different servers can communicate with each other and share data, providing data support for the edge node to complete acquisition of the target data internally. When one server does not store the data, the number of operations in which the edge node acquires the target data from other nodes is effectively reduced, shortening user waiting time and improving response efficiency.
When the edge node does not store the target data, it needs to acquire the data from nodes other than the edge node. In the related art, the first storage back-end module of the server may directly initiate an acquisition request to other nodes. Different from the related art, the scheme herein selects the first storage back-end module of another server to assist the server in realizing this function, specifically:
in an exemplary embodiment, the first storage backend module further comprises:
a selecting unit, configured to select one server from the other available edge nodes or upper node servers as a second target server when the query result indicates that no server in the edge node stores the target data;
and the second request unit is used for controlling the first storage back-end module in the edge node to request the target data from a second target server.
In an exemplary embodiment, the second target server selected by the selection unit may be selected according to the load state of the server and the bandwidth usage information, so as to implement load balancing inside the edge node, and on the premise of ensuring that data acquisition can be completed, resources of the server are fully utilized, thereby avoiding that the load of an individual server is increased to affect normal processing of the service request.
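Selecting the second target server by load state and bandwidth usage, as described above, can be sketched as picking the candidate with the lowest combined utilization. The scoring formula (a plain sum of the two percentages) is an assumption; the patent only requires that the selection balance load across servers.

```python
def select_second_target(candidates):
    """candidates: list of (server_id, load_pct, bandwidth_pct).
    Pick the server with the lowest combined utilization so that
    fetching the data does not overload an individual server."""
    return min(candidates, key=lambda c: c[1] + c[2])[0]
```

A weighted score, or filtering out servers above a hard threshold first, would serve the same load-balancing goal.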
In the related art, when the edge node does not store the target data, the edge node requests the upper node for the target data. Different from the related art, in the scheme provided herein, the edge node may obtain the target data by interacting with other edge nodes, that is, a data obtaining manner between the edge node and the edge node.
In one exemplary embodiment, the second request unit includes:
a determining subunit, configured to determine whether the target data is stored from other edge nodes in communication with the edge node, to obtain a second determination result;
a first request subunit, configured to, when the second determination result indicates that the target data exists, obtain the target data from the storage back-end module of the other edge node;
and the second request subunit is configured to, when the second determination result is that there is no target data, obtain the target data from the storage back-end module of the upper-level node corresponding to the edge node.
In an exemplary embodiment, when the target data is stored in the edge node, the target data is returned directly to the user; when the edge node does not store the target data, the target data can be acquired from other edge nodes, which reduces pressure on the upper node and avoids the bandwidth pressure caused by data requests being excessively concentrated on the upper node.
In an exemplary embodiment, the other edge nodes in communication with the edge node may be one or at least two nodes selected for the edge node in advance; each selected node serves as a partner node of the edge node and provides data support for it. The partner nodes may be selected according to their distance from the edge node.
In an exemplary embodiment, the determining subunit determines whether any of the other edge nodes in communication with the edge node stores the target data by:
acquiring data index information sent by the other edge nodes in communication with the edge node, wherein the data index information is index information established by each of those edge nodes on its stored data;
and querying the acquired data index information with the target data as the search keyword, and determining the edge node whose data index information records the target data as the target edge node.
In an exemplary embodiment, each edge node establishes corresponding data index information for locally stored data, and synchronizes the data index information to other edge nodes at regular intervals (for example, 5 minutes), so that the edge nodes can conveniently acquire the data stored by other edge nodes, an operation basis is provided for data sharing among the edge nodes, and the purpose of data sharing is achieved.
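The index construction, periodic synchronization, and keyword lookup described above can be sketched as follows. The class and method names (`DataIndex`, `sync_to`, `find_holders`) are assumptions for illustration; a real deployment would push the index over the network on a timer (e.g. every 5 minutes) rather than calling a method directly.

```python
# Illustrative sketch of per-node data index construction and one round of
# synchronization to a partner node, plus the keyword lookup that finds
# which partner holds the target data.
class DataIndex:
    def __init__(self, node_id):
        self.node_id = node_id
        self.local_files = set()  # URLs stored on this node
        self.remote = {}          # partner node_id -> set of its URLs

    def add_local(self, url):
        self.local_files.add(url)

    def sync_to(self, other):
        """Push this node's index to a partner node (one sync round)."""
        other.remote[self.node_id] = set(self.local_files)

    def find_holders(self, url):
        """Return partner node ids whose index records the target data."""
        return [nid for nid, files in self.remote.items() if url in files]

a, b = DataIndex("edge-a"), DataIndex("edge-b")
b.add_local("/video/1.mp4")
b.sync_to(a)  # periodic sync: b's index becomes visible to a
holders = a.find_holders("/video/1.mp4")
```

After the sync round, node `edge-a` can answer "who stores this file?" from its own copy of the partner indexes, without contacting the partners at request time.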
In an exemplary embodiment, when the determining subunit determines that at least one edge node stores the target data, it determines the target edge node according to three conditions:
the bandwidth information of an edge node storing the target data conforms to a preset ample-bandwidth judgment strategy;
the load information of an edge node storing the target data conforms to a preset light-load judgment strategy;
and the communication distance between an edge node storing the target data and the edge node conforms to a preset short-distance judgment strategy.
In an exemplary embodiment, the edge nodes storing the target data serve as candidate edge nodes, and a suitable one may be selected as the target edge node according to at least one of the bandwidth information, load state, and communication distance of each candidate, where:
the ample-bandwidth judgment strategy may be determined according to the bandwidth allocated to the candidate edge node; for example, if the candidate's bandwidth usage already exceeds or approaches a preset usage threshold, its bandwidth is determined to be insufficient, and otherwise it is determined to be ample, where the usage threshold may be determined based on the charging criteria for bandwidth usage;
the light-load judgment strategy may be determined according to the load state of the candidate edge node: if the value of the candidate's load state is smaller than a preset load threshold, the candidate is determined to conform to the light-load judgment strategy, and otherwise it does not, where the load state may be at least one of hardware load, system load, software load, and network load;
the short-distance judgment strategy may be determined according to the communication distance between the edge node and its upper node: if the communication distance between the candidate edge node and the edge node is much smaller than that between the edge node and the upper node, the candidate is determined to conform to the short-distance judgment strategy, and otherwise it does not.
The bandwidth information of an edge node may be collected by the switch and obtained through a query interface provided by the management platform; alternatively, bandwidth usage may be counted from the server's client access logs.
If at least two edge nodes storing the target data meet the three conditions, a target edge node is randomly determined from the qualifying edge nodes;
and if none of the edge nodes storing the target data meets the three conditions, the target data is acquired from the upper node corresponding to the edge node.
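The three-condition filter and its fallbacks can be sketched as below. The dict fields (`bw_used`, `load`, `distance`) and the threshold values are illustrative assumptions; the actual thresholds would come from the preset judgment strategies described above.

```python
# Sketch of the three-condition target-edge-node selection: a candidate must
# have ample bandwidth, a light load, and a communication distance shorter
# than the distance to the upper node. One qualifying candidate is chosen at
# random; if none qualifies, fall back to the upper node.
import random

def pick_target_edge(nodes, bw_threshold=0.8, load_threshold=0.7,
                     upper_distance=100):
    eligible = [
        n for n in nodes
        if n["bw_used"] < bw_threshold      # condition 1: bandwidth is ample
        and n["load"] < load_threshold      # condition 2: load is light
        and n["distance"] < upper_distance  # condition 3: closer than upper node
    ]
    if eligible:
        return random.choice(eligible)["name"]
    return "upper-node"  # no candidate qualifies: fetch from the upper node

nodes = [
    {"name": "edge-b", "bw_used": 0.9, "load": 0.3, "distance": 10},  # bw fails
    {"name": "edge-c", "bw_used": 0.4, "load": 0.2, "distance": 20},  # passes
]
target = pick_target_edge(nodes)
```

Here `edge-b` is rejected for insufficient bandwidth, leaving `edge-c` as the only qualifying candidate.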
FIG. 3 is a block diagram of a data processing system, shown in accordance with an exemplary embodiment. As shown in fig. 3, a data processing system, comprising:
an edge node as described in any of the above;
and the upper node is used for providing required target data to the edge node.
In one exemplary embodiment, the upper node comprises one or at least two servers, at least one of which comprises a second storage back-end module, wherein the second storage back-end module comprises:
the network front end unit is used for carrying out network interaction of data with the edge node;
and the storage back-end unit is used for providing target data required by the service request.
Referring to the structure of the upper node in the related art shown in FIG. 1, which comprises both a storage back end and a network distribution front end, the network front-end unit of the storage back end here can implement the function of the network distribution front end, so deployment of a separate network distribution front end can be omitted. That is, the now-redundant network distribution front end is removed from the upper node entirely, which markedly reduces system loss, improves service throughput, and eliminates performance loss point 2 of the related art.
In addition, unlike the edge node of FIG. 2, the storage function of the upper node is simpler, so it is not necessary to decouple the network front-end unit from the storage function unit as is done in the first storage back-end module.
In FIG. 1, the network communication function is implemented by two groups of modules (network distribution front ends and network front ends); in FIG. 3, it is implemented by one group of network front-end modules and one group of network front-end units. This can reduce network performance loss by about 40%, a loss that accounts for more than 20% of the overall performance loss.
In an exemplary embodiment, the upper node includes one or at least two servers, wherein at least one server includes a second storage back-end module, and the second storage back-end module of the upper node and the first storage back-end module of the edge node use the same cluster system for data storage.
In an exemplary embodiment, the second storage backend module in the upper node and the first storage backend module in the edge node use the same cluster system for data storage, so that data stored in the upper node and the edge node can be communicated with each other, and the purpose of data sharing is achieved.
In an exemplary embodiment, the file caching strategy and file acquisition strategy can be customized, reducing bandwidth consumption and network cost while increasing file acquisition speed. A conventional storage back end does not consider cluster cooperation, yet in an actual CDN both intra-node and inter-node operation involve cluster cooperation, so the storage back end shown in FIG. 1 may suffer from duplicated and wasted stored files. The present method and system avoid this problem well; in addition, the storage logic can be customized, for example hot-spot balancing of files among servers and selection of return-to-parent paths.
In an exemplary embodiment, the upper node is further configured to, when the target data is not stored locally, determine whether a target upper node storing the target data exists among the other upper nodes in communication with it, obtaining a third determination result; when the third determination result indicates that such a target upper node exists, acquire the target data from the target upper node; or, when the third determination result indicates that no upper node stores the target data, acquire the target data from the source station.
FIG. 4 is a schematic diagram illustrating a request processing method in accordance with an example embodiment. As shown in fig. 4, the manner of obtaining the target data by the edge node includes three levels, which are the edge node, the upper node and the source station in sequence; each acquisition mode is explained below:
1. operations to obtain data from other edge nodes, comprising:
when the edge node does not store the target data locally, it judges whether a target edge node storing the target data exists among the other edge nodes; if so, the target data is obtained from the target edge node;
2. an operation of obtaining data from an upper node, comprising:
if no other edge node stores the target data, the edge node requests the target data from the upper node; the upper node judges whether it stores the target data locally; if so, it reads the target data and sends it to the edge node;
if the upper node does not store the target data, it judges whether acquiring the target data from other upper nodes is allowed; if allowed, it judges whether a target upper node storing the target data exists among the other upper nodes; if so, the target data is obtained through that target upper node and, once obtained, is sent to the edge node;
3. an operation of obtaining data from the source station, comprising:
if acquiring the target data from other upper nodes is not allowed, the upper node acquires the target data from the source station and, after obtaining it, sends it to the edge node;
and if acquiring the target data from other upper nodes is allowed but none of them stores it, the target data is acquired from the source station and, after being obtained, is sent to the edge node.
By adding the operation of judging whether other upper nodes hold a cached copy, the pressure exerted by upper nodes on the source station can be reduced, lowering source-station cost.
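The tiered acquisition flow of FIG. 4 can be sketched as a single lookup chain: local edge storage, then partner edge nodes, then the upper node, then (if permitted) other upper nodes, and finally the source station. The function signature and the dict-based stores are illustrative assumptions standing in for real storage back ends.

```python
# Minimal sketch of the three-tier lookup: each tier is consulted in order,
# and the tier that supplied the data is reported alongside the data itself.
def fetch(url, local, partners, upper, peers_allowed, upper_peers, origin):
    if url in local:                  # 0) already cached on this edge server
        return local[url], "local"
    for store in partners:            # 1) other edge nodes (via their indexes)
        if url in store:
            return store[url], "edge"
    if url in upper:                  # 2) the upper node's own storage
        return upper[url], "upper"
    if peers_allowed:                 # 2b) other upper nodes, if allowed
        for store in upper_peers:
            if url in store:
                return store[url], "upper-peer"
    return origin[url], "origin"      # 3) fall back to the source station

origin = {"/a": b"data"}
data, tier = fetch("/a", {}, [{}], {}, True, [{}], origin)
```

With every cache tier empty, the request falls all the way through to the origin, which matches the worst-case path in the figure.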
A cloud distribution network system comprising one or more edge nodes and one or more layers of upper level nodes, wherein:
the edge node comprises one or more servers, each server comprises a network front-end module and a first storage back end, and the network front-end module is used for receiving a service request sent by a user terminal;
the superior node comprises one or more servers, and each server comprises a second storage back end; and the first storage back end and the second storage back end perform file index synchronization and are used for responding to an acquisition request of the network front-end module for target data.
If the storage back end finds that the requested file is not locally available, the storage back end requests other edge nodes or upper nodes for the file.
After the file is requested, the file is stored locally, and required information (including but not limited to the URL of the requested file, the timestamp of the requested file, and the file size) is uniformly placed in one file.
The file which integrates all the stored file information of the local server is the file index.
Within the same node, each server synchronizes the index information of the other servers, so inside a node the index is shared transparently.
Between different nodes, the storage back end of an edge server queries the storage back end of any (or a designated) upper node or other edge node as to whether the target file exists. The upper node or other edge node queries its own node's index directly and reports the result to the querying party, rather than synchronizing the index to it.
Whether in an edge node or an upper node, every storage back end holds a file index.
The file index is not itself one of the stored files; rather, it is a directory listing of the stored files. Maintaining it is a function of the storage back end.
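The file-index record described above (append the URL, request timestamp, and size of each stored file to one consolidated structure) can be sketched like this; the field names are illustrative assumptions, since the text only says the information "includes but is not limited to" these three items.

```python
# Sketch of the file index: after a file is fetched and stored locally, its
# metadata is appended to a single index, which acts as a directory listing
# of everything this server has stored.
import json
import time

def record_file(index, url, size, ts=None):
    """Append one stored file's metadata to the node's file index."""
    index.append({
        "url": url,                                   # URL of the requested file
        "timestamp": ts if ts is not None else time.time(),
        "size": size,                                 # file size in bytes
    })
    return index

index = []
record_file(index, "/video/1.mp4", 1048576, ts=1700000000)
line = json.dumps(index[0], sort_keys=True)  # one serialized index entry
```

Serializing entries (here as JSON lines) makes the index easy to sync to other servers in the node, as described above.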
FIG. 5 is a flowchart illustrating a data processing method according to an exemplary embodiment. The method is applied to an edge node comprising one or at least two servers, and includes:
step 501, after receiving a service request sent by a user terminal, analyzing the service request by using a network front-end module in a server, and determining target data for responding to the service request;
step 502, obtaining the target data from a first storage back-end module in the server;
in an exemplary embodiment, if there is only one server in the edge node and it does not store the target file, the target data may be obtained from the upper node directly on the basis that the edge node has no target data.
Step 503, responding to the service request by using the target data.
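Steps 501-503 can be sketched as follows. The class and method names (`NetworkFrontEnd.parse`, `StorageBackEnd.get`, `handle_request`) are assumptions for illustration, and the sketch assumes a local storage hit; the real back end would fall through to the tiers described earlier on a miss.

```python
# Hedged sketch of the FIG. 5 flow: the network front-end module parses the
# service request to determine the target data (step 501), the first storage
# back-end module supplies it (step 502), and the response is built from it
# (step 503).
class NetworkFrontEnd:
    def parse(self, request):
        # Step 501: determine the target data identified by the request.
        return request["url"]

class StorageBackEnd:
    def __init__(self, store):
        self.store = store

    def get(self, url):
        # Step 502: obtain the target data (local hit assumed in this sketch).
        return self.store.get(url)

def handle_request(request, front_end, back_end):
    url = front_end.parse(request)
    data = back_end.get(url)
    return {"status": 200, "body": data}  # Step 503: respond with the data

resp = handle_request({"url": "/a"}, NetworkFrontEnd(),
                      StorageBackEnd({"/a": b"hello"}))
```

Keeping parsing and storage in separate objects mirrors the document's point that network logic is isolated from storage logic.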
In an exemplary embodiment, obtaining the target data from a first storage back-end module within the server includes:
judging whether, among the servers in the edge node, there is a third target server storing the target data, to obtain a fourth determination result;
when the fourth determination result indicates that a third target server storing the target data exists among the servers in the edge node, requesting the target data from the third target server via the first storage back-end module of the server; the edge node comprises at least two servers, wherein the first storage back-end modules of the at least two servers use the same cluster system for data storage.
In an exemplary embodiment, after judging whether a third target server storing the target data exists among the servers in the edge node and obtaining the fourth determination result, the method further includes:
when the fourth determination result indicates that no server in the edge node is a third target server storing the target data, querying the server of the upper node corresponding to the edge node to judge whether it stores the target data, obtaining a fifth determination result, wherein the second storage back-end module of the upper node's server and the first storage back-end module of the edge node's server use the same cluster system for data storage;
when the fifth determination result indicates that an upper node storing the target data exists, acquiring the target data from the second storage back-end module of that upper node;
and when the fifth determination result indicates that no upper node stores the target data, acquiring the target data from a source station.
In one exemplary embodiment, when at least one edge node storing the target data is determined, the target edge node is determined according to three conditions:
the bandwidth information of an edge node storing the target data conforms to a preset ample-bandwidth judgment strategy;
the load information of an edge node storing the target data conforms to a preset light-load judgment strategy;
and the communication distance between an edge node storing the target data and the edge node conforms to a preset short-distance judgment strategy.
In an exemplary embodiment, if at least two edge nodes storing the target data meet the three conditions, randomly determining a target edge node from the edge nodes meeting the three conditions;
and if all the edge nodes storing the target data do not meet the three conditions, acquiring the target data from a superior node corresponding to the edge nodes.
In one exemplary embodiment, whether any of the other edge nodes in communication with the edge node stores the target data is determined by:
acquiring data index information sent by other edge nodes communicating with the edge node, wherein the data index information is index information established by the other edge nodes on the stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
In the method provided by the exemplary embodiment of the present disclosure, after receiving a service request sent by a user terminal, a network front-end module in a server is used to analyze the service request, determine target data for responding to the service request, obtain the target data from a first storage back-end module in the server, and respond to the service request by using the target data, so that a network function is implemented by the network front-end module, a service logic is isolated from a storage logic, a resource loss is reduced, and a service throughput is improved.
A computer-readable storage medium having stored thereon a computer program which, when executed, implements the steps of any of the methods described above.
FIG. 6 is a block diagram illustrating a computer device 600 according to an example embodiment. For example, the computer device 600 may be provided as a server. Referring to fig. 6, the computer device 600 includes a processor 601, and the number of processors may be set to one or more as necessary. The computer device 600 further comprises a memory 602 for storing instructions, such as application programs, executable by the processor 601. The number of the memories can be set to one or more according to needs. Which may store one or more application programs. The processor 601 is configured to execute instructions to perform the above-described method.
As will be appreciated by one skilled in the art, the embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer, and the like. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element introduced by the phrase "comprising a" does not exclude the presence of additional like elements in the article or device comprising that element.
While the preferred embodiments herein have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of this disclosure.
It will be apparent to those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope thereof. Thus, it is intended that such changes and modifications be included herein, provided they come within the scope of the appended claims and their equivalents.

Claims (22)

1. An edge node is characterized by comprising at least two servers, wherein the storage back-end system only comprises a first storage back-end module, and the first storage back-end module of each server uses the same cluster system for data storage; wherein the first storage backend module comprises:
the judging unit is used for judging whether a server in the edge node stores target data or not after the target data used for responding to the service request is determined, and obtaining a first judgment result;
the first requesting unit is configured to request the first storage backend module of the first target server for the target data when the first determination result indicates that there is the first target server storing the target data in the servers in the edge node.
2. The edge node of claim 1, wherein the first storage backend module further comprises:
a selecting unit, configured to select one server from the other available edge nodes or upper node servers as a second target server when the query result indicates that no server in the edge node stores the target data;
and the second request unit is used for controlling the first storage back-end module in the edge node to request the target data from a second target server.
3. The edge node of claim 2, wherein the second request unit comprises:
a determining subunit, configured to determine whether any of the other edge nodes in communication with the edge node stores the target data, to obtain a second determination result;
a first request subunit, configured to, when the second determination result indicates that the target data exists, obtain the target data from the storage back-end module of the other edge node;
and a second request subunit, configured to, when the second determination result indicates that no other edge node stores the target data, obtain the target data from the storage back-end module of the upper node corresponding to the edge node.
4. The edge node according to claim 3, wherein, when the determining subunit determines that at least one edge node stores the target data, it determines the target edge node according to three conditions:
the bandwidth information of an edge node storing the target data conforms to a preset ample-bandwidth judgment strategy;
the load information of an edge node storing the target data conforms to a preset light-load judgment strategy;
and the communication distance between an edge node storing the target data and the edge node conforms to a preset short-distance judgment strategy.
5. The edge node of claim 4, wherein:
if at least two edge nodes storing the target data meet the three conditions, a target edge node is randomly determined from the qualifying edge nodes;
and if none of the edge nodes storing the target data meets the three conditions, the target data is acquired from the upper node corresponding to the edge node.
6. The edge node according to any of claims 3 to 5, wherein the determining subunit determines whether any of the other edge nodes in communication with the edge node stores the target data by:
acquiring data index information sent by other edge nodes communicating with the edge node, wherein the data index information is index information established by the other edge nodes on the stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
7. The edge node of claim 1, wherein each server further comprises:
and the network front-end module is used for analyzing the received service request, determining target data for responding to the service request, acquiring the target data from the first storage back-end module and responding to the service request by utilizing the target data.
8. The edge node of claim 7, wherein the network front-end module comprises:
the acquisition unit is used for acquiring the customized request information carried in the service request after the service request is analyzed;
and the processing unit is used for processing the service request according to the customized request information.
9. A data processing system, comprising:
the edge node of any of claims 1 to 8;
and an upper node, configured to provide required target data to the edge node.
10. The system of claim 9, wherein the upper node comprises at least one server, wherein the at least one server comprises a second storage back-end module, the second storage back-end module comprising:
and the storage back-end unit is used for providing target data required by the service request.
11. The system according to claim 9 or 10, wherein the upper level node comprises at least one server, wherein the at least one server comprises a second storage back-end module, and wherein the second storage back-end module of the upper level node and the storage back-end module of the edge node use the same cluster system for data storage.
12. The system of claim 9, wherein:
the superior node is also used for judging whether a target superior node for storing the target data exists in other superior nodes communicated with the superior node when the target data is not stored locally, and obtaining a third judgment result; when the third judgment result is that a target superior node for storing the target data exists, acquiring the target data from the target superior node; or, when the third determination result is that there is no target upper node storing the target data, the target data is acquired from the source station.
13. A request processing method is applied to an edge node and comprises the following steps:
after determining target data for responding to a service request, judging whether, among the servers in the edge node, there is a third target server storing the target data, to obtain a fourth determination result;
when the fourth determination result indicates that a third target server storing the target data exists among the servers in the edge node, requesting the target data from the third target server via the first storage back-end module of the server; the edge node comprises at least two servers, wherein the first storage back-end modules of the at least two servers use the same cluster system for data storage.
14. The method according to claim 13, wherein, after judging whether a third target server storing the target data exists among the servers in the edge node and obtaining the fourth determination result, the method further comprises:
when the fourth determination result indicates that no server in the edge node is a third target server storing the target data, querying the server of the upper node corresponding to the edge node to judge whether it stores the target data, obtaining a fifth determination result, wherein the second storage back-end module of the upper node's server and the first storage back-end module of the edge node's server use the same cluster system for data storage;
when the fifth determination result indicates that an upper node storing the target data exists, acquiring the target data from the second storage back-end module of that upper node;
and when the fifth determination result indicates that no upper node stores the target data, acquiring the target data from a source station.
15. The method of claim 14, wherein:
when at least one edge node storing the target data is determined, the target edge node is determined according to three conditions:
the bandwidth information of an edge node storing the target data conforms to a preset ample-bandwidth judgment strategy;
the load information of an edge node storing the target data conforms to a preset light-load judgment strategy;
and the communication distance between an edge node storing the target data and the edge node conforms to a preset short-distance judgment strategy.
16. The method of claim 14, wherein:
if at least two edge nodes storing the target data meet the three conditions, randomly determining a target edge node from the edge nodes meeting the three conditions;
and if all the edge nodes storing the target data do not meet the three conditions, acquiring the target data from a superior node corresponding to the edge nodes.
17. The method according to any one of claims 14 to 16, wherein determining whether any of the other edge nodes in communication with the edge node stores the target data comprises:
acquiring data index information sent by other edge nodes communicating with the edge node, wherein the data index information is index information established by the other edge nodes on the stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
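Claim 17's index lookup can be illustrated as a keyword search over the indexes advertised by peer edge nodes; the data shapes below are assumptions:

```python
# Sketch of claim 17: each peer edge node advertises an index of the keys it
# stores; the local node searches those indexes for the target key.

def find_target_edge_nodes(peer_indexes, target_key):
    """Return the peer edge nodes whose advertised index lists target_key."""
    return [node for node, keys in peer_indexes.items() if target_key in keys]

peer_indexes = {
    "edge-1": {"video/1.ts", "video/2.ts"},
    "edge-2": {"video/2.ts", "img/logo.png"},
    "edge-3": {"img/banner.png"},
}
print(find_target_edge_nodes(peer_indexes, "video/2.ts"))  # ['edge-1', 'edge-2']
print(find_target_edge_nodes(peer_indexes, "video/9.ts"))  # []
```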
18. The method of any one of claims 14 to 16, wherein determining the target data for responding to the service request comprises:
after receiving a service request sent by a user terminal, parsing the service request with a network front-end module in the server, and determining the target data for responding to the service request.
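The front-end parsing step of claim 18 might look like the following sketch, which assumes (the patent does not say) that the URL path of the request identifies the target data:

```python
from urllib.parse import urlsplit

# Sketch of claim 18: the network front-end module parses an incoming
# service request and maps it to the target-data key. Treating the URL
# path as that key is an assumption for illustration.

def parse_service_request(request_line):
    method, url, _version = request_line.split(" ")
    parts = urlsplit(url)
    return method, parts.path.lstrip("/")   # (HTTP method, target-data key)

print(parse_service_request("GET http://cdn.example.com/video/1.ts?token=x HTTP/1.1"))
# ('GET', 'video/1.ts')
```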
19. The method of claim 13, wherein after parsing the service request with a network front-end module within a server and determining target data for responding to the service request, the method further comprises:
acquiring customized request information carried in the service request;
and processing the service request according to the customized request information.
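Claim 19's customized request information could, for illustration, be carried as query parameters of the request URL; the `x_`-prefixed parameter names below are hypothetical:

```python
from urllib.parse import parse_qs, urlsplit

# Sketch of claim 19: customized request information carried alongside the
# service request (here, as query parameters) drives extra per-request
# processing. The parameter names are hypothetical.

def extract_custom_info(url):
    params = parse_qs(urlsplit(url).query)
    # Keep only the (hypothetical) customization parameters.
    return {k: v[0] for k, v in params.items() if k.startswith("x_")}

info = extract_custom_info("http://cdn.example.com/v/1.ts?x_rate=2mbps&x_range=0-1023&token=abc")
print(info)  # {'x_rate': '2mbps', 'x_range': '0-1023'}
```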
20. A computer-readable storage medium having stored thereon a computer program which, when executed, implements the steps of the method according to any one of claims 13 to 19.
21. A computer device comprising a processor, a memory, and a computer program stored on the memory, wherein the steps of the method according to any one of claims 13 to 19 are implemented when the computer program is executed by the processor.
22. A cloud distribution network system comprising one or more edge nodes and one or more layers of upper level nodes, wherein:
the edge node comprises one or more servers, each server comprises a network front-end module and a first storage back end, and the network front-end module is used for receiving a service request sent by a user terminal;
the superior node comprises one or more servers, and each server comprises a second storage back end; and the first storage back end and the second storage back end perform file index synchronization and are used for responding to an acquisition request of the network front-end module for target data.
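The file index synchronization between the first and second storage back ends in claim 22 can be sketched under the simplifying assumption that both sides converge on the union of their known keys; a real system would exchange incremental deltas rather than full sets:

```python
# Sketch of the claim-22 file-index synchronization between an edge server's
# first storage back end and a superior node's second storage back end:
# both converge on the union of known file keys. This simplification is
# illustrative only.

def sync_indexes(first_index, second_index):
    merged = first_index | second_index
    return set(merged), set(merged)   # each side gets its own copy

first = {"a.bin", "b.bin"}
second = {"b.bin", "c.bin"}
first, second = sync_indexes(first, second)
print(sorted(first))    # ['a.bin', 'b.bin', 'c.bin']
print(first == second)  # True
```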
CN201911413095.9A 2019-12-31 2019-12-31 Request processing method and system and edge node Active CN113127414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911413095.9A CN113127414B (en) 2019-12-31 2019-12-31 Request processing method and system and edge node


Publications (2)

Publication Number Publication Date
CN113127414A true CN113127414A (en) 2021-07-16
CN113127414B CN113127414B (en) 2023-05-23

Family

ID=76770311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911413095.9A Active CN113127414B (en) 2019-12-31 2019-12-31 Request processing method and system and edge node

Country Status (1)

Country Link
CN (1) CN113127414B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020133491A1 * 2000-10-26 2002-09-19 Prismedia Networks, Inc. Method and system for managing distributed content and related metadata
CN102204218A * 2011-05-31 2011-09-28 Huawei Technologies Co., Ltd. Data processing method, buffer node, collaboration controller, and system
CN104243425A * 2013-06-19 2014-12-24 Shenzhen Tencent Computer Systems Co., Ltd. Content management method, device and system in content delivery network
CN106790324A * 2015-11-20 2017-05-31 Huawei Technologies Co., Ltd. Content distribution method, virtual server management method, cloud platform and system
US20170318094A1 * 2016-04-29 2017-11-02 Netapp, Inc. Self-organizing storage system for asynchronous storage service
CN107483614A * 2017-08-31 2017-12-15 BOE Technology Group Co., Ltd. Content scheduling method and communication network based on CDN and P2P networks
CN109871498A * 2018-12-15 2019-06-11 Ping An Life Insurance Company of China, Ltd. Back-end interface response method, device, electronic equipment and storage medium
CN110336885A * 2019-07-10 2019-10-15 Shenzhen Onething Technologies Co., Ltd. Edge node distribution method, device, dispatch server and storage medium
CN110365747A * 2019-06-24 2019-10-22 Beijing QIYI Century Science & Technology Co., Ltd. Network request processing method, device, server and computer-readable storage medium
CN110392876A * 2017-03-10 2019-10-29 Pure Storage, Inc. Synchronously replicating datasets and other managed objects to cloud-based storage systems
CN110392094A * 2019-06-03 2019-10-29 Wangsu Science & Technology Co., Ltd. Method for acquiring service data and converged CDN system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HYUNSEOK CHANG ET AL.: "Bringing the Cloud to the Edge" *
WANG, Jian: "Design and application research of a lightweight edge computing platform scheme" *


Similar Documents

Publication Publication Date Title
US20190124140A1 (en) Method and system for content delivery of mobile terminal applications
CN112218100A (en) Content distribution network, data processing method, device, equipment and storage medium
CN111935315A (en) Block synchronization method and device
EP3970344B1 (en) Cache management in content delivery systems
CN110035306A File deployment method and device, and file scheduling method and device
CN109618003B (en) Server planning method, server and storage medium
CN106559241A Application log collection and sending method, device and system, and log server
US8984100B2 (en) Data downloading method, terminal, server, and system
CN111556123A (en) Self-adaptive network rapid configuration and load balancing system based on edge calculation
CN110602232A (en) Terminal system version downloading method, device and system based on peer-to-peer network idea
CN114401261A (en) File downloading method and device
US11606415B2 (en) Method, apparatus and system for processing an access request in a content delivery system
CN113132439B (en) Data processing method and system and edge node
CN113127414B (en) Request processing method and system and edge node
CN112491951A (en) Request processing method, server and storage medium in peer-to-peer network
RU2522995C2 (en) Method and apparatus for creating peer-to-peer group in peer-to-peer application and method of using peer-to-peer group
CN110581873A Cross-cluster redirection method and monitoring server
CN114466031A (en) CDN system node configuration method, device, equipment and storage medium
CN114338724A (en) Block synchronization method and device, electronic equipment and storage medium
WO2022022842A1 (en) Service request handling
CN114615333B (en) Resource access request processing method, device, equipment and medium
US20240015232A1 (en) Bound service request handling
CN114615333A (en) Resource access request processing method, device, equipment and medium
CN115022177A (en) CDN system, back-to-source method, CDN node and storage medium
CN117527508A (en) Message sending method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40056799

Country of ref document: HK

GR01 Patent grant