CN113127414B - Request processing method and system and edge node


Info

Publication number
CN113127414B
CN113127414B (application CN201911413095.9A)
Authority
CN
China
Prior art keywords
target data
edge node
server
node
target
Prior art date
Legal status
Active
Application number
CN201911413095.9A
Other languages
Chinese (zh)
Other versions
CN113127414A (en)
Inventor
方云麟
童剑
Current Assignee
Guizhou Baishancloud Technology Co Ltd
Original Assignee
Guizhou Baishancloud Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guizhou Baishancloud Technology Co Ltd filed Critical Guizhou Baishancloud Technology Co Ltd
Priority to CN201911413095.9A
Publication of CN113127414A
Application granted
Publication of CN113127414B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/10 File systems; File servers
    • G06F16/13 File access structures, e.g. distributed indices
    • G06F16/134 Distributed indices

Abstract

The present disclosure relates to a request processing method and system and an edge node. The edge node comprises one or at least two servers, at least one of which comprises a network front-end module and a first storage back-end module, wherein: the network front-end module is configured to receive a service request sent by a user terminal, parse the service request, determine target data for responding to the service request, obtain the target data from the first storage back-end module, and respond to the service request using the target data; and the first storage back-end module is configured to respond to the network front-end module's acquisition request for the target data.

Description

Request processing method and system and edge node
Technical Field
The present disclosure relates to the field of communications, and in particular, to a method and system for processing a request, and an edge node.
Background
A content delivery network (CDN) delivers origin content to the node closest to the user, so that the user can obtain the required content nearby, improving response speed and the success rate of user access. CDNs mitigate the access latency caused by network distribution, bandwidth, and server performance, and suit scenarios such as site acceleration, video on demand, and live streaming.
In the related art, a user initiates a service request to an edge node through a client. The edge node queries locally whether data corresponding to the service request is stored; if the data is found, the edge node responds to the service request using that data; if not, the edge node obtains the data from its upper node and then responds to the service request with it.
In the process of the edge node responding to the service request, resource consumption is excessive.
Disclosure of Invention
To overcome at least one of the problems in the related art, a request processing method and system and an edge node are provided herein.
An edge node comprises at least two servers, wherein the storage back-end system only comprises a first storage back-end module, and the first storage back-end module of each server uses the same cluster system for data storage; wherein the first storage backend module comprises:
the judging unit is used for judging whether the server in the edge node stores the target data after determining the target data for responding to the service request, so as to obtain a first judging result;
And the first request unit is used for requesting the target data from a first storage back-end module of the first target server when the first judgment result is that the first target server storing the target data exists in the servers in the edge node.
In one exemplary embodiment, the first storage backend module further includes:
a selecting unit, configured to select, when the first determination result indicates that no server in the edge node stores the target data, one server from the other available edge nodes or upper node servers as a second target server;
and the second request unit is used for controlling the first storage back-end module in the edge node to request the target data from the second target server.
In an exemplary embodiment, the second request unit includes:
a judging subunit, configured to judge whether the target data is stored in another edge node that communicates with the edge node, so as to obtain a second judging result;
the first request subunit is used for acquiring the target data from the storage back-end modules of the other edge nodes when the second judging result is that the target data exists;
And the second request subunit is used for acquiring the target data from the storage back-end module of the upper node corresponding to the edge node when the second judging result is that the target data is not available.
In an exemplary embodiment, the determining subunit, when determining that there is at least one of the edge nodes storing the target data, determines the target edge node by the following three conditions, including:
the bandwidth information of the edge node storing the target data accords with a preset bandwidth abundance judgment strategy;
the load information of the edge node storing the target data accords with a preset light load judgment strategy;
the communication distance between the edge node storing the target data and the edge node accords with a preset short-distance judgment strategy.
In an exemplary embodiment, if at least two edge nodes storing the target data meet the above three conditions, the target edge node is determined at random from among the edge nodes meeting the conditions;
and if none of the edge nodes storing the target data meets the three conditions, the target data is obtained from the upper node corresponding to the edge node.
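The node-selection logic described above (three conditions, random choice among qualifying nodes, fallback to the upper node) can be sketched as follows; the concrete thresholds, node fields, and function name are illustrative assumptions, not part of the disclosure.

```python
import random

# Hypothetical thresholds standing in for the preset judgment strategies.
BANDWIDTH_HEADROOM = 0.30   # bandwidth abundance: at least 30% bandwidth free
MAX_LOAD = 0.70             # light load: load at or below 70%
MAX_DISTANCE_MS = 50        # short distance: round-trip under 50 ms

def select_target_edge_node(candidates, rng=random):
    """Pick a target edge node among candidates that store the data.

    A candidate qualifies only if it meets all three conditions:
    sufficient bandwidth, light load, and short communication distance.
    Returns None when no candidate qualifies, in which case the caller
    falls back to the upper node.
    """
    qualified = [
        n for n in candidates
        if n["free_bandwidth"] >= BANDWIDTH_HEADROOM
        and n["load"] <= MAX_LOAD
        and n["distance_ms"] <= MAX_DISTANCE_MS
    ]
    if not qualified:
        return None                  # fall back to the upper node
    return rng.choice(qualified)     # random choice among qualifying nodes
```

The random tie-break distributes load across equally suitable peers instead of always picking the same one.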
In an exemplary embodiment, the determining subunit determines, from other edge nodes in communication with the edge node, whether the target data is stored, by:
Acquiring data index information sent by other edge nodes in communication with the edge node, wherein the data index information is index information established by the other edge nodes for stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
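The index query above (target data as the search keyword over the synchronized data index information) can be sketched as follows, under assumed data structures:

```python
def find_target_edge_node(data_key, index_by_node):
    """Query synchronized data-index information with the target data as
    the search keyword; return the edge node whose index records the
    target data, or None when no index contains it."""
    for node_id, index in index_by_node.items():
        if data_key in index:   # index records the target data
            return node_id      # this node becomes the target edge node
    return None
```

Here `index_by_node` stands for the index information each partner edge node has pushed to this node; its exact form is an assumption.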
In an exemplary embodiment, each server further includes:
the network front-end module is used for analyzing the received service request, determining target data for responding to the service request, acquiring the target data from the first storage back-end module, and responding to the service request by utilizing the target data.
In one exemplary embodiment, the network front end module includes:
the acquisition unit is used for acquiring customized request information carried in the service request after analyzing the service request;
and the processing unit is used for processing the service request according to the customized request information.
A data processing system, comprising:
an edge node as claimed in any one of the above;
and the upper node, which may comprise one or more levels of upper nodes, configured to provide the required target data to the edge node.
In one exemplary embodiment, the upper node includes at least one server, wherein the at least one server includes a second storage back-end module, the second storage back-end module including:
and the storage back-end unit is used for providing target data required by the service request.
In an exemplary embodiment, the upper node includes at least one server, wherein the at least one server includes a second storage back-end module, and wherein the second storage back-end module of the upper node and the storage back-end module of the edge node use the same cluster system for data storage.
In an exemplary embodiment, the upper node is further configured to determine, when the target data is not stored locally, whether there is a target upper node storing the target data from other upper nodes that communicate with the upper node, and obtain a third determination result; when the third judging result is that a target superior node for storing the target data exists, acquiring the target data from the target superior node; or when the third judging result is that the target superior node which does not store the target data is obtained, the target data is obtained from the source station.
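The upper node's fallback chain described above (serve locally, otherwise try peer upper nodes, otherwise the source station) can be sketched as follows; all names and structures are assumed for illustration:

```python
def upper_node_fetch(data_key, local_store, peer_upper_stores, origin_fetch):
    """Resolve target data at an upper node: serve locally if stored;
    otherwise query other upper nodes in communication with this one
    (the third judgment); otherwise fetch from the source station."""
    if data_key in local_store:
        return local_store[data_key]
    for store in peer_upper_stores:   # target upper node storing the data?
        if data_key in store:
            return store[data_key]
    return origin_fetch(data_key)     # last resort: the source station
```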
A request processing method applied to an edge node, comprising:
after determining target data for responding to a service request, judging whether there is, among the servers in the edge node, a third target server storing the target data, to obtain a fourth judgment result;
when the fourth judging result is that the server in the edge node has a third target server storing the target data, requesting the target data from the third target server by utilizing a first storage back-end module of the server; the edge node comprises at least two servers, wherein a first storage back-end module of the at least two servers uses the same cluster system for data storage.
In an exemplary embodiment, after judging whether there is, among the servers in the edge node, a third target server storing the target data and obtaining the fourth judgment result, the method further includes:
when the fourth judgment result is that no server in the edge node is a third target server storing the target data, querying the server of the upper node corresponding to the edge node to judge whether the target data is stored, and obtaining a fifth judgment result, wherein the second storage back-end module of the upper node's server and the first storage back-end module of the edge node's server use the same cluster system for data storage;
When the fifth judging result is that the upper node for storing the target data exists, acquiring the target data from a second storage back-end module of the upper node;
and when the fifth judgment result is that no upper node stores the target data, acquiring the target data from the source station.
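The method steps above can be sketched as a single lookup chain; the cluster-shared back ends are modeled simply as one store per server, and every name here is an illustrative assumption:

```python
def handle_request(data_key, edge_server_stores, upper_store, origin_fetch):
    """Request processing method at an edge node: because the servers'
    first storage back-end modules share one cluster system, any server
    in the node can supply the data (fourth judgment); otherwise the
    upper node's second storage back-end module is queried (fifth
    judgment); otherwise the data comes from the source station."""
    for store in edge_server_stores:   # fourth judgment: a third target server?
        if data_key in store:
            return store[data_key]
    if data_key in upper_store:        # fifth judgment: upper node stores it?
        return upper_store[data_key]
    return origin_fetch(data_key)      # no upper node stores it: source station
```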
In one exemplary embodiment, in determining at least one edge node storing the target data, determining the target edge node by three conditions, including:
the bandwidth information of the edge node storing the target data accords with a preset bandwidth abundance judgment strategy;
the load information of the edge node storing the target data accords with a preset light load judgment strategy;
the communication distance between the edge node storing the target data and the edge node accords with a preset short-distance judgment strategy.
In an exemplary embodiment, if at least two edge nodes storing the target data meet the three conditions, determining the target edge node from the edge nodes meeting the three conditions randomly;
and if none of the edge nodes storing the target data meets the three conditions, obtaining the target data from the upper node corresponding to the edge node.
In one exemplary embodiment, determining whether the target data is stored from other edge nodes in communication with the edge node comprises:
acquiring data index information sent by other edge nodes in communication with the edge node, wherein the data index information is index information established by the other edge nodes for stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
In one exemplary embodiment, determining target data for responding to a service request comprises:
after receiving a service request sent by a user terminal, analyzing the service request by utilizing a network front-end module in a server, and determining target data for responding to the service request.
In an exemplary embodiment, after the analyzing the service request with the network front end module in the server and determining the target data for responding to the service request, the method further includes:
acquiring customized request information carried in the service request;
And processing the service request according to the customized request information.
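A minimal sketch of processing a service request according to customized request information, assuming a simple rule format (URL prefix rewrites and header overrides) that the disclosure does not specify:

```python
def process_customized_request(request, rules):
    """Apply customized request information in the network front-end
    module. The rule format is hypothetical: 'url_rewrite' maps URL
    prefixes to replacements, 'set_headers' forces header values."""
    url = request["url"]
    for prefix, replacement in rules.get("url_rewrite", {}).items():
        if url.startswith(prefix):             # URL rewriting
            url = replacement + url[len(prefix):]
            break
    headers = dict(request.get("headers", {}))
    headers.update(rules.get("set_headers", {}))  # HTTP header modification
    return {"url": url, "headers": headers}
```

An anti-hotlinking policy would fit the same shape, e.g. a rule rejecting requests whose Referer header is off an allowlist.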
A computer readable storage medium having stored thereon a computer program which, when executed, performs the steps of any of the methods described above.
A computer device comprising a processor, a memory and a computer program stored on the memory, characterized in that the processor implements the steps of any one of the methods described above when executing the computer program.
A cloud distribution network system comprising one or more edge nodes and one or more levels of superordinate nodes, wherein:
the edge node comprises one or more servers, wherein the servers comprise a network front-end module and a first storage back-end, and the network front-end module is used for receiving a service request sent by a user terminal;
the upper node comprises one or more servers, and the servers comprise a second storage back end; and the first storage back end and the second storage back end are used for synchronizing file indexes and responding to the acquisition request of the network front end module for the target data.
According to the scheme provided herein, the first storage back-end modules of the servers are deployed in the same cluster system, so that the first storage back-end modules of different servers can communicate with each other and interwork their data. This provides data support for obtaining target data within the edge node: when one server does not store the data, the number of operations in which the edge node fetches the target data from other nodes is effectively reduced, shortening user waiting time and improving response efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate and explain the disclosure, and do not constitute a limitation on the disclosure. In the drawings:
fig. 1 is a schematic structural diagram of a CDN system in the related art.
Fig. 2 is a block diagram of an edge node, according to an example embodiment.
FIG. 3 is a block diagram of a data processing system according to an exemplary embodiment.
FIG. 4 is a schematic diagram illustrating a request processing method according to an example embodiment.
Fig. 5 is a flow chart illustrating a method of request processing according to an exemplary embodiment.
FIG. 6 is a block diagram of a computer device, according to an example embodiment.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments herein more apparent, the technical solutions in the embodiments herein will be described clearly and completely below with reference to the accompanying drawings; the described embodiments are some, but not all, of the embodiments herein. All other embodiments that a person of ordinary skill in the art would obtain from the embodiments herein without undue burden fall within the scope of protection herein. It should be noted that, absent conflict, the embodiments herein and the features therein may be combined with each other arbitrarily.
Addressing the problem of excessive resource consumption while an edge node responds to a service request, the inventors analyzed where the resources are consumed. In a CDN system, a user initiates a service request over HTTP/HTTPS, and the resource consumption while processing the service request includes the following:
1. when starting the data receiving operation, a process must be allocated and a TCP/IP port opened to receive the data; allocating the process and opening the TCP/IP port occupy memory resources;
2. while receiving data, the HTTP/HTTPS protocol carried over TCP/IP must be decoded, which involves:
a. when receiving data, the network card raises a large number of soft interrupt signals (softirq); these soft interrupts consume CPU computing capacity and can account for about 40% of it in a CDN scenario;
b. decoding the data also consumes CPU computing capacity and can occupy 40-70% of CPU resources, or even more.
Through the above analysis, the location of resource consumption can be determined, and the inventor has analyzed the communication architecture of the above location in the related art, including:
fig. 1 is a schematic structural diagram of a CDN system in the related art. As shown in fig. 1, the CDN system includes at least two nodes, at least one being an edge node and at least one being an upper node. Each node may include one or at least two servers, and each server includes a network distribution front end and a storage back end; the storage back end in turn comprises two functional units, a network front end and a storage function. The response procedure for a service request is shown by the arrows in fig. 1.
Under the above architecture, the inventors found that there are at least two performance loss points in the related art, including:
1. whether in an edge node or an upper node, the function of the network distribution front end in the server is to provide http/https protocol support; the network front end embedded in the storage back end independently provides network services, which also include http/https protocol support. That is, the network front end inside the storage back end duplicates functions already implemented by the network distribution front end, so http/https support is developed twice, in the network distribution front end and in the storage back end. This increases the complexity of the processing flow, causes extra performance loss, and wastes resources, making it a performance loss point;
2. the function of the network distribution front end of the upper node is to determine which server's storage back end holds the file requested by a service request; a network distribution front end is deployed specifically to realize this function, causing extra performance loss and resource waste, making it another performance loss point.
Based on the above analysis, the following solutions are proposed for the performance loss points obtained by the above analysis:
Fig. 2 is a block diagram of an edge node, according to an example embodiment. The edge node shown in fig. 2 comprises one or at least two servers, wherein at least one server comprises a network front-end module and a storage back-end system, and the storage back-end system comprises only the first storage back-end module, wherein:
the network front-end module is used for receiving a service request sent by a user terminal, analyzing the service request, determining target data for responding to the service request, acquiring the target data from the first storage back-end module, and responding to the service request by utilizing the target data;
the first storage back-end module is used for responding to the acquisition request of the network front-end module for the target data.
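The front-end/back-end division above can be sketched as follows; the request format and interfaces are illustrative assumptions, since the disclosure does not prescribe them:

```python
def serve_request(raw_request, storage_backend):
    """Network front-end module: parse the service request, determine the
    target data, obtain it from the first storage back-end module, and
    respond with it. Requests are assumed to look like 'GET /path'."""
    method, _, target = raw_request.partition(" ")
    if method != "GET" or not target:
        return (400, None)                 # unparseable service request
    data = storage_backend.get(target)     # acquisition request to back end
    if data is None:
        return (404, None)                 # back end has no such target data
    return (200, data)                     # respond using the target data
```

The back end here is just a mapping; in the scheme it would be the first storage back-end module, which itself never speaks HTTP.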
In an exemplary embodiment, the edge node may comprise only one server, i.e. the server comprises one network front-end module and one first storage back-end module; alternatively, the edge node comprises at least two servers, wherein either each server comprises a network front-end module and a first storage back-end module, or some servers (e.g., one or at least two) comprise a network front-end module and a first storage back-end module while the other servers do not employ the above structure.
Based on the above analysis, because the network front end inside the storage back end in the related art duplicates the function of the network distribution front end, the network front end is decoupled from the storage back end and made an independent functional module, so that the first storage back-end module retains only the storage function, isolating service logic from storage logic.
After the network front end is decoupled from the first storage back-end module of the related art, the first storage back-end module only needs to implement the storage function; its functions are simplified, and the internal functions of the server are cleanly divided. Compared with the related art, where the network front ends in both the network distribution front end and the storage back end consume memory and CPU resources, the scheme provided herein consumes only the resources required for network interaction while keeping the network front-end module and the first storage back-end module operating normally, and the first storage back-end module itself consumes no such resources. Moreover, after decoupling, the data interaction between the network distribution front end and the network front end of the related art is eliminated, simplifying the interaction flow between the first storage back-end module and the network front-end module. This optimizes the data interaction flow between functional modules, reduces resource consumption in the server, and improves CDN program performance. At the same time, no corresponding network front end needs to be developed for the first storage back-end module, avoiding additional development work and lowering the development difficulty of the CDN program.
According to the above analysis, the network distribution module of the edge node is removed, and the network front end realizes both the network distribution function and the network front end function; this markedly reduces the performance loss of the edge node, raises its service throughput, and eliminates performance loss point 1 of the edge node.
In one exemplary embodiment, the network front end module includes:
the acquisition unit is used for acquiring customized request information carried in the service request after analyzing the service request;
and the processing unit is used for processing the service request according to the customized request information.
In an exemplary embodiment, the customized request information describes a personalized service customized for the user, such as URL rewriting, HTTP header modification, or an anti-hotlinking policy implemented over HTTP.
In the related art shown in fig. 1, a customization demand could be implemented selectively in either the network distribution front end or the storage back end; when the code later needs to be modified, it may be impossible to locate where it lives, easily causing functional conflicts. In addition, both the network distribution front end and the storage back end have network functions, and when the two pieces of software are combined, part of the service requests passed to the storage back-end software may be covered by different processing logic, leading to unexpected results. In the system shown in fig. 2, all customization needs of the user are developed in the network front-end module, while the first storage back-end module serves only internally, is not exposed to the client, and is dedicated to the storage function.
Compared with the related art, where a customer's customized demands are deployed on the first storage back-end module, deploying the customized demand functions on the network front-end module clarifies the boundary of the edge node's storage function. This reduces repeated development caused by unclear boundaries, avoids the customized-demand functions in the first storage back-end module and the functions of the network front-end module covering each other, and effectively controls abnormal conditions such as conflicting processing.
After receiving an acquisition request of a network front-end module for target data, the first storage back-end module directly reads the target data if the target data is locally stored, and completes the response to the acquisition request; if the target data is not stored locally, the target data needs to be obtained from outside the server.
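The read path above (serve from local storage, else fetch from outside the server) can be sketched as follows; caching the fetched data locally afterward is an added assumption, not stated in the text:

```python
def storage_backend_get(data_key, local_store, fetch_external):
    """First storage back-end module: if the target data is stored
    locally, read it directly to answer the acquisition request;
    otherwise obtain it from outside the server."""
    if data_key in local_store:
        return local_store[data_key]     # direct local read
    data = fetch_external(data_key)      # cluster peer, other node, or upper node
    local_store[data_key] = data         # cache for later requests (assumption)
    return data
```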
The implementation manner of obtaining the target data from the outside of the server specifically includes:
in an exemplary embodiment, the edge node includes at least two servers, and a first storage back-end module of the at least two servers uses the same cluster system to store data; wherein the first storage back-end module further comprises:
The judging unit is used for judging whether the server in the edge node stores the target data or not to obtain a first judging result;
and the first request unit is used for requesting the target data from a first storage back-end module of the first target server when the first judgment result is that the first target server storing the target data exists in the servers in the edge node.
In an exemplary embodiment, the same edge node may include at least two servers, and the first storage back-end module of each server uses the same cluster system to store data, so that the data of the first storage back-end modules in the servers in the same edge node are communicated, and the purpose of data sharing is achieved.
In an exemplary embodiment, the data stored by the servers of the same edge node may be used to create index information for the stored data by the servers, and synchronize the index information to other servers in the edge node, so that the other servers can conveniently learn the storage position of the data, and provide an operation basis for reading the data across the servers.
The first storage back end modules of the servers are deployed in the same cluster system, so that the first storage back end modules of different servers can communicate with each other, data intercommunication is realized, data support is provided for obtaining target data in the edge node, when the server does not store data, the operation times of the edge node for obtaining the target data to other nodes are effectively reduced, the waiting time of users is shortened, and the response efficiency is improved.
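A sketch of first storage back-end modules sharing one cluster system, using a synchronized key-to-server index as an assumed mechanism for the data interworking described above:

```python
class ClusterBackend:
    """First storage back-end modules deployed in one cluster system:
    each server keeps its own store but a synchronized index records
    which server holds each key, so data is interworked across the
    node. The index mechanism is an illustrative assumption."""
    def __init__(self):
        self.stores = {}    # server_id -> {key: data}
        self.index = {}     # key -> server_id holding the data

    def put(self, server_id, key, data):
        self.stores.setdefault(server_id, {})[key] = data
        self.index[key] = server_id      # index synchronized to all servers

    def get(self, key):
        owner = self.index.get(key)      # cross-server read via the index
        if owner is None:
            return None                  # no server in this node stores it
        return self.stores[owner][key]
```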
When the edge node does not store the target data, the data must be acquired from nodes other than the edge node. In the related art, the first storage back-end module of the server directly initiates the acquisition request to nodes outside the edge node. Unlike the related art, here the first storage back-end module of another server is selected to assist the server in realizing this function, specifically including:
in one exemplary embodiment, the first storage backend module further includes:
a selecting unit, configured to select, when the first determination result indicates that no server in the edge node stores the target data, one server from the other available edge nodes or upper node servers as a second target server;
and the second request unit is used for controlling the first storage back-end module in the edge node to request the target data from the second target server.
In an exemplary embodiment, the second target server selected by the selecting unit may be selected according to the load state and bandwidth usage information of the server, so as to implement load balancing in the edge node, and on the premise of ensuring that data acquisition can be completed, fully utilize resources of the server, and avoid aggravating loads of individual servers and affecting normal processing of service requests.
In the related art, when the edge node does not store the target data, the edge node requests the target data from the upper node. Unlike the related art, in the solution provided herein the edge node may first interact with other edge nodes to obtain the target data, i.e. edge-node-to-edge-node data acquisition.
In an exemplary embodiment, the second request unit includes:
a judging subunit, configured to judge whether the target data is stored in another edge node that communicates with the edge node, so as to obtain a second judging result;
the first request subunit is used for acquiring the target data from the storage back-end modules of the other edge nodes when the second judging result is that the target data exists;
and the second request subunit is used for acquiring the target data from the storage back-end module of the upper node corresponding to the edge node when the second judging result is that the target data is not available.
In one exemplary embodiment, when the edge node stores the target data, the target data is returned to the user directly; when it does not, the target data can be acquired from other edge nodes, reducing the load on the upper node holding the file and avoiding the bandwidth pressure caused by data requests concentrating too heavily on the upper node.
In an exemplary embodiment, the other edge nodes in communication with the edge node may be one or at least two nodes selected in advance for the edge node, and the selected node may serve as a partner node for the edge node, providing data support for the edge node. Wherein the selection of the partner node may be selected based on a distance from the edge node.
In an exemplary embodiment, the determining subunit determines, from other edge nodes in communication with the edge node, whether the target data is stored, by:
acquiring data index information sent by other edge nodes in communication with the edge node, wherein the data index information is index information established by the edge node for stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
In an exemplary embodiment, each edge node establishes corresponding data index information for its locally stored data and periodically (for example, every 5 minutes) synchronizes the data index information to other edge nodes, so that each edge node can conveniently learn what data the other edge nodes store. This provides an operational basis for data sharing among edge nodes and achieves the purpose of data sharing.
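The periodic index synchronization and the subsequent keyword lookup can be sketched as below. This is a hedged toy model: the `EdgeNode` class, its fields, and the push-style `sync_to` method are assumptions for illustration, not the patent's design (in practice the sync would run on a timer, e.g. every 300 seconds).

```python
class EdgeNode:
    """Toy model: each node indexes its local files and periodically
    pushes that index to its peer edge nodes."""
    def __init__(self, name):
        self.name = name
        self.local_index = set()   # keys of locally stored data
        self.peer_indexes = {}     # peer name -> last synced index

    def store(self, key):
        self.local_index.add(key)

    def sync_to(self, peers):
        # periodic synchronization: push a snapshot of the local index
        for peer in peers:
            peer.peer_indexes[self.name] = set(self.local_index)

    def find_target_edge_nodes(self, key):
        # query the synced indexes using the target data as search keyword
        return [name for name, idx in self.peer_indexes.items() if key in idx]
```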
In an exemplary embodiment, the determining subunit, when determining that there is at least one of the edge nodes storing the target data, determines the target edge node by the following three conditions, including:
the bandwidth information of the edge node storing the target data accords with a preset bandwidth abundance judgment strategy;
the load information of the edge node storing the target data accords with a preset light-load judgment strategy;
the communication distance between the edge node storing the target data and the edge node accords with a preset short-distance judgment strategy.
In an exemplary embodiment, taking the edge node storing the target data as the edge node to be selected, an appropriate edge node may be selected as the target edge node according to at least one of bandwidth information, load status and communication distance of the edge node to be selected, wherein:
the bandwidth-adequacy judgment policy may be determined according to the bandwidth allocated to the edge node to be selected: for example, if the bandwidth usage of the edge node to be selected has exceeded or is close to a preset usage threshold, its bandwidth is judged to be tight; otherwise, its bandwidth is judged to be adequate. The usage threshold may be determined based on the charging criterion for bandwidth usage;
the light-load judgment policy may be determined according to the load state of the edge node to be selected: if the value of its load state is smaller than a preset load threshold, the edge node to be selected is judged to meet the light-load judgment policy; otherwise, it is judged not to meet it. The load state may be at least one of a hardware load, a system load, a software load, and a network load;
the short-distance judgment policy may be determined according to the communication distance to the upper node: if the communication distance between the edge node to be selected and the requesting edge node is far smaller than the communication distance between the requesting edge node and the upper node, the edge node to be selected is judged to meet the short-distance judgment policy; otherwise, it is judged not to meet it.
The bandwidth information of an edge node may be collected by the switch and acquired through a query interface provided by the management platform; alternatively, the bandwidth used may be derived from statistics on the servers' client access logs.
If at least two edge nodes with the target data meet the three conditions, randomly determining the target edge node from the edge nodes meeting the conditions;
and if all the edge nodes with the target data do not meet the three conditions, acquiring the target data from the upper node corresponding to the edge node.
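The three-condition filter and the random tie-break described above can be sketched as follows. This is a minimal sketch under assumed data shapes: the candidate dictionaries, field names, and threshold parameters are hypothetical, and returning `None` stands in for falling back to the upper node.

```python
import random

def select_target_edge_node(candidates, bw_threshold, load_threshold,
                            upper_hop_distance):
    """Keep only candidates whose bandwidth usage, load, and communication
    distance all satisfy the preset policies; pick one at random."""
    qualified = [
        c for c in candidates
        if c["bandwidth_usage"] < bw_threshold      # bandwidth-adequacy policy
        and c["load"] < load_threshold              # light-load policy
        and c["distance"] < upper_hop_distance      # short-distance policy
    ]
    if not qualified:
        return None                    # fall back to the upper node
    return random.choice(qualified)    # random pick among qualifying nodes
```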
FIG. 3 is a block diagram of a data processing system according to an exemplary embodiment. As shown in fig. 3, a data processing system, comprising:
An edge node as claimed in any one of the above;
and the upper node is used for providing the required target data for the edge node.
In an exemplary embodiment, the upper node comprises one or at least two servers, wherein the at least one server comprises a second storage back-end module, wherein the at least one second storage back-end module comprises:
the network front-end unit is used for carrying out network interaction of data with the edge node;
and the storage back-end unit is used for providing target data required by the service request.
Referring to the structure of the upper node in the related art shown in fig. 1, each server in the upper node includes a network distribution front end and a storage back end. In contrast, in the present solution the network front-end unit of the storage back-end module can implement the function of the network distribution front end, so deployment of a separate network distribution front end can be omitted. That is, the unneeded "network distribution front end" function is removed entirely from the upper node, which noticeably reduces system loss, improves service throughput, and eliminates "performance loss point 2" of the related art.
In addition, unlike fig. 2, the storage function of the upper node is simpler than that of the edge node, so the network front-end unit and the storage function unit in the second storage back-end module do not need to be decoupled.
The modules implementing the network communication function in fig. 1 are two groups of network distribution front ends and two groups of network front ends, whereas in fig. 3 they are one group of network front-end modules and one group of network front-end units; the network performance loss can therefore be reduced by 40%, and the network performance loss accounts for more than 20% of the total performance loss.
In an exemplary embodiment, the upper node includes one or at least two servers, wherein the at least one server includes a second storage back-end module, and wherein the second storage back-end module of the upper node and the storage back-end module of the edge node use the same cluster system for data storage.
In an exemplary embodiment, the second storage back-end module in the upper node and the first storage back-end module in the edge node use the same cluster system to store data, so that the stored data in the upper node and the edge node can be communicated, and the purpose of data sharing is achieved.
In an exemplary embodiment, the file caching strategy and the file acquisition strategy can be customized, reducing bandwidth consumption and network cost expenditure while improving file acquisition speed. A conventional storage back end does not take cluster cooperation into account, yet intra-node and inter-node cooperation in a CDN both involve cluster cooperation, so using the storage back end shown in fig. 1 can lead to file duplication and waste. The present solution avoids this problem well, and additionally allows customization of storage logic, such as hot-spot balancing of files among servers and selection of the back-to-parent path.
In an exemplary embodiment, the upper node is further configured to determine, when the target data is not stored locally, whether there is a target upper node storing the target data among other upper nodes that communicate with the upper node, and obtain a third determination result; when the third determination result is that a target upper node storing the target data exists, acquire the target data from the target upper node; or, when the third determination result is that no target upper node stores the target data, acquire the target data from the source station.
FIG. 4 is a schematic diagram illustrating a request processing method according to an example embodiment. As shown in fig. 4, the manner in which the edge node acquires the target data includes three levels, which are an edge node, an upper node, and a source station in order; each acquisition mode is described below:
1. an operation of obtaining data from other edge nodes, comprising:
after the edge node does not store the target data locally, judging whether other edge nodes have target edge nodes for storing the target data or not; if yes, acquiring the target data through the target edge node;
2. an operation of acquiring data from a superordinate node, comprising:
If the other edge nodes do not include a target edge node storing the target data, the edge node requests the target data from the upper node; the upper node judges whether the target data is stored locally; if yes, it reads the target data and sends it to the edge node;
if the upper node does not store the target data, it judges whether acquiring the target data from other upper nodes is allowed; if so, it judges whether a target upper node storing the target data exists among the other upper nodes; if yes, the target data is acquired from the target upper node and then sent to the edge node;
3. an operation of acquiring data from a source station, comprising:
if the target data is not allowed to be acquired from other superior nodes, the superior nodes acquire the target data from the source station and send the target data to the edge node after acquiring the target data from the source station;
and if the target data is allowed to be acquired from other upper nodes, but the target data is not stored in other upper nodes, acquiring the target data from a source station, and transmitting the target data to an edge node after acquiring the target data from the source station.
By adding the operation of judging whether other upper nodes have the cache, the pressure of the upper nodes to the source station can be reduced, and the cost of the source station is reduced.
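The three-level lookup of fig. 4 — local edge node, peer edge nodes, upper node, optionally peer upper nodes, and finally the source station — can be sketched as a simple cascade. This is an illustrative sketch only: the sets of keys and the string return values are assumptions, and the source station is modeled as always holding the data.

```python
def resolve(target, edge, peers, upper, upper_peers, allow_upper_peers):
    """Return which tier serves `target`, following the order in fig. 4."""
    if target in edge:
        return "edge"                       # stored locally
    for p in peers:                         # level 1: other edge nodes
        if target in p:
            return "peer-edge"
    if target in upper:                     # level 2: the upper node
        return "upper"
    if allow_upper_peers:                   # optionally, other upper nodes
        for u in upper_peers:
            if target in u:
                return "peer-upper"
    return "origin"                         # level 3: the source station
```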
A cloud distribution network system comprising one or more edge nodes and one or more levels of superordinate nodes, wherein:
the edge node comprises one or more servers, wherein the servers comprise a network front-end module and a first storage back-end, and the network front-end module is used for receiving a service request sent by a user terminal;
the upper node comprises one or more servers, and the servers comprise a second storage back end; and the first storage back end and the second storage back end are used for synchronizing file indexes and responding to the acquisition request of the network front end module for the target data.
If the storage back end finds that the requested file is not available locally, it can request the file from other edge nodes or upper nodes.
After the file is obtained, it is stored locally, and the required information (including but not limited to the URL of the requested file, the timestamp of the request, and the file size) is uniformly appended to a single file. This file aggregates the information of all files stored on the local server, and is referred to as the file index.
Within the same node, each server synchronizes the index information of the other servers, so the index is transparently shared within the node.
Between different nodes, the storage back end of an edge server queries the storage back end of any or a designated upper node or other edge node as to whether the target file exists. The upper node or other edge node directly queries its own node's index and tells the requester the result, rather than synchronizing the index to the querying party.
Whether in an edge node or an upper node, the file index is provided by all storage back ends. The file index is not the stored files themselves but a directory listing of them; maintaining it is a function of the storage back end.
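The index-building step — appending each fetched file's URL, request timestamp, and size to one local index file — can be sketched as below. The line-per-record JSON layout, file path, and field names are assumptions for illustration; the patent only specifies that the information is collected into a single file.

```python
import json
import time

def record_file(index_path, url, size):
    """After a file is fetched, append its URL, request timestamp,
    and size as one line of the local file index."""
    entry = {"url": url, "ts": time.time(), "size": size}
    with open(index_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def load_index(index_path):
    """Read the whole file index back as a list of records."""
    with open(index_path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```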
FIG. 5 is a flowchart illustrating a method of data processing according to an exemplary embodiment. The flowchart shown in fig. 5 is applied to an edge node, where the edge node includes one or at least two servers, and the method includes:
step 501, after receiving a service request sent by a user terminal, analyzing the service request by using a network front-end module in a server, and determining target data for responding to the service request;
step 502, obtaining the target data from a first storage back-end module in the server;
In one exemplary embodiment, if there is only one server within an edge node and the target file is not present, the target data may be obtained from an upper node in accordance with a determination that the edge node does not hold the target data.
Step 503, responding to the service request by using the target data.
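Steps 501–503 can be sketched as a single request-handling function. This is a toy model, not the patent's implementation: the request string format, the dictionary-backed storage back end, and the response shape are all hypothetical.

```python
def handle_request(raw_request, storage_backend):
    """Sketch of steps 501-503: parse the service request, fetch the
    target data from the storage back end, and build the response."""
    # step 501: parse the request and determine the target data key
    target_key = raw_request.strip().split()[-1]   # e.g. "GET /video/a.mp4"
    # step 502: obtain the target data from the first storage back-end module
    data = storage_backend.get(target_key)
    # step 503: respond to the service request with the target data
    if data is None:
        return {"status": 404, "body": b""}
    return {"status": 200, "body": data}
```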
In one exemplary embodiment, retrieving the target data from a first storage back-end module within the server includes:
judging whether the servers in the edge node include a third target server storing the target data, so as to obtain a fourth judgment result;
when the fourth judging result is that the server in the edge node has a third target server storing the target data, requesting the target data from the third target server by utilizing a first storage back-end module of the server; the edge node comprises at least two servers, wherein a first storage back-end module of the at least two servers uses the same cluster system for data storage.
In an exemplary embodiment, after judging whether the servers in the edge node include a third target server storing the target data and obtaining the fourth judgment result, the method further includes:
When the fourth judgment result is that the server in the edge node does not have the third target server storing the target data, inquiring the server of the upper node corresponding to the edge node, judging whether the target data is stored or not, and obtaining a fifth judgment result, wherein the second storage back-end module of the server of the upper node and the first storage back-end module of the server of the edge node use the same cluster system for data storage;
when the fifth judging result is that the upper node for storing the target data exists, acquiring the target data from a second storage back-end module of the upper node;
and acquiring the target data from the source station when the fifth judging result does not include the upper node for storing the target data.
In one exemplary embodiment, in determining at least one edge node storing the target data, determining the target edge node by three conditions, including:
the bandwidth information of the edge node storing the target data accords with a preset bandwidth abundance judgment strategy;
the load information of the edge node storing the target data accords with a preset light load judgment strategy;
The communication distance between the edge node storing the target data and the edge node accords with a preset short-distance judgment strategy.
In an exemplary embodiment, if at least two edge nodes storing the target data meet the three conditions, determining the target edge node from the edge nodes meeting the three conditions randomly;
and if all the edge nodes storing the target data do not meet the three conditions, acquiring the target data from the upper node corresponding to the edge node.
In one exemplary embodiment, determining whether the target data is stored from other edge nodes in communication with the edge node comprises:
acquiring data index information sent by other edge nodes in communication with the edge node, wherein the data index information is index information established by the other edge nodes for stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
According to the method provided by the exemplary embodiments of the invention, after a service request sent by a user terminal is received, the network front-end module in the server analyzes the request and determines the target data for responding to it; the target data is obtained from the first storage back-end module in the server and used to respond to the request. The network function is realized by the network front-end module, the service logic is isolated from the storage logic, resource loss is reduced, and service throughput is improved.
A computer readable storage medium having stored thereon a computer program which when executed performs the steps of the method of any of the preceding claims.
Fig. 6 is a block diagram of a computer device 600, according to an example embodiment. For example, the computer device 600 may be provided as a server. Referring to fig. 6, a computer device 600 includes a processor 601, the number of which may be set to one or more as needed. The computer device 600 further comprises a memory 602 for storing instructions, such as application programs, executable by the processor 601. The number of the memories can be set to one or more according to the requirement. Which may store one or more applications. The processor 601 is configured to execute instructions to perform the above-described method.
It will be apparent to one of ordinary skill in the art that embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. Furthermore, as is well known to those of ordinary skill in the art, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The description herein is with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising … …" does not exclude the presence of additional identical elements in an article or apparatus that comprises the element.
While preferred embodiments herein have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all alterations and modifications as fall within the scope herein.
It will be apparent to those skilled in the art that various modifications and variations can be made herein without departing from the spirit and scope of the disclosure. Thus, given that such modifications and variations herein fall within the scope of the claims herein and their equivalents, such modifications and variations are intended to be included herein.

Claims (19)

1. An edge node, comprising at least two servers, each server comprising a network front-end module and a storage back-end system, the storage back-end system comprising only a first storage back-end module to isolate business logic from storage logic, wherein the first storage back-end module of each server uses the same cluster system for data storage; wherein the first storage backend module comprises:
the judging unit is used for judging whether the server in the edge node stores the target data after determining the target data for responding to the service request, so as to obtain a first judging result;
the first request unit is used for requesting the target data from a first storage back-end module of the first target server when the first judgment result is that the first target server storing the target data exists in the servers in the edge node;
A selecting unit, configured to select, when the first determination result indicates that no server in the edge nodes stores the target data, one server from other available edge nodes or upper node servers as a second target server;
and the second request unit is used for controlling the first storage back-end module in the edge node to request the target data from the second target server.
2. The edge node of claim 1, wherein the second request unit comprises:
a judging subunit, configured to judge whether the target data is stored in another edge node that communicates with the edge node, so as to obtain a second judging result;
the first request subunit is used for acquiring the target data from the storage back-end modules of the other edge nodes when the second judging result is that the target data exists;
and the second request subunit is used for acquiring the target data from the storage back-end module of the upper node corresponding to the edge node when the second judging result is that the target data is not available.
3. The edge node of claim 2, wherein the determining subunit, when determining that there is at least one of the edge nodes storing the target data, determines the target edge node by three conditions, including:
The bandwidth information of the edge node storing the target data accords with a preset bandwidth abundance judgment strategy;
the load information of the edge node storing the target data accords with a preset load judgment strategy;
the communication distance between the edge node storing the target data and the edge node accords with a preset short-distance judgment strategy.
4. The edge node according to claim 3, wherein,
if at least two edge nodes with the target data meet the three conditions, randomly determining the target edge node from the edge nodes meeting the conditions;
and if all the edge nodes with the target data do not meet the three conditions, acquiring the target data from the upper node corresponding to the edge node.
5. An edge node according to any one of claims 2 to 4, wherein the determination subunit determines whether the target data is stored from other edge nodes in communication with the edge node by:
acquiring data index information sent by other edge nodes in communication with the edge node, wherein the data index information is index information established by the other edge nodes for stored data;
And inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
6. The edge node of claim 1, wherein each server further comprises:
the network front-end module is used for analyzing the received service request, determining target data for responding to the service request, acquiring the target data from the first storage back-end module, and responding to the service request by utilizing the target data.
7. The edge node of claim 6, wherein the network front end module comprises:
the acquisition unit is used for acquiring customized request information carried in the service request after analyzing the service request;
and the processing unit is used for processing the service request according to the customized request information.
8. A data processing system, comprising:
an edge node as claimed in any one of claims 1 to 7;
and one or more upper nodes, configured to provide required target data for the edge nodes.
9. The system of claim 8, wherein the upper node comprises at least one server, wherein the at least one server comprises a second storage back-end module, the second storage back-end module comprising:
and the storage back-end unit is used for providing target data required by the service request.
10. The system of claim 8 or 9, wherein the upper node comprises at least one server, wherein the at least one server comprises a second storage back-end module, wherein the second storage back-end module of the upper node and the storage back-end module of the edge node use the same cluster system for data storage.
11. The system according to claim 8, wherein:
the upper node is further configured to determine, when the target data is not stored locally, whether there is a target upper node storing the target data from other upper nodes that communicate with the upper node, and obtain a third determination result; when the third judging result is that a target superior node for storing the target data exists, acquiring the target data from the target superior node; or when the third judging result is that the target superior node which does not store the target data is obtained, the target data is obtained from the source station.
12. A method for processing a request, applied to an edge node, comprising:
after determining target data for responding to a service request, judging whether the servers in the edge node include a third target server storing the target data, so as to obtain a fourth judgment result;
when the fourth judging result is that the server in the edge node has a third target server storing the target data, requesting the target data from the third target server by utilizing a first storage back-end module of the server; the edge node comprises at least two servers, each server comprises a network front-end module and a storage back-end system, the storage back-end system only comprises a first storage back-end module so as to isolate business logic and storage logic, and the first storage back-end modules of the at least two servers use the same cluster system for data storage;
after judging whether the servers in the edge node include a third target server storing the target data and obtaining a fourth judgment result, the method further comprises:
when the fourth judgment result is that the server in the edge node does not have the third target server storing the target data, inquiring the server of the upper node corresponding to the edge node, judging whether the target data is stored or not, and obtaining a fifth judgment result, wherein the second storage back-end module of the server of the upper node and the first storage back-end module of the server of the edge node use the same cluster system for data storage;
When the fifth judging result is that the upper node for storing the target data exists, acquiring the target data from a second storage back-end module of the upper node;
and acquiring the target data from the source station when the fifth judging result does not include the upper node for storing the target data.
13. The method according to claim 12, wherein:
in determining at least one edge node storing the target data, determining the target edge node by three conditions including:
the bandwidth information of the edge node storing the target data accords with a preset bandwidth abundance judgment strategy;
the load information of the edge node storing the target data accords with a preset light load judgment strategy;
the communication distance between the edge node storing the target data and the edge node accords with a preset short-distance judgment strategy.
14. The method according to claim 13, wherein:
if at least two edge nodes stored with the target data meet the three conditions, randomly determining the target edge node from the edge nodes meeting the three conditions;
and if all the edge nodes storing the target data do not meet the three conditions, acquiring the target data from the upper node corresponding to the edge node.
15. A method according to any of claims 12 to 14, wherein determining whether the target data is stored from other edge nodes in communication with the edge node by:
acquiring data index information sent by other edge nodes in communication with the edge node, wherein the data index information is index information established by the other edge nodes for stored data;
and inquiring the acquired data index information by taking the target data as a search keyword, and determining an edge node corresponding to the data index information recorded with the target data as a target edge node.
16. The method according to any of claims 12 to 14, wherein determining target data for responding to a service request by:
after receiving a service request sent by a user terminal, analyzing the service request by utilizing a network front-end module in a server, and determining target data for responding to the service request.
17. The method according to claim 16, wherein, after the service request is analyzed by the network front-end module in the server and the target data for responding to the service request is determined, the method further comprises:
acquiring customized request information carried in the service request;
and processing the service request according to the customized request information.
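One plausible reading of claims 16 and 17 is that the network front-end module parses the request URL into a target-data identifier plus customized request information carried in the query string; a sketch under that assumption (the URL layout is hypothetical):

```python
from urllib.parse import urlsplit, parse_qs

def parse_service_request(request_url):
    """Split a service request URL into the target data identifier and
    any customized request information carried in the query string."""
    parts = urlsplit(request_url)
    target_data = parts.path.lstrip("/")   # e.g. "video/movie.mp4"
    customized = parse_qs(parts.query)     # e.g. {"rate": ["720p"]}
    return target_data, customized
```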
18. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the method according to any of claims 12-17.
19. A computer device comprising a processor, a memory and a computer program stored on the memory, characterized in that the processor implements the steps of the method according to any of claims 12-17 when the computer program is executed.
CN201911413095.9A 2019-12-31 2019-12-31 Request processing method and system and edge node Active CN113127414B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911413095.9A CN113127414B (en) 2019-12-31 2019-12-31 Request processing method and system and edge node

Publications (2)

Publication Number Publication Date
CN113127414A CN113127414A (en) 2021-07-16
CN113127414B true CN113127414B (en) 2023-05-23

Family

ID=76770311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911413095.9A Active CN113127414B (en) 2019-12-31 2019-12-31 Request processing method and system and edge node

Country Status (1)

Country Link
CN (1) CN113127414B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110392876A (en) * 2017-03-10 2019-10-29 净睿存储股份有限公司 Data set and other managed objects are synchronously copied into storage system based on cloud

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ATE381191T1 (en) * 2000-10-26 2007-12-15 Prismedia Networks Inc METHOD AND SYSTEM FOR MANAGING DISTRIBUTED CONTENT AND CORRESPONDING METADATA
CN102204218B (en) * 2011-05-31 2015-01-21 华为技术有限公司 Data processing method, buffer node, collaboration controller, and system
CN104243425B (en) * 2013-06-19 2018-09-04 深圳市腾讯计算机系统有限公司 A kind of method, apparatus and system carrying out Content Management in content distributing network
CN106790324B (en) * 2015-11-20 2020-06-16 华为技术有限公司 Content distribution method, virtual server management method, cloud platform and system
US10320906B2 (en) * 2016-04-29 2019-06-11 Netapp, Inc. Self-organizing storage system for asynchronous storage service
CN107483614B (en) * 2017-08-31 2021-01-22 京东方科技集团股份有限公司 Content scheduling method and communication network based on CDN (content delivery network) and P2P network
CN109871498B (en) * 2018-12-15 2024-04-02 中国平安人寿保险股份有限公司 Rear-end interface response method and device, electronic equipment and storage medium
CN110392094B (en) * 2019-06-03 2021-03-19 网宿科技股份有限公司 Method for acquiring service data and converged CDN system
CN110365747B (en) * 2019-06-24 2022-04-01 北京奇艺世纪科技有限公司 Network request processing method and device, server and computer readable storage medium
CN110336885B (en) * 2019-07-10 2022-04-01 深圳市网心科技有限公司 Edge node distribution method, device, scheduling server and storage medium


Similar Documents

Publication Publication Date Title
CN112218100B (en) Content distribution network, data processing method, device, equipment and storage medium
US10708350B2 (en) Method and system for content delivery of mobile terminal applications
US8068512B2 (en) Efficient utilization of cache servers in mobile communication system
US9503308B2 (en) Method, device and system for processing content
CN101841553B (en) Method, user node and server for requesting location information of resources on network
EP3970344B1 (en) Cache management in content delivery systems
US8984100B2 (en) Data downloading method, terminal, server, and system
US11757716B2 (en) Network management apparatus, method, and program
CN109873855A (en) A kind of resource acquiring method and system based on block chain network
CN102571942A (en) Method and device for pushing resource information and providing service in P2P (peer-to-peer) network
CN113127414B (en) Request processing method and system and edge node
US11606415B2 (en) Method, apparatus and system for processing an access request in a content delivery system
CN113132439B (en) Data processing method and system and edge node
CN113965519A (en) Flow control method, cluster resource guarantee method, equipment and storage medium
RU2522995C2 (en) Method and apparatus for creating peer-to-peer group in peer-to-peer application and method of using peer-to-peer group
CN114866553A (en) Data distribution method, equipment and storage medium
CN114338724A (en) Block synchronization method and device, electronic equipment and storage medium
CN114338714A (en) Block synchronization method and device, electronic equipment and storage medium
CN114422526A (en) Block synchronization method and device, electronic equipment and storage medium
CN114615333A (en) Resource access request processing method, device, equipment and medium
CN115174955B (en) Digital cinema nationwide high-speed distribution system based on future network
CN113746880A (en) Data transmission method, device, server and storage medium
CN115022177A (en) CDN system, back-to-source method, CDN node and storage medium
CN117082142A (en) Data packet caching method and device, electronic equipment and storage medium
CN116582328A (en) Network isolation device and method for transmitting data between network isolation systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40056799

Country of ref document: HK

GR01 Patent grant