CN115250294A - Data request processing method based on cloud distribution, and system, medium and equipment thereof - Google Patents

Data request processing method based on cloud distribution, and system, medium and equipment thereof Download PDF

Info

Publication number
CN115250294A
Authority
CN
China
Prior art keywords
node
retry
upstream node
upstream
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110449228.9A
Other languages
Chinese (zh)
Other versions
CN115250294B (en)
Inventor
许正达
吴志军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Baishancloud Technology Co Ltd
Original Assignee
Guizhou Baishancloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Baishancloud Technology Co Ltd filed Critical Guizhou Baishancloud Technology Co Ltd
Priority to CN202110449228.9A priority Critical patent/CN115250294B/en
Publication of CN115250294A publication Critical patent/CN115250294A/en
Application granted granted Critical
Publication of CN115250294B publication Critical patent/CN115250294B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The disclosure relates to a data request processing method based on cloud distribution, and a system, a medium and equipment thereof. The method comprises the following steps: receiving a data request sent by a downstream node; when the requested data does not exist locally, requesting data from an upstream node; and when a 5XX response status code is received, retrying to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or returning the 5XX response status code to the downstream node after the retries are exhausted. Because retries to upstream nodes follow the preset retry rule corresponding to the domain name of the data request, the back-to-source behavior of requesting data from upstream nodes can be matched and controlled.

Description

Data request processing method based on cloud distribution, and system, medium and equipment thereof
Technical Field
The present disclosure relates to the field of CDN technologies, and in particular, to a data request processing method based on cloud distribution, and a system, a medium, and a device thereof.
Background
With the growth of the Internet, ever more users generate massive data access, which increases the load on origin servers. When a node of the origin cannot serve for whatever reason, the CDN system must take timely measures and retry to the next node that can serve, so as to maintain service quality; likewise, when a parent node cannot serve, an edge node can retry to the next parent node, which improves the resource hit rate.
When a CDN cache node misses, the data requested by the client must be fetched from upstream, a suitable upstream must be selected, and a retry to the next upstream node is needed when the preferred upstream node cannot serve. In the prior art, when the upstream connection fails, the connection times out, the read times out, or status codes such as 502, 503 or 504 are returned, the number of retries to the next upstream node cannot be controlled, and the retry priority is disordered and uncontrollable.
Disclosure of Invention
In order to overcome the problems in the related art, the present disclosure provides a data request processing method based on cloud distribution, and a system, a medium, and a device thereof.
According to a first aspect of the embodiments of the present disclosure, there is provided a data request processing method based on cloud distribution, the method including:
receiving a data request sent by a downstream node;
when the requested data does not exist locally, requesting the data from an upstream node;
and when the 5XX response status code is received, retrying to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or returning the 5XX response status code to the downstream node after the retries are exhausted.
Wherein the retry rule comprises a retry priority and a retry number; the retrying to at least one upstream node according to a preset retrying rule corresponding to the domain name of the data request includes:
and determining an address list of the upstream node based on the retry priority, the retry times and the information of the upstream node, and sequentially retrying the upstream node based on the address list.
Wherein the retry rule is:
when the upstream node only comprises a source node, retrying to the source node based on the retrying times of the source node;
when the upstream node includes a parent node and a source node, performing retry to the parent node based on the retry number of the parent node, and when the retry number of the parent node is reached and the requested data is not obtained, performing retry to the source node based on the retry number of the source node.
Wherein the retry rule comprises a retry pattern of the upstream node, the retry pattern comprising at least any one of: default mode, hash mode, polling mode;
the retrying to at least one upstream node according to the preset retrying rule corresponding to the domain name of the data request includes:
when the retry mode of the upstream node is a default mode, acquiring a first sequence of the upstream node in the default mode, and requesting data resources from the upstream node in the default mode based on the first sequence;
when the retry mode of the upstream node is a hash mode, determining a second ordering of the upstream node of the hash mode based on the network address related to the data request, and requesting the data resource from the upstream node of the hash mode based on the second ordering;
when the retry mode of the upstream node is a polling mode, acquiring a group weight and a node weight of the upstream node in the polling mode, determining a third sequence of the upstream node in the polling mode based on the group weight and the node weight, and requesting a data resource from the upstream node in the polling mode based on the third sequence.
According to a second aspect of the embodiments of the present disclosure, there is provided a data request processing system based on cloud distribution, the system including:
the receiving module is used for receiving a data request sent by a downstream node;
the request module is used for requesting data from an upstream node when the requested data does not exist locally;
and the retry module is used for, when the 5XX response status code is received, retrying to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or returning the 5XX response status code to the downstream node after the retries are exhausted.
Wherein the retry rule comprises a retry priority and a retry number; the retry module is further to:
and determining an address list of the upstream node based on the retry priority, the retry times and the information of the upstream node, and sequentially retrying the upstream node based on the address list.
Wherein the retry rule is:
when the upstream node only comprises a source node, retrying to the source node based on the retrying times of the source node;
and when the upstream node includes a parent node and a source node, retrying to the parent node based on the retry number of the parent node, and when the retry number of the parent node is reached and the requested data is not obtained, retrying to the source node based on the retry number of the source node.
Wherein the retry rule comprises a retry pattern of the upstream node, the retry pattern comprising at least any one of: default mode, hash mode, polling mode;
the retry module is further to:
when the retry mode of the upstream node is a default mode, acquiring a first sequence of the upstream node in the default mode, and requesting data resources from the upstream node in the default mode based on the first sequence;
when the retry pattern of the upstream node is a hash pattern, determining a second ordering of the upstream node of the hash pattern based on a network address associated with the data request, and requesting data resources from the upstream node of the hash pattern based on the second ordering;
when the retry mode of the upstream node is a polling mode, acquiring a group weight and a node weight of the upstream node in the polling mode, determining a third sequence of the upstream node in the polling mode based on the group weight and the node weight, and requesting a data resource from the upstream node in the polling mode based on the third sequence.
According to a third aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed, implements the steps of the method as described above.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a computer device comprising a processor, a memory and a computer program stored on the memory, wherein the processor implements the steps of the method as described above when executing the computer program.
The present disclosure provides a data request processing method that is particularly useful in scenarios where the preferred upstream node is out of service and a subsequent upstream node must be retried. In the method, a CDN node receives a data request sent by a downstream node and requests data from an upstream node when the requested data is not stored locally; when a 5XX response status code is received, the CDN node retries to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or returns the 5XX response status code to the downstream node after the retries are exhausted. With this method, retries to upstream nodes follow the preset retry rule corresponding to the domain name of the data request, so the back-to-source behavior of requesting data from upstream nodes can be matched and controlled.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
Fig. 1 is a flowchart illustrating a data request processing method based on cloud distribution according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a data request processing method based on cloud distribution according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a data request processing method based on cloud distribution according to an exemplary embodiment.
Fig. 4 is a block diagram illustrating a cloud distribution-based data request processing system in accordance with an exemplary embodiment.
FIG. 5 is a block diagram illustrating a computer device according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings, in which the same numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present invention; rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
When the cache node of the CDN misses, it needs to obtain the data requested by the client back upstream, and when the preferred upstream node cannot serve, it needs to retry to the next upstream node.
The present disclosure provides a data request processing method that is particularly useful in scenarios where the preferred upstream node is out of service and a subsequent upstream node must be retried. In the method, a CDN node receives a data request sent by a downstream node and requests data from an upstream node when the requested data is not stored locally; when a 5XX response status code is received, the CDN node retries to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or returns the 5XX response status code to the downstream node after the retries are exhausted. With this method, retries to upstream nodes follow the preset retry rule corresponding to the domain name of the data request, so the back-to-source behavior of requesting data from upstream nodes can be matched and controlled.
In the present disclosure, an upstream node of a CDN node may be a parent node or an origin (source) node, and because an origin server may have multiple different IP addresses, an origin node may correspond to multiple different IP addresses.
Fig. 1 is a flowchart illustrating a data request processing method based on cloud distribution according to an exemplary embodiment, as shown in fig. 1, the method includes the following steps:
step 101, receiving a data request sent by a downstream node;
step 102, when the requested data does not exist locally, requesting data from an upstream node;
and step 103, when the 5XX response status code is received, retrying to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or returning the 5XX response status code to the downstream node after the retries are exhausted.
For the CDN cache system, the upstream node includes a parent node and a source node, and there may be a plurality of parent nodes and source nodes respectively.
Taking the current CDN node as an edge node as an example, the following is specifically described:
in steps 101 and 102, a current CDN node (e.g., an edge node) receives a data request sent by a downstream node (e.g., a client), and requests data from an upstream node (e.g., a parent node or an origin node of the CDN node) of the CDN node when the requested data is not locally stored.
In step 103, if the edge node receives a 5XX response status code, it indicates that the edge node failed to obtain the data from an upstream node for this request, for example because the connection to a parent node or source node failed, the connection timed out, the read timed out, or a 5XX response status code (e.g. 502, 503, 504; the status codes are configurable) was returned. The edge node then retries to at least one upstream node according to the preset retry rule corresponding to the domain name of the data request until the requested data is obtained; if the requested data is not obtained from any of the retried upstream nodes, a 5XX response status code is returned to the downstream node that sent the data request after the retries are exhausted.
By adopting this method, retries to upstream nodes follow the preset retry rule corresponding to the domain name of the data request, so the back-to-source behavior of requesting data from upstream nodes can be matched and controlled.
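As an illustration only, the following minimal Python sketch shows one possible shape of steps 101 to 103; the names local_cache, upstreams, fetch and RETRYABLE_STATUS are hypothetical and not taken from the patent, and connection failures or timeouts are assumed to be mapped to a 5XX status by the fetch helper.

```python
RETRYABLE_STATUS = {502, 503, 504}   # assumed configurable, as noted above

def handle_request(request_url, local_cache, upstreams, fetch):
    """Edge-node handling of a downstream data request (steps 101-103).

    upstreams: priority-ordered upstream addresses built from the preset retry
    rule of the request's domain name; fetch(node, url) -> (status, data), and
    connection failures / timeouts are assumed to be mapped to a 5XX status.
    """
    data = local_cache.get(request_url)
    if data is not None:                      # local hit: no back-to-source needed
        return 200, data

    last_status = 502
    for node in upstreams:                    # step 103: retry along the list
        status, data = fetch(node, request_url)
        if status not in RETRYABLE_STATUS:    # success, or a non-retryable code: stop
            if status == 200 and data is not None:
                local_cache[request_url] = data
            return status, data
        last_status = status                  # remember the 5XX for the downstream node
    return last_status, None                  # retries exhausted: return 5XX downstream
```

The sketch only captures the control flow described above; how the ordered upstream list is built from the retry priority, retry numbers and retry modes is discussed in the following embodiments.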
In an alternative embodiment, the retry rule includes a retry priority and a number of retries; the retrying to at least one upstream node according to a preset retrying rule corresponding to the domain name of the data request includes:
and determining an address list of the upstream node based on the retry priority, the retry times and the information of the upstream node, and sequentially retrying the upstream node based on the address list.
In this embodiment, a retry rule is preset at the edge node based on the acquired information of each upstream node. The rule may include, for example, a retry priority and a retry number, where the retry priority is the priority of retries to the upstream nodes and the retry number is the maximum number of retries to the upstream nodes.
When retrying to at least one upstream node, an address list of the upstream nodes is determined based on the retry priority, the retry number and the information of the upstream nodes, and the upstream nodes are retried in sequence based on the address list. For example, the retry order of the upstream nodes is determined based on the retry priority and the information of the upstream nodes of the edge node, and the maximum number of retries is determined based on the retry number, thereby determining the address list of the upstream nodes to be retried; the upstream nodes are then retried in sequence based on this list. That is, when an edge node needs to request data from upstream, all upstream nodes configured for the domain name of the data request automatically form a linked list arranged in priority order, and each time a request to an upstream node fails, a request is initiated to the next upstream node in the linked list. In other words, when requesting data from upstream, the parent nodes and source nodes are requested sequentially in their request order, while also honoring the retry numbers.
Specifically, the retry numbers may be set separately for the parent nodes and the source nodes, avoiding situations where the user waits too long or the parent nodes come under excessive pressure due to repeated unsuccessful retries.
By adopting this method, when data is requested from upstream, the numbers of requests back to the parent nodes and back to the source nodes can be controlled separately, which avoids resource waste caused by repeated retries, prevents the user from waiting too long for a response, and improves the user experience. In addition, requests can be made according to the set priority, so that the request order is controllable and the retry behavior is predictable.
In an alternative embodiment, the retry rule is:
when the upstream node only comprises a source node, retrying to the source node based on the retrying times of the source node;
when the upstream node includes a parent node and a source node, performing retry to the parent node based on the retry number of the parent node, and when the retry number of the parent node is reached and the requested data is not obtained, performing retry to the source node based on the retry number of the source node.
In one embodiment, the upstream nodes include only source nodes A, B and C, the linked list of the source nodes is A->B->C, and the retry number of the source nodes is determined to be 2. When the edge node receives the data request and the data resource is not hit locally, it requests the data resource from the upstream nodes (i.e. the source nodes). Data is first requested from node A; node A returns 5XX, indicating a request failure. The data request is then retried to node B, which also returns 5XX; since the source-node retry number has not been reached, the data request continues to retry to node C. Node C returns the requested data, the request ends, and the result is returned to the client.
In one embodiment, the upstream nodes include parent nodes and source nodes, and the edge node retries to the next upstream node based on the error condition. For example, after the current request reaches the edge node, since its parent nodes are configured as nodes A, B and C, the upstream-node linked list A->B->C is formed. Data is first requested from node A according to the priority; node A returns 5XX, indicating that the request failed, and the parent nodes still to be retried are nodes B and C. After the data request to node A fails, a request is initiated to node B according to the priority order; if node B returns 5XX, a request is initiated to node C. It should be noted that if node A returns success, or the returned status code is not 5XX (configurable), there is no need to initiate requests to nodes B and C: the request ends and the result is returned to the client.
For another example, the upstream nodes of the edge node include parent nodes A, B and C and source nodes D, E and F, with the parent retry number determined to be 1 and the source retry number unlimited. When the edge node receives the data request and the data resource is not hit locally, it needs to request the data resource from the upstream nodes. The request to node A fails (node A returns 5XX). The data request is then retried to node B; node B also returns 5XX, and the parent retry number is now reached (the first request, to upstream node A, does not count as a retry), so the data request is retried to the source nodes. Because the source retry number is unlimited, all source nodes may be retried before the data resource is finally missed. In this case, the path of the current node's requests to the upstream nodes may be, for example, A, B, D, E, F, or A, C, D, E, F.
It should be noted that when retrying to upstream nodes after a failed data request, nodes that have already been requested are not retried again. Therefore, in the discussion of request priorities below, the linked list of upstream nodes is traversed one way, from front to back.
To control the number of requests, the user can configure retry numbers for the parent nodes and the source nodes separately. When determining the retry numbers, the parent retry number may be set to a small value and the source retry number to a large value, so as to relieve the pressure on the parent nodes as much as possible while improving the success rate. Note that no matter how large the source retry number is set, the number of retries will not actually exceed the total number of current source nodes during a request.
When the parent retry number is reached and the requested data has not been obtained, the source nodes are retried based on the source retry number. Given the characteristics of a CDN, to reduce the burden on the source nodes and avoid excessive pressure on them, the parent nodes are tried preferentially; only when the number of requests to the parent nodes reaches the parent retry number and the data request is still unsuccessful are the source nodes requested.
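As an illustration only, the following Python sketch shows one way the ordered upstream address list could be built so that parent nodes are tried before source nodes and each portion is capped by its retry number; the function name build_retry_list and its parameters are hypothetical, not terms from the patent.

```python
def build_retry_list(parents, sources, parent_retries, source_retries):
    """parents / sources are assumed to be sorted by retry priority already.

    The first request to an upstream node does not count as a retry, so up to
    1 + parent_retries parent nodes are tried before falling back to sources.
    """
    if parents:
        return parents[: 1 + parent_retries] + sources[:source_retries]
    # Only source nodes are configured: initial try plus source retries.
    return sources[: 1 + source_retries]

# Example matching the description above: parents A, B, C with 1 retry and
# sources D, E, F with an "unlimited" retry number (capped by the node count)
# yield the request path A, B, D, E, F.
print(build_retry_list(["A", "B", "C"], ["D", "E", "F"],
                       parent_retries=1, source_retries=3))
```

The slicing also reflects the note above that the source retry number can never exceed the number of currently configured source nodes.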
In an alternative embodiment, the retry rule includes a retry pattern of the upstream node, the retry pattern including at least any one of: default mode, hash mode, polling mode;
the retrying to at least one upstream node according to a preset retrying rule corresponding to the domain name of the data request includes:
when the retry mode of the upstream node is a default mode, acquiring a first sequence of the upstream node in the default mode, and requesting data resources from the upstream node in the default mode based on the first sequence;
when the retry pattern of the upstream node is a hash pattern, determining a second ordering of the upstream node of the hash pattern based on a network address associated with the data request, and requesting data resources from the upstream node of the hash pattern based on the second ordering;
when the retry mode of the upstream node is a polling mode, acquiring a group weight and a node weight of the upstream node in the polling mode, determining a third sequence of the upstream node in the polling mode based on the group weight and the node weight, and requesting a data resource from the upstream node in the polling mode based on the third sequence.
In this embodiment, a retry mode matching the properties of each upstream node is configured in advance; the same retry mode may be configured for one node or for several nodes. In addition, a priority may be set for each retry mode to ensure that the desired nodes are accessed preferentially. Supporting multiple retry modes makes it possible to meet the requirements of different services, so that the retry behavior is accurate and controllable. The retry modes and their priorities can be configured according to the specific application scenario, taking into account the configuration of the node servers, the nodes that the user wishes to access preferentially, and other factors. The specific configuration can be implemented by those skilled in the art based on existing methods and is not described here again.
Specifically, when a retry is performed to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request, a retry mode of each upstream node and a priority of the related retry mode are obtained. The upstream node with high priority is requested preferentially, and the rule of the retry mode is followed when requested. The retry rules for several retry modes to which the present disclosure relates are described in detail below.
In an alternative embodiment, the retry mode includes: default mode, hash mode, polling mode.
The default mode may be denoted default; the hash modes may include a consistent hash mode (denoted hash) and a recursive hash mode (denoted rhash); the polling mode may be denoted roundrobin. The hash mode is divided into the consistent hash mode and the recursive hash mode according to how the hash value is calculated; the two differ only in the hash calculation, and their retry rules are the same.
Here, nodes with the same retry pattern may also be used with different priorities, thereby implementing a more accurate, personalized request policy.
The request rules for each mode are as follows:
Default mode: when there are one or more default-mode parent nodes (or source nodes), the request ordering of the default-mode upstream nodes is obtained based on their priorities, and data is then requested from the default-mode parent nodes (or source nodes) according to that ordering. Here, the priority of an upstream node may be determined according to the configuration of the upstream node and/or the requirements of the user.
Hash mode: this mode ensures that requests for the same network address (e.g., URL) are preferentially routed to the same parent node (or source node). Specifically, the priority of each hash-mode upstream node is calculated using the network address (URL) as a parameter, yielding the request ordering of the hash-mode upstream nodes; data is then requested from the hash-mode parent nodes (or source nodes) according to this ordering. The consistent hash mode and the recursive hash mode differ in the algorithm used to calculate the priority of the upstream nodes, so in a specific implementation the two hash modes may also be configured independently.
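As an illustration only, the following Python sketch shows one stand-in way to compute such a URL-keyed ordering. The patent does not specify the hash algorithm, so rendezvous (highest-random-weight) hashing over MD5 is assumed here, and the function name hash_mode_order is hypothetical.

```python
import hashlib

def hash_mode_order(url, nodes):
    """Rank nodes by a hash of (url, node); the highest score is tried first,
    so the same URL always yields the same upstream ordering."""
    def score(node):
        return int(hashlib.md5(f"{url}#{node}".encode()).hexdigest(), 16)
    return sorted(nodes, key=score, reverse=True)

# The same URL is always ordered the same way; different URLs may differ.
print(hash_mode_order("http://url1", ["A", "B"]))
print(hash_mode_order("http://url2", ["A", "B"]))
```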
Polling (round-robin) mode: each polling-mode parent node (or source node) is configured with a group weight and a node weight, with the group weight taking precedence over the node weight. That is, when selecting a parent node (or source node) to retry, nodes with a higher group weight are selected first; among nodes with the same group weight, the number of times each node is actually accessed first is set according to the ratio of their node weights. The request ordering of the polling-mode upstream nodes is determined on this principle, and data is then requested from the polling-mode parent nodes (or source nodes) according to that ordering.
In an exemplary embodiment, the source nodes are nodes A, B, C and D, all in polling mode, with (group weight, node weight) configured per source node as A(1, 1), B(1, 1), C(2, 1), D(2, 2). When data needs to be requested from the upstream nodes, the request paths derived from the group weights and node weights may, for example, be: first data request: D->C->A->B; second data request: D->C->B->A; third data request: C->D->A->B; fourth data request: D->C->B->A; fifth data request: D->C->A->B; sixth data request: C->D->B->A; seventh data request: D->C->A->B (looping back to the first data request). That is, because nodes C and D have group weight 2, they are always accessed before nodes A and B; and because the node weights of C and D are in the ratio 1:2, across all data requests the proportion in which C and D are accessed first is 1:2. The same principle governs how often nodes A and B are accessed first, and is not repeated here.
It should be noted that the priority order for requesting data from the upstream nodes is not tied to any particular request (e.g., the fourth); the example above only illustrates how the group weight and node weight influence that priority. In addition, if a request to some node hits the data, the subsequent nodes are not retried.
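As an illustration only, the following Python sketch realizes the stated behavior with smooth weighted round-robin as a stand-in scheduler; the patent specifies only the resulting precedence (group weight first, then a node-weight ratio such as 1:2), not this exact algorithm, and the names Node and polling_order are hypothetical. The exact interleaving of orderings may therefore differ from the example above while keeping the same ratios.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    group_weight: int
    node_weight: int
    current: int = 0          # smooth-WRR running counter

def polling_order(nodes):
    """Return this request's upstream ordering; call once per request so the
    weighted round-robin state advances between requests."""
    groups = defaultdict(list)
    for n in nodes:
        groups[n.group_weight].append(n)

    order = []
    for gw in sorted(groups, reverse=True):        # higher group weight goes first
        members = groups[gw]
        total = sum(n.node_weight for n in members)
        for n in members:                          # smooth weighted round-robin step
            n.current += n.node_weight
        ranked = sorted(members, key=lambda n: n.current, reverse=True)
        ranked[0].current -= total                 # penalize this request's winner
        order += [n.name for n in ranked]
    return order

nodes = [Node("A", 1, 1), Node("B", 1, 1), Node("C", 2, 1), Node("D", 2, 2)]
for i in range(6):
    print(i + 1, polling_order(nodes))   # C and D lead first in roughly a 1:2 ratio
```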
It can be seen from the above that, according to the characteristics of the upstream nodes and the requirements of the user's service, a suitable retry mode can be configured for each upstream node, so that the edge node's back-to-source retry behavior toward the upstream nodes can be controlled.
For example, if, when retrying to the source nodes, the same URL should always preferentially be fetched from the same source node, a hash mode may be configured, e.g. source node A: hash mode, source node B: hash mode. For http://url1, using url1 as the parameter, the hash calculation yields the upstream request priority B->A; for http://url2, using url2 as the parameter, the hash calculation yields the upstream request priority A->B.
For another example, if all URLs should access the source nodes in a polling fashion, the configuration may be source node A: roundrobin, source node B: roundrobin; with the group weights and node weights of nodes A and B equal, the retry paths over successive requests are A->B, B->A, A->B, and so on.
In the examples presented above, the retry modes of the upstream nodes are the same. In other embodiments, the retry modes of the upstream nodes may differ. In that case, the upstream nodes are first ordered by the priority of their retry modes, and the upstream nodes sharing the same retry mode are then ordered within each group. This is described in the specific embodiments below.
It should be noted that the configured retry number must be respected when data is requested from the upstream nodes in each mode. That is, once the retry number is reached, the lower-priority upstream nodes are not retried.
In addition, in the present disclosure, different return status codes are configured according to different reasons for data request failures. When data acquisition fails, various possible status codes are received. It may be determined whether to retry the following upstream node based on the received status code. For example, when the received status code is 5XX, it is determined to retry the next upstream node. In this way, the retry flexibility is improved, and the processing pressure of the source node is reduced.
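As an illustration only, the following sketch shows one hypothetical shape such a configuration could take; the FAILURE_STATUS mapping, RETRY_ON table and should_retry function are assumptions made for illustration and are not taken from the patent.

```python
# Hypothetical configuration: map failure reasons to status codes, and decide
# per domain which codes trigger a retry to the next upstream node.
FAILURE_STATUS = {
    "connect_failed": 502,
    "connect_timeout": 504,
    "read_timeout": 504,
}
RETRY_ON = {"example.com": {502, 503, 504}}    # configurable per domain name

def should_retry(domain, reason_or_status):
    """Map a failure reason to its configured status code, then check whether
    that code triggers a retry to the next upstream node for this domain."""
    status = FAILURE_STATUS.get(reason_or_status, reason_or_status)
    return status in RETRY_ON.get(domain, {502, 503, 504})

print(should_retry("example.com", "read_timeout"))   # True: retry next upstream
print(should_retry("example.com", 404))              # False: do not retry
```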
Specific embodiments according to the present disclosure are described below in conjunction with specific application scenarios. In this embodiment, the current node is configured with parent nodes A, B, C, source nodes D, E, F, G, H, I. The retry mode of the parent nodes A, B, C is the default mode, the retry mode of the source nodes D, E is the default mode, the retry mode of the source nodes F, G is the hash mode, the retry mode of the source nodes H, I is the polling mode, and the group weight and the node weight are H (2, 1), I (1, 1), respectively. As shown in fig. 2, in this embodiment, the data request processing method based on cloud distribution includes the following steps:
in step 201, the edge node receives a data request sent by the client, and requests data from upstream node A when it is determined that the requested data does not exist locally.
Step 202, when the 5XX response status code is received, determine that the priority linked list of upstream nodes to access is parent nodes A, B and C followed by source nodes D, E, F, G, H and I, and determine that the parent retry number is 2 and the source retry number is 6.
Step 203, obtaining retry modes of each upstream node, which include a default mode, a hash mode, and a polling mode, and sorting priorities of the retry modes from high to low as: default mode, hash mode, polling mode.
Step 204, obtain the priorities of parent nodes B and C, sorted from high to low as B, C.
Step 205, request data from parent node B; on a miss, retry to node C; on another miss, the parent retry number has been reached, so retry to the source nodes.
Step 206, obtain the priorities of source nodes D and E, sorted from high to low as D, E, and request data from source nodes D and E in turn; neither hits, and the source retry limit has not been reached.
Step 207, determine the priorities of source nodes F and G from high to low based on the URL of the data request as F, G, and request data from source nodes F and G in turn; neither hits, and the source retry limit has not been reached.
Step 208, determine the priorities of source nodes H and I from high to low based on the group weight and node weight as H, I; request data from source node H, and the data resource is hit.
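As an illustration only, the following Python sketch shows how the full priority chain of this embodiment could be assembled from the per-mode orderings given in steps 203 to 208; the mode priority list, the upstreams structure and the function build_chain are hypothetical names introduced for illustration.

```python
MODE_PRIORITY = ["default", "hash", "polling"]         # assumed, per step 203

upstreams = {
    "parents": {"default": ["B", "C"]},                # A already failed in step 201
    "sources": {"default": ["D", "E"],
                "hash":    ["F", "G"],                 # order derived from the URL
                "polling": ["H", "I"]},                # order from group/node weights
}
PARENT_RETRIES, SOURCE_RETRIES = 2, 6

def build_chain(upstreams):
    """Concatenate the per-mode orderings (higher-priority mode first) and cap
    the parent and source portions by their respective retry numbers."""
    parents = [n for m in MODE_PRIORITY for n in upstreams["parents"].get(m, [])]
    sources = [n for m in MODE_PRIORITY for n in upstreams["sources"].get(m, [])]
    return parents[:PARENT_RETRIES] + sources[:SOURCE_RETRIES]

print(build_chain(upstreams))   # ['B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
```

In the walkthrough above, the retries stop at source node H because the data resource is hit there, before the end of this chain is reached.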
In another specific embodiment, the retry modes of both the parent nodes and the source nodes may be configured to be the same retry mode, including but not limited to any one of the default mode, the hash mode or the polling mode. The following example assumes that the retry modes of both the parent nodes and the source nodes are the default mode, specifically as follows:
The current node is configured with parent nodes A, B and C and source nodes D, E, F, G, H and I. As shown in fig. 3, in this embodiment, the data request processing method based on cloud distribution includes the following steps:
step 301, the edge node receives a data request sent by the client, and requests data from upstream node A when it determines that the requested data does not exist locally.
Step 302, when the 5XX response status code is received, determine that the priority linked list of upstream nodes to access is parent nodes A, B and C followed by source nodes D, E, F, G, H and I, and determine that the parent retry number is 2 and the source retry number is 6.
Step 303, obtain the retry mode of each upstream node as the default mode, that is, the retry mode of parent nodes A, B and C is the default mode, and the retry mode of source nodes D, E, F, G, H and I is the default mode.
Step 304, obtain the priorities of parent nodes B and C, sorted from high to low as B, C.
Step 305, request data from parent node B; on a miss, retry to node C; on another miss, the parent retry number has been reached, so retry to the source nodes.
Step 306, obtain the priorities of source nodes D, E, F, G, H and I, sorted from high to low as D, E, F, G, H, I; request data from source node D, miss, continue to request data from source node E, and the data resource is hit.
It can be seen from the above embodiments that, when data is requested, the times of requests back to the parent node and back to the source node can be controlled respectively, and retries can be performed according to the set priority, so that the purpose of controlling the retry sequence is achieved, and resource waste caused by multiple retries is avoided.
The present disclosure also provides a data request processing system based on cloud distribution, as shown in fig. 4, the system includes:
a receiving module 401, configured to receive a data request sent by a downstream node;
a request module 402, configured to request data from an upstream node when the requested data does not exist locally;
and a retry module 403, configured to, when the 5XX response status code is received, retry to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or return the 5XX response status code to the downstream node after the retries are exhausted.
In an alternative embodiment, the retry rule includes a retry priority and a number of retries; the retry module is further to:
and determining an address list of the upstream node based on the retry priority, the retry times and the information of the upstream node, and sequentially retrying the upstream node based on the address list.
In an alternative embodiment, the retry rule is:
when the upstream node only comprises a source node, retrying to the source node based on the retrying times of the source node;
and when the upstream node includes a parent node and a source node, retrying to the parent node based on the retry number of the parent node, and when the retry number of the parent node is reached and the requested data is not obtained, retrying to the source node based on the retry number of the source node.
In an optional embodiment, the retry rule comprises a retry pattern of the upstream node, the retry pattern comprising at least any one of: default mode, hash mode, polling mode;
the retry module is further to:
when the retry mode of the upstream node is a default mode, acquiring a first sequence of the upstream node in the default mode, and requesting data resources from the upstream node in the default mode based on the first sequence;
when the retry pattern of the upstream node is a hash pattern, determining a second ordering of the upstream node of the hash pattern based on a network address associated with the data request, and requesting data resources from the upstream node of the hash pattern based on the second ordering;
when the retry mode of the upstream node is a polling mode, acquiring a group weight and a node weight of the upstream node in the polling mode, determining a third sequence of the upstream node in the polling mode based on the group weight and the node weight, and requesting a data resource from the upstream node in the polling mode based on the third sequence.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
The data request method provided by the disclosure has the following beneficial effects:
(1) When data is requested from an upstream node, the number of requests back to the parent nodes and the number of requests back to the source nodes can be controlled separately, avoiding resource waste caused by repeated retries.
(2) When the request is made, the request can be made according to the set priority, so that the purposes of controllable request sequence and predictable retry behaviors are achieved.
(3) A plurality of retry modes are set, so that the requirements of different services can be met, for example, the retry behavior can be accurately controlled through the polling mode.
(4) Whether subsequent upstream nodes are retried is determined according to the different status codes, which improves the retry flexibility and reduces the processing pressure on the upstream nodes.
The present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed, implements the steps of the above-described method.
The present disclosure also provides a computer device comprising a processor, a memory and a computer program stored on the memory, the steps of the above method being implemented when the processor executes the computer program.
Fig. 5 is a block diagram illustrating a cloud distribution-based data request processing device 500 according to an example embodiment. For example, the computer device 500 may be provided as a server. Referring to fig. 5, the computer device 500 includes a processor 501; the number of processors may be set to one or more as necessary. The computer device 500 further comprises a memory 502 for storing instructions, such as an application program, executable by the processor 501. The number of memories may likewise be set to one or more as needed, and one or more application programs may be stored therein. The processor 501 is configured to execute the instructions to perform the data request processing method described above.
As will be appreciated by one skilled in the art, the embodiments herein may be provided as a method, apparatus (device), or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied in the medium. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, including, but not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer, etc. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as is well known to those skilled in the art.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (devices) and computer program products according to embodiments herein. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the article or apparatus that comprises the element.
While the preferred embodiments herein have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of this disclosure.
It will be apparent to those skilled in the art that various changes and modifications may be made herein without departing from the spirit and scope thereof. Thus, it is intended that such changes and modifications be included herein, provided they come within the scope of the appended claims and their equivalents.

Claims (10)

1. A data request processing method based on cloud distribution is characterized by comprising the following steps:
receiving a data request sent by a downstream node;
when the requested data does not exist locally, requesting data from an upstream node;
and when the 5XX response status code is received, retrying to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or returning the 5XX response status code to the downstream node after the retries are exhausted.
2. The method of claim 1, wherein the retry rules include a retry priority and a number of retries; the retrying to at least one upstream node according to a preset retrying rule corresponding to the domain name of the data request includes:
and determining an address list of the upstream node based on the retry priority, the retry times and the information of the upstream node, and sequentially retrying the upstream node based on the address list.
3. The method of claim 2, wherein the retry rule is:
when the upstream node only comprises a source node, retrying to the source node based on the retrying times of the source node;
when the upstream node includes a parent node and a source node, performing retry to the parent node based on the retry number of the parent node, and when the retry number of the parent node is reached and the requested data is not obtained, performing retry to the source node based on the retry number of the source node.
4. The method of claim 1, wherein the retry rule comprises a retry pattern for the upstream node, the retry pattern comprising at least any one of: default mode, hash mode, polling mode;
the retrying to at least one upstream node according to a preset retrying rule corresponding to the domain name of the data request includes:
when the retry mode of the upstream node is a default mode, acquiring a first sequence of the upstream node in the default mode, and requesting data resources from the upstream node in the default mode based on the first sequence;
when the retry mode of the upstream node is a hash mode, determining a second ordering of the upstream node of the hash mode based on the network address related to the data request, and requesting the data resource from the upstream node of the hash mode based on the second ordering;
when the retry mode of the upstream node is a polling mode, acquiring a group weight and a node weight of the upstream node in the polling mode, determining a third ordering of the upstream node in the polling mode based on the group weight and the node weight, and requesting a data resource from the upstream node in the polling mode based on the third ordering.
5. A data request processing system based on cloud distribution, the system comprising:
the receiving module is used for receiving a data request sent by a downstream node;
the request module is used for requesting data from an upstream node when the requested data is not stored locally;
and the retry module is used for, when the 5XX response status code is received, retrying to at least one upstream node according to a preset retry rule corresponding to the domain name of the data request until the requested data is obtained, or returning the 5XX response status code to the downstream node after the retries are exhausted.
6. The system of claim 5, wherein the retry rules include a retry priority and a number of retries; the retry module is further to:
and determining an address list of the upstream node based on the retry priority, the retry times and the information of the upstream node, and sequentially retrying the upstream node based on the address list.
7. The system of claim 6, wherein the retry rule is:
when the upstream node only comprises a source node, retrying to the source node based on the retrying times of the source node;
and when the upstream node includes a parent node and a source node, retrying to the parent node based on the retry number of the parent node, and when the retry number of the parent node is reached and the requested data is not obtained, retrying to the source node based on the retry number of the source node.
8. The system of claim 5, wherein the retry rule comprises a retry pattern for the upstream node, the retry pattern comprising at least any one of: default mode, hash mode, polling mode;
the retry module is further to:
when the retry mode of the upstream node is a default mode, acquiring a first sequence of the upstream node in the default mode, and requesting data resources from the upstream node in the default mode based on the first sequence;
when the retry pattern of the upstream node is a hash pattern, determining a second ordering of the upstream node of the hash pattern based on a network address associated with the data request, and requesting data resources from the upstream node of the hash pattern based on the second ordering;
when the retry mode of the upstream node is a polling mode, acquiring a group weight and a node weight of the upstream node in the polling mode, determining a third ordering of the upstream node in the polling mode based on the group weight and the node weight, and requesting a data resource from the upstream node in the polling mode based on the third ordering.
9. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed, implements the steps of the method according to any one of claims 1-4.
10. A computer device comprising a processor, a memory and a computer program stored on the memory, characterized in that the steps of the method according to any of claims 1-4 are implemented when the computer program is executed by the processor.
CN202110449228.9A 2021-04-25 2021-04-25 Cloud distribution-based data request processing method and system, medium and equipment thereof Active CN115250294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110449228.9A CN115250294B (en) 2021-04-25 2021-04-25 Cloud distribution-based data request processing method and system, medium and equipment thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110449228.9A CN115250294B (en) 2021-04-25 2021-04-25 Cloud distribution-based data request processing method and system, medium and equipment thereof

Publications (2)

Publication Number Publication Date
CN115250294A true CN115250294A (en) 2022-10-28
CN115250294B CN115250294B (en) 2024-03-22

Family

ID=83697468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110449228.9A Active CN115250294B (en) 2021-04-25 2021-04-25 Cloud distribution-based data request processing method and system, medium and equipment thereof

Country Status (1)

Country Link
CN (1) CN115250294B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1633106A (en) * 2004-12-16 2005-06-29 中国科学院计算技术研究所 A transmitter-oriented resource reservation implementing method having backtracking ability
WO2012167106A1 (en) * 2011-06-01 2012-12-06 Interdigital Patent Holdings, Inc. Content delivery network interconnection (cdni) mechanism
CN102868935A (en) * 2012-08-24 2013-01-09 乐视网信息技术(北京)股份有限公司 Scheduling method for responding multiple sources in content distribution network (CDN)
CN103237031A (en) * 2013-04-26 2013-08-07 网宿科技股份有限公司 Method and device for orderly backing to source in content distribution network
CN108737532A (en) * 2018-05-11 2018-11-02 北京大米科技有限公司 A kind of resource acquiring method, client, computer equipment and readable medium
WO2019148568A1 (en) * 2018-02-02 2019-08-08 网宿科技股份有限公司 Method and system for sending request for acquiring data resource
CN110912926A (en) * 2019-12-04 2020-03-24 湖南快乐阳光互动娱乐传媒有限公司 Data resource back-source method and device
CN111181782A (en) * 2019-12-24 2020-05-19 新浪网技术(中国)有限公司 Return source processing method and device
CN111464649A (en) * 2017-04-19 2020-07-28 贵州白山云科技股份有限公司 Access request source returning method and device
CN112153160A (en) * 2020-09-30 2020-12-29 北京金山云网络技术有限公司 Access request processing method and device and electronic equipment

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1633106A (en) * 2004-12-16 2005-06-29 中国科学院计算技术研究所 A transmitter-oriented resource reservation implementing method having backtracking ability
WO2012167106A1 (en) * 2011-06-01 2012-12-06 Interdigital Patent Holdings, Inc. Content delivery network interconnection (cdni) mechanism
CN102868935A (en) * 2012-08-24 2013-01-09 乐视网信息技术(北京)股份有限公司 Scheduling method for responding multiple sources in content distribution network (CDN)
CN103237031A (en) * 2013-04-26 2013-08-07 网宿科技股份有限公司 Method and device for orderly backing to source in content distribution network
CN111464649A (en) * 2017-04-19 2020-07-28 贵州白山云科技股份有限公司 Access request source returning method and device
WO2019148568A1 (en) * 2018-02-02 2019-08-08 网宿科技股份有限公司 Method and system for sending request for acquiring data resource
CN108737532A (en) * 2018-05-11 2018-11-02 北京大米科技有限公司 A kind of resource acquiring method, client, computer equipment and readable medium
CN110912926A (en) * 2019-12-04 2020-03-24 湖南快乐阳光互动娱乐传媒有限公司 Data resource back-source method and device
CN111181782A (en) * 2019-12-24 2020-05-19 新浪网技术(中国)有限公司 Return source processing method and device
CN112153160A (en) * 2020-09-30 2020-12-29 北京金山云网络技术有限公司 Access request processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN115250294B (en) 2024-03-22

Similar Documents

Publication Publication Date Title
CN105099988B (en) Method, access method and device and system for supporting gray scale to issue
US20210014778A1 (en) Upf selection method and device
JP6018182B2 (en) Send category information
CN108173774B (en) Client upgrading method and system
CN110929202B (en) Page request failure processing method and device and computer equipment
US8938495B2 (en) Remote management system with adaptive session management mechanism
CN106453460B (en) File distribution method, device and system
CN110781083B (en) H5 client code setting multi-environment testing method and system
US11902352B2 (en) HttpDNS scheduling method, apparatus, medium and device
JP2019506764A (en) System and method for obtaining, processing and updating global information
CN103380634A (en) Methods and apparatus for transmitting data
CN109783564A (en) Support the distributed caching method and equipment of multinode
CN108347479B (en) Multi-warehouse static resource uploading method and system based on content distribution network
CN109873855A (en) A kind of resource acquiring method and system based on block chain network
CN106294627A (en) Data managing method and data server
CN110943876B (en) URL state detection method, device, equipment and system
CN112751926B (en) Management method, system and related device for working nodes in cluster
CN115250294B (en) Cloud distribution-based data request processing method and system, medium and equipment thereof
US20150095496A1 (en) System, method and medium for information processing
CN115706741A (en) Method and device for returning slice file
CN111698281B (en) Resource downloading method and device, electronic equipment and storage medium
CN108243229B (en) Request processing method and device
JP6100384B2 (en) Information processing system, server device, information processing method, and program
CN111866197B (en) Domain name resolution method and system
CN108124021A (en) Internet protocol IP address obtains, the method, apparatus and system of website visiting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant