CN113992760B - Method, device, equipment and storage medium for scheduling back source traffic - Google Patents


Info

Publication number: CN113992760B
Application number: CN202111237433.5A
Authority: CN (China)
Other versions: CN113992760A (in Chinese)
Prior art keywords: node, source, back source, code, request
Inventors: 汪晨飞, 单腾飞, 高俊文
Assignee: Beijing Baidu Netcom Science and Technology Co Ltd (applicant and current assignee)
Legal status: Active (application granted)

Abstract

The disclosure provides a method, an apparatus, a device, a storage medium, and a program product for scheduling back-source traffic, relating to the technical field of cloud computing and in particular to the technical field of content delivery networks. The specific implementation scheme is as follows: in response to receiving a back-source request, compute a first code corresponding to the back-source request; for each back-source node of a plurality of back-source nodes, compute a third code according to the first code and the second code corresponding to that back-source node; determine a target back-source node among the plurality of back-source nodes according to the third code and the weight of each back-source node; and assign the back-source request to the target back-source node.

Description

Method, device, equipment and storage medium for scheduling back source traffic
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to the field of content distribution network technologies.
Background
A CDN (Content Delivery Network) is an intelligent virtual network built on top of the existing network, comprising node servers deployed throughout the network. Based on comprehensive information such as network traffic, the connectivity and load of each node, the distance to the user, and response time, the CDN system schedules a user's request to a node server close to that user, so that the user obtains the required content nearby, network congestion is reduced, and the response speed and hit rate of website access are improved.
Disclosure of Invention
The disclosure provides a method, a device, equipment, a storage medium and a program product for scheduling back source traffic.
According to an aspect of the present disclosure, there is provided a method for scheduling back-source traffic, including: in response to receiving a back-source request, computing a first code corresponding to the back-source request; for each back-source node of a plurality of back-source nodes, computing a third code according to the first code and the second code corresponding to that back-source node; determining a target back-source node among the plurality of back-source nodes according to the third code and the weight of each back-source node; and allocating the back-source request to the target back-source node.
According to another aspect of the present disclosure, there is provided an apparatus for scheduling back-source traffic, including: a first computing module, configured to compute, in response to receiving a back-source request, a first code corresponding to the back-source request; a second computing module, configured to compute, for each back-source node of a plurality of back-source nodes, a third code according to the first code and the second code corresponding to that back-source node; a node determining module, configured to determine a target back-source node among the plurality of back-source nodes according to the third code and the weight of each back-source node; and an allocation module, configured to allocate the back-source request to the target back-source node.
Another aspect of the present disclosure provides an electronic device, comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the methods shown in the embodiments of the present disclosure.
According to another aspect of the disclosed embodiments, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the methods shown in the disclosed embodiments.
According to another aspect of the disclosed embodiments, there is provided a computer program product comprising a computer program/instruction, characterized in that the computer program/instruction, when executed by a processor, implements the steps of the method shown in the disclosed embodiments.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is an application scenario schematic diagram of a method, an apparatus, an electronic device, and a storage medium for scheduling back source traffic according to an embodiment of the disclosure;
FIG. 2 schematically illustrates a flow chart of a method of scheduling back source traffic in accordance with an embodiment of the present disclosure;
fig. 3 schematically illustrates a schematic diagram of a scheduling method of back source traffic according to an embodiment of the present disclosure;
fig. 4 schematically illustrates a flowchart of a method of scheduling back source traffic according to another embodiment of the present disclosure;
fig. 5 schematically illustrates a block diagram of a scheduling apparatus of back source traffic according to an embodiment of the disclosure; and
FIG. 6 schematically illustrates a block diagram of an example electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
An application scenario of the method and apparatus provided by the present disclosure will be described below with reference to fig. 1.
Fig. 1 is an application scenario schematic diagram of a method, an apparatus, an electronic device, and a storage medium for scheduling back source traffic according to an embodiment of the disclosure.
As shown in fig. 1, the application scenario 100 includes end devices 111, 112, 113 and a CDN service cluster 120. Wherein CDN service cluster 120 may include a plurality of nodes 121, 122, 123, and 124.
The user may interact with the CDN service cluster 120 over a network using the terminal devices 111, 112, 113 to request acquisition of data, etc. Various communication client applications, such as shopping class applications, web browser applications, search class applications, instant messaging tools, mailbox clients, social platform software, etc. (by way of example only) may be installed on the terminal devices 111, 112, 113.
The terminal devices 111, 112, 113 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
According to an embodiment of the present disclosure, when a user wants to acquire a certain data resource, the user may send an acquisition request of the data resource to the CDN service cluster 120 through the terminal device 111, 112, or 113. The node 121 in the CDN service cluster 120 may receive the fetch request. If the node 121 does not have the data resource stored locally, the node 121 may perform a source-back operation to acquire the data resource. For example, one of the nodes 122, 123, and 124 of the upper layer of the node 121 may be selected, and then a back source request for the data resource may be sent to the selected node. When nodes 122, 123, and 124 receive the back-source request, a determination may be made as to whether the data resource is stored locally. If so, the data resource is returned to node 121. If not, the data resource is requested from a further upper level node or source station and returned to node 121. After the node 121 acquires the data resource, the acquired data resource is sent to the terminal device used by the user.
It should be understood that the number of end devices and CDN service clusters in fig. 1, as well as the number of nodes in a CDN service cluster, are merely illustrative. There may be any number of terminal devices, CDN service clusters, and nodes, as desired for implementation.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of the data involved (such as data resources and back-source requests) all comply with the relevant laws and regulations and do not violate public order.
Fig. 2 schematically illustrates a flowchart of a method of scheduling back-source traffic according to an embodiment of the present disclosure.
As shown in fig. 2, the method 200 includes calculating a first code corresponding to a back-source request in response to receiving the back-source request in operation S210.
In accordance with embodiments of the present disclosure, a back-source request may be used to request that a back-source operation be performed. Different back-source requests correspond to different first codes. The first code may, for example, comprise a multi-bit number.
According to embodiments of the present disclosure, the first code may be calculated based on the request identifier of the back-source request, for example using a hash algorithm. The request identifier of the back-source request may include the request URI (Uniform Resource Identifier), etc.
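The patent does not pin down a specific hash function for this step; the sketch below (an illustrative assumption, not the patented implementation) uses blake2b with an 8-byte digest to map a request URI to an unsigned 64-bit first code:

```python
import hashlib

def first_code(request_uri: str) -> int:
    """Map a back-source request's identifier (e.g. its URI) to an
    unsigned 64-bit integer. The concrete hash (blake2b here) is an
    assumption; the patent only requires a stable hash with
    distinct outputs for distinct requests."""
    digest = hashlib.blake2b(request_uri.encode("utf-8"), digest_size=8).digest()
    return int.from_bytes(digest, "big")

# The same request identifier always yields the same first code.
a = first_code("/videos/cat.mp4")
```

Any stable 64-bit hash would serve equally well here; determinism is the property the scheme relies on.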
Then, in operation S220, for each of the plurality of back source nodes, a third code is calculated from the first code and the second code corresponding to each back source node.
The back source node may comprise, for example, a node in a CDN service cluster, according to an embodiment of the present disclosure.
According to embodiments of the present disclosure, the second code corresponding to each back-source node may be calculated in advance, and different back-source nodes correspond to different second codes. The second code may, for example, comprise a multi-bit number. For example, a hash algorithm may be used to calculate the second code of a back-source node based on the node identifier of that node. The node identifier of a back-source node may include, for example, the node's IP address, node name, and the like.
According to embodiments of the present disclosure, a pseudo-random data distribution algorithm may be utilized, for example, to determine a third code in a mapping space of the pseudo-random data distribution algorithm that corresponds to the first code and the second code. Wherein a pseudo-random data distribution algorithm may be used to map two original values to a certain value within the mapping space.
In operation S230, a target back source node of the plurality of back source nodes is determined according to the third code and the weight of each back source node.
According to embodiments of the present disclosure, the selection coefficient of each back-source node may be determined, for example, according to the third code and the weight of each back-source node. A target back-source node is then determined among the plurality of back-source nodes according to the selection coefficients. The weight of a node can be set according to actual needs; for example, it may be set according to the node's service capacity, with greater capacity corresponding to a larger weight.
In operation S240, the back source request is allocated to the target back source node.
According to the embodiment of the disclosure, by allocating the back-source request to the target back-source node, the target back-source node can be caused to perform the back-source operation.
According to the embodiment of the disclosure, for the same back-source request and the same back-source node, the same third code is calculated, and thus the resulting selection coefficient is also the same. Therefore, selecting the back-source node for a back-source request according to the selection coefficient ensures that the same back-source request is always served by the same back-source node, which improves the back-source hit rate.
A method of determining the selection coefficient of the back source node is described below.
According to embodiments of the present disclosure, the selection coefficient of the back source node may be calculated, for example, according to the following formula:
S=ln(C/m)/W
wherein S is a selection coefficient of the back source node, C is a third code corresponding to the back source node, m is the total number of values in the mapping space of the pseudo-random data distribution algorithm, and W is the weight of the back source node.
According to the embodiment of the disclosure, in this formula C/m is less than 1, so ln(C/m) is negative, and dividing by a larger weight W yields a value closer to zero, i.e., a larger selection coefficient S. Accordingly, the back-source node with the largest selection coefficient among all the back-source nodes can be determined to be the target node, so that when traffic is allocated according to the selection coefficients, the traffic is distributed according to the weights.
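A minimal sketch of this selection rule follows. The node names, weights, and third-code values are hypothetical, and the code shifts C by 1 so that the argument of ln stays in (0, 1] even when C = 0, a guard the text leaves implicit:

```python
import math

M = 65536  # total number of values in the mapping space (0..65535)

def selection_coefficient(c: int, weight: float) -> float:
    """S = ln(C/m)/W, with C shifted by 1 so that ln never sees 0."""
    return math.log((c + 1) / M) / weight

def pick_target(third_codes: dict, weights: dict) -> str:
    """Return the back-source node whose selection coefficient is largest."""
    return max(third_codes,
               key=lambda n: selection_coefficient(third_codes[n], weights[n]))

# Hypothetical third codes and weights for three back-source nodes.
codes = {"node-1": 1200, "node-2": 60000, "node-3": 33000}
weights = {"node-1": 1.0, "node-2": 2.0, "node-3": 3.0}
target = pick_target(codes, weights)
```

Here node-2 wins: its third code is close to the top of the mapping space, so ln(C/m) is close to zero, and dividing by its weight keeps S largest.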
The back-source traffic scheduling method shown above is further described with reference to fig. 3 in connection with a specific embodiment. Those skilled in the art will appreciate that the following example embodiment is merely intended to aid understanding of the present disclosure, and the disclosure is not limited thereto.
Fig. 3 schematically illustrates a schematic diagram of a scheduling method of back source traffic according to an embodiment of the present disclosure.
As shown in fig. 3, in case of receiving a back source request, a request identification of the back source request may be acquired according to an embodiment of the present disclosure. The first encoding is then calculated based on the request identification using a hash algorithm.
Illustratively, in this embodiment, a hash algorithm may be used to calculate an unsigned 64-bit number A based on the request identifier, as the first code.
In this embodiment, the back-source request corresponds to N candidate back-source nodes. The node identifier of each of the N back-source nodes may be obtained. A second code is then calculated for each node based on its node identifier using a hash algorithm.
Illustratively, in this embodiment, for the N back-source nodes, N unsigned 64-bit numbers B1, B2, B3, …, BN can be calculated based on the node identifier of each back-source node using a hash algorithm, respectively as the second code of each back-source node.
After the first code and the second code are obtained, a third code corresponding to the node may be calculated based on the first code and the second code of the node using a pseudo-random data distribution algorithm for each node.
Illustratively, in this embodiment, the pseudo-random data distribution algorithm may include, for example, the CRUSH algorithm. The CRUSH algorithm may be implemented, for example, by a crush(x, y) function, whose inputs are two unsigned 64-bit integers x and y, whose mapping space is 0–65535, and whose output is an integer C in the range 0–65535. By inputting the first code and the second code of each back-source node into the crush function, that is, by calculating crush(A, B1), crush(A, B2), …, crush(A, BN), the values C1, C2, …, CN can be obtained, respectively, as the third code corresponding to each back-source node.
Next, for each node, a selection coefficient of the back source node may be determined according to the third encoding and the weight of the back source node. And then determining a target back source node in the plurality of back source nodes according to the selection coefficient of each back source node. For example, the size of the selection coefficient of each back source node may be compared, and the back source node with the largest selection coefficient may be determined as the target back source node.
According to embodiments of the present disclosure, the selection coefficient of the back source node may be calculated, for example, according to the following formula:
S=ln(C/65536)/W
wherein S is the selection coefficient of the back-source node, C is the third code corresponding to the back-source node, 65536 is the total number of values in the mapping space of the CRUSH algorithm, and W is the weight of the back-source node.
On this basis, S1 = ln(C1/65536)/W1, S2 = ln(C2/65536)/W2, …, SN = ln(CN/65536)/WN can be computed, where W1, W2, …, WN are the configured weights of the 1st, 2nd, …, Nth back-source nodes, respectively.
Then, the sizes of S1 to SN can be compared. If the i-th selection coefficient Si = ln(Ci/65536)/Wi is the largest, the back-source request is allocated to the i-th back-source node.
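The steps above can be sketched end to end. The patent does not disclose the internals of crush(x, y), so the version below substitutes a hash-based stand-in that merely satisfies the stated interface (two unsigned 64-bit inputs, one output in 0–65535); node identifiers and weights are hypothetical:

```python
import hashlib
import math

M = 65536

def h64(data: bytes) -> int:
    """Unsigned 64-bit hash (blake2b here, an illustrative choice)."""
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def crush(x: int, y: int) -> int:
    """Stand-in for the crush(x, y) mapping: two unsigned 64-bit inputs,
    one pseudo-random output in 0..65535. Not the real CRUSH internals."""
    return h64(x.to_bytes(8, "big") + y.to_bytes(8, "big")) % M

def schedule(request_uri: str, nodes: dict) -> str:
    """nodes maps node identifier -> weight. Returns the target node."""
    a = h64(request_uri.encode())                      # first code A
    best_node, best_s = None, -math.inf
    for node_id, weight in nodes.items():
        b = h64(node_id.encode())                      # second code Bi (precomputable)
        c = crush(a, b)                                # third code Ci
        s = math.log((c + 1) / M) / weight             # selection coefficient Si
        if s > best_s:
            best_node, best_s = node_id, s
    return best_node

# Hypothetical back-source nodes keyed by IP, with configured weights.
nodes = {"10.0.0.1": 1.0, "10.0.0.2": 2.0, "10.0.0.3": 1.0}
target = schedule("/videos/cat.mp4", nodes)
```

Because every step is a deterministic function of the request identifier and node identifiers, repeating the call with the same inputs always selects the same node.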
According to embodiments of the present disclosure, the output of the crush function may be considered approximately uniformly distributed over 0–65535; that is, the third code C calculated by the crush function may approximately be treated as a uniform random variable on 0–65535. Correspondingly, by the calculation formula S = ln(C/65536)/W of the selection coefficient, S is approximately the negative of a random variable obeying an exponential distribution whose rate parameter is the weight W. It can further be deduced that the probability of the back-source request being allocated to the i-th back-source node is proportional to the weight of the i-th back-source node; that is, when traffic is allocated according to the selection coefficients, the traffic allocation follows the weights in expectation.
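This weight-proportionality claim can be checked empirically with a self-contained simulation (using a hash-based stand-in for the unspecified crush internals, and hypothetical node names and weights): over many distinct request URIs, each node's share of requests should approach weight / total weight.

```python
import hashlib
import math
from collections import Counter

M = 65536

def h64(data: bytes) -> int:
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def schedule(request_uri: str, nodes: dict) -> str:
    a = h64(request_uri.encode())
    def s(node_id: str, weight: float) -> float:
        b = h64(node_id.encode())
        c = h64(a.to_bytes(8, "big") + b.to_bytes(8, "big")) % M  # stand-in crush
        return math.log((c + 1) / M) / weight
    return max(nodes, key=lambda n: s(n, nodes[n]))

# Weights 1:2:3 -> expected traffic shares 1/6, 2/6, 3/6.
nodes = {"n1": 1.0, "n2": 2.0, "n3": 3.0}
counts = Counter(schedule(f"/object/{i}", nodes) for i in range(30000))
shares = {n: counts[n] / 30000 for n in nodes}
```

With 30,000 simulated requests the observed shares land within a few tenths of a percent of the weight ratios, matching the exponential-distribution argument above.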
According to the embodiment of the disclosure, the mapping logic between the back-source request and the first code and the mapping logic between the back-source node and the second code ensure that the same back-source request is always served by the same node, thereby improving the back-source hit rate. In addition, the computation logic of the back-source traffic scheduling method according to the embodiments of the present disclosure is simple, and no complex data structure needs to be maintained.
Fig. 4 schematically illustrates a flowchart of a method of scheduling back source traffic according to another embodiment of the present disclosure.
As shown in fig. 4, the method 400 includes calculating a first code corresponding to a back-source request in response to receiving the back-source request in operation S410.
Then, in operation S420, it is determined whether the back-source nodes have changed. If the back-source nodes are unchanged, operation S430 is performed. If the back-source nodes have changed, operation S440 is performed.
In operation S430, a second code corresponding to each of the plurality of back source nodes is acquired. Then, operation S460 is performed.
In operation S440, node identifiers of the changed plurality of back source nodes are acquired.
In operation S450, a second code corresponding to each of the plurality of back source nodes is calculated based on the node identification of each back source node using a hash algorithm.
In operation S460, for each of the plurality of back source nodes, a third code is calculated from the first code and the second code corresponding to each back source node.
In operation S470, a target back source node of the plurality of back source nodes is determined according to the third code and the weight of each back source node.
In operation S480, the back source request is assigned to the target back source node.
According to the embodiments of the present disclosure, when operations such as bringing a new machine online, taking an old machine offline, or scaling a machine up or down occur in the CDN service cluster, the set of back-source nodes may change, and the corresponding node identifiers may also change.
According to the back-source traffic scheduling method of the embodiments of the present disclosure, for a given back-source request: when, after the weight of the i-th node changes, the recalculated Si becomes larger than the maximum of the S values computed before the change, the request is reassigned to the node whose weight changed, and the corresponding traffic migrates to that node. When Si was the maximum of the S values before the change but is no longer the maximum after the change, the request is reassigned from the i-th node to another node, and the corresponding traffic migrates away from the node whose weight changed. Since the request identifier, the node identifiers, and the weights are fixed for a given back-source request, the selection coefficient S computed for each pair of request identifier and node identifier is also fixed. When the weight of node i changes, only the Si value of that one node changes; the S values of the other nodes do not change, and neither does their relative order. Therefore, no traffic shifts between nodes whose weights are unchanged; traffic shifts only between the weight-changed node and the other nodes. This reduces the amount of traffic migration caused by remapping requests when back-source node resources change, and thereby reduces back-source bandwidth cost.
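This stability property — traffic shifts only between the weight-changed node and the rest — can be demonstrated with a self-contained sketch (hash-based stand-in for crush, hypothetical node names and weights):

```python
import hashlib
import math

M = 65536

def h64(data: bytes) -> int:
    return int.from_bytes(hashlib.blake2b(data, digest_size=8).digest(), "big")

def schedule(request_uri: str, nodes: dict) -> str:
    a = h64(request_uri.encode())
    def s(node_id: str, weight: float) -> float:
        b = h64(node_id.encode())
        c = h64(a.to_bytes(8, "big") + b.to_bytes(8, "big")) % M  # stand-in crush
        return math.log((c + 1) / M) / weight
    return max(nodes, key=lambda n: s(n, nodes[n]))

before = {"n1": 1.0, "n2": 2.0, "n3": 3.0}
after = {"n1": 1.0, "n2": 2.0, "n3": 1.0}   # only n3's weight changed

# Record every request whose assignment changes after the weight update.
moved = []
for i in range(5000):
    uri = f"/object/{i}"
    old, new = schedule(uri, before), schedule(uri, after)
    if old != new:
        moved.append((old, new))
```

Because only S3 changes, every reassigned request has n3 as either its old or new node; requests assigned to n1 or n2 stay where they were.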
The following describes a scheduling apparatus for back-source traffic in an embodiment of the present disclosure with reference to fig. 5.
Fig. 5 schematically illustrates a block diagram of a scheduling apparatus for back source traffic according to an embodiment of the present disclosure.
As shown in fig. 5, the scheduling apparatus 500 for back-source traffic includes a first calculation module 510, a second calculation module 520, a node determination module 530, and an allocation module 540.
The first calculating module 510 is configured to calculate, in response to receiving the back-source request, a first code corresponding to the back-source request.
The second calculating module 520 is configured to calculate, for each of the plurality of back source nodes, a third code according to the first code and the second code corresponding to each back source node.
The node determining module 530 is configured to determine a target back source node of the plurality of back source nodes according to the third code and the weight of each back source node.
An allocation module 540, configured to allocate the back source request to the target back source node.
According to an embodiment of the present disclosure, the first computing module includes a request identifier acquisition sub-module and a first computing sub-module. The request identifier acquisition sub-module is configured to acquire the request identifier of the back-source request. The first computing sub-module is configured to compute the first code based on the request identifier using a hash algorithm.
According to an embodiment of the disclosure, the apparatus further includes a first acquisition module and a third calculation module. The first acquisition module is used for acquiring the node identification of each source node. And a third calculation module for calculating a second code based on the node identification using a hash algorithm.
According to an embodiment of the present disclosure, the second calculation module includes a second calculation sub-module for determining a third code corresponding to the first code and the second code in a mapping space of the pseudo-random data distribution algorithm using the pseudo-random data distribution algorithm.
According to an embodiment of the present disclosure, the node determination module includes a selection coefficient determination sub-module and a target node determination sub-module. The selection coefficient determining submodule is used for determining the selection coefficient of each back source node according to the third code and the weight of each back source node. And the target node determining submodule is used for determining target back source nodes in the plurality of back source nodes according to the selection coefficient of each back source node.
According to an embodiment of the present disclosure, the selection coefficient determination submodule includes a fourth calculation unit configured to calculate, for each back source node, a selection coefficient of the back source node according to the following formula:
S=ln(C/m)/W
wherein S is a selection coefficient of the back source node, C is a third code corresponding to the back source node, m is the total number of values in the mapping space of the pseudo-random data distribution algorithm, and W is the weight of the back source node.
According to embodiments of the present disclosure, the pseudo random data distribution algorithm may include a CRUSH algorithm.
According to an embodiment of the disclosure, the apparatus further includes a second acquisition module and a fifth calculation module. The second acquisition module is configured to acquire, when the back-source nodes change, the node identifiers of the changed plurality of back-source nodes. The fifth calculation module is configured to calculate, using a hash algorithm, the second code corresponding to each back-source node based on the node identifier of that back-source node.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 6 schematically illustrates a block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM) 602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 may also be stored. The computing unit 601, ROM 602, and RAM 603 are connected to each other by a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Various components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, mouse, etc.; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 601 performs the various methods and processes described above, such as the back-source traffic scheduling method. For example, in some embodiments, the back-source traffic scheduling method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the back-source traffic scheduling method described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the back-source traffic scheduling method in any other suitable way (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and VPS (Virtual Private Server) services. The server may also be a server of a distributed system or a server that incorporates a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (14)

1. A scheduling method of back source traffic includes:
in response to receiving a back-source request, computing a first code corresponding to the back-source request;
for each back source node in a plurality of back source nodes, determining, by using a pseudo-random data distribution algorithm, a third code corresponding to the first code and a second code of the back source node in a mapping space of the pseudo-random data distribution algorithm, wherein the second code is calculated based on a node identification of the back source node;
determining a selection coefficient of each back source node according to the third code and the weight of each back source node;
determining a target back source node in the plurality of back source nodes according to the selection coefficient of each back source node; and
distributing the back source request to the target back source node.
2. The method of claim 1, wherein the computing a first code corresponding to the back source request comprises:
acquiring a request identifier of the back source request; and
calculating the first code based on the request identification by using a hash algorithm.
3. The method of claim 1, further comprising:
acquiring a node identifier of each back source node; and
calculating the second code based on the node identification by using a hash algorithm.
4. The method of claim 1, wherein determining the selection coefficient of each back source node according to the third code and the weight of each back source node comprises:
for each back source node, calculating a selection coefficient of the back source node according to the following formula:
S = ln(C/m)/W
wherein S is the selection coefficient of the back source node, C is the third code corresponding to the back source node, m is the total number of values in the mapping space of the pseudo-random data distribution algorithm, and W is the weight of the back source node.
5. The method of claim 1, wherein the pseudo-random data distribution algorithm comprises a CRUSH algorithm.
6. The method of claim 1, further comprising:
in a case that the back source nodes are changed, acquiring node identifiers of a plurality of changed back source nodes; and
calculating a second code corresponding to each back source node based on the node identification of each of the plurality of back source nodes by using a hash algorithm.
7. A scheduling apparatus for back source traffic, comprising:
a first computing module, configured to compute, in response to receiving a back source request, a first code corresponding to the back source request;
a second calculation module, configured to determine, for each back source node of a plurality of back source nodes, a third code corresponding to the first code and a second code of the back source node in a mapping space of a pseudo-random data distribution algorithm according to the first code and the second code, wherein the second code is calculated based on a node identifier of the back source node;
a node determining module, configured to determine a selection coefficient of each back source node according to the third code and the weight of each back source node, and to determine a target back source node in the plurality of back source nodes according to the selection coefficient of each back source node; and
an allocation module, configured to allocate the back source request to the target back source node.
8. The apparatus of claim 7, wherein the first computing module comprises:
a request identifier acquisition sub-module, configured to acquire the request identifier of the back source request; and
a first computing sub-module, configured to compute the first code based on the request identification by using a hash algorithm.
9. The apparatus of claim 7, further comprising:
a first acquisition module, configured to acquire the node identification of each back source node; and
a second calculating module, configured to calculate the second code based on the node identification by using a hash algorithm.
10. The apparatus of claim 7, wherein the node determining module comprises:
a third calculation sub-module, configured to calculate, for each back source node, a selection coefficient of the back source node according to the following formula:
S = ln(C/m)/W
wherein S is the selection coefficient of the back source node, C is the third code corresponding to the back source node, m is the total number of values in the mapping space of the pseudo-random data distribution algorithm, and W is the weight of the back source node.
11. The apparatus of claim 7, wherein the pseudo-random data distribution algorithm comprises a CRUSH algorithm.
12. The apparatus of claim 7, further comprising:
a second acquisition module, configured to acquire node identifiers of a plurality of changed back source nodes in a case that the back source nodes are changed; and
a fourth calculation module, configured to calculate, by using a hash algorithm, a second code corresponding to each back source node based on a node identifier of each of the plurality of back source nodes.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-6.
14. A non-transitory computer readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-6.
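Read together, the method claims describe a weighted rendezvous-style selection: hash the request identifier to a first code, hash each node identifier to a second code, mix the two into a third code in the algorithm's mapping space, convert it to a selection coefficient S = ln(C/m)/W, and pick a target among the coefficients. The sketch below illustrates this flow under stated assumptions: MD5 truncated to 32 bits as the hash, m = 2^32 as the mapping-space size, and selecting the node with the maximum S (the straw2-style draw used by CRUSH-like algorithms) — the claims excerpt does not state how the target is chosen from the coefficients, so that choice, like all names here (`pick_back_source_node`, `third_code`), is an illustrative assumption rather than the patented implementation.

```python
import hashlib
import math

M = 2 ** 32  # assumed total number of values in the mapping space

def _hash(value: str) -> int:
    """Map an identifier into [0, M) — used for both first and second codes."""
    digest = hashlib.md5(value.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big")

def third_code(first: int, second: int) -> int:
    """Mix the request code and a node code into a third code.

    Stand-in for the pseudo-random data distribution algorithm in the claims.
    """
    return _hash(f"{first}:{second}")

def pick_back_source_node(request_id: str, nodes: dict[str, float]) -> str:
    """nodes maps node_id -> weight; returns the chosen back source node."""
    first = _hash(request_id)
    best_node, best_s = None, -math.inf
    for node_id, weight in nodes.items():
        second = _hash(node_id)
        c = third_code(first, second)
        # Selection coefficient S = ln(C/m) / W. Since C/m lies in (0, 1],
        # ln(C/m) <= 0, and dividing by a larger weight pulls S toward 0
        # (i.e., makes it larger), so heavier nodes win proportionally more
        # often. The +1 avoids ln(0) when c == 0.
        s = math.log((c + 1) / M) / weight
        if s > best_s:
            best_node, best_s = node_id, s
    return best_node
```

A useful property of this style of selection is stability under node changes: adding or removing a node only recomputes that node's second code and coefficient, so only requests whose winning coefficient involved the changed node move, which matches the re-hashing step described in claims 6 and 12.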
CN202111237433.5A 2021-10-22 2021-10-22 Method, device, equipment and storage medium for scheduling back source traffic Active CN113992760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111237433.5A CN113992760B (en) 2021-10-22 2021-10-22 Method, device, equipment and storage medium for scheduling back source traffic

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111237433.5A CN113992760B (en) 2021-10-22 2021-10-22 Method, device, equipment and storage medium for scheduling back source traffic

Publications (2)

Publication Number Publication Date
CN113992760A CN113992760A (en) 2022-01-28
CN113992760B true CN113992760B (en) 2024-03-01

Family

ID=79740719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111237433.5A Active CN113992760B (en) 2021-10-22 2021-10-22 Method, device, equipment and storage medium for scheduling back source traffic

Country Status (1)

Country Link
CN (1) CN113992760B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105450734A (en) * 2015-11-09 2016-03-30 上海爱数信息技术股份有限公司 Distributed storage CEPH data distribution optimization method
CN105991459A (en) * 2015-02-15 2016-10-05 上海帝联信息科技股份有限公司 Source-returning route distribution method, apparatus and system of CDN node
WO2019057212A1 (en) * 2017-09-22 2019-03-28 中兴通讯股份有限公司 Method, apparatus and device for scheduling service within cdn node, and storage medium
CN112153160A (en) * 2020-09-30 2020-12-29 北京金山云网络技术有限公司 Access request processing method and device and electronic equipment
CN113037869A (en) * 2021-04-14 2021-06-25 北京百度网讯科技有限公司 Method and apparatus for back-sourcing of content distribution network system
CN113364877A (en) * 2021-06-11 2021-09-07 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and medium
CN113472852A (en) * 2021-06-02 2021-10-01 乐视云计算有限公司 CDN node back-source method, device and equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107483614B (en) * 2017-08-31 2021-01-22 京东方科技集团股份有限公司 Content scheduling method and communication network based on CDN (content delivery network) and P2P network

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105991459A (en) * 2015-02-15 2016-10-05 上海帝联信息科技股份有限公司 Source-returning route distribution method, apparatus and system of CDN node
CN105450734A (en) * 2015-11-09 2016-03-30 上海爱数信息技术股份有限公司 Distributed storage CEPH data distribution optimization method
WO2019057212A1 (en) * 2017-09-22 2019-03-28 中兴通讯股份有限公司 Method, apparatus and device for scheduling service within cdn node, and storage medium
CN112153160A (en) * 2020-09-30 2020-12-29 北京金山云网络技术有限公司 Access request processing method and device and electronic equipment
CN113037869A (en) * 2021-04-14 2021-06-25 北京百度网讯科技有限公司 Method and apparatus for back-sourcing of content distribution network system
CN113472852A (en) * 2021-06-02 2021-10-01 乐视云计算有限公司 CDN node back-source method, device and equipment
CN113364877A (en) * 2021-06-11 2021-09-07 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and medium

Also Published As

Publication number Publication date
CN113992760A (en) 2022-01-28

Similar Documents

Publication Publication Date Title
CN107545338B (en) Service data processing method and service data processing system
CN114253979B (en) Message processing method and device and electronic equipment
WO2022111313A1 (en) Request processing method and micro-service system
CN110933181B (en) Routing method, device and system and electronic equipment
CN112615795A (en) Flow control method and device, electronic equipment, storage medium and product
CN113342517A (en) Resource request forwarding method and device, electronic equipment and readable storage medium
CN113992760B (en) Method, device, equipment and storage medium for scheduling back source traffic
CN115190180A (en) Method and device for scheduling network resource request during sudden increase of network resource request
CN113778645A (en) Task scheduling method, device and equipment based on edge calculation and storage medium
CN114793234B (en) Message processing method, device, equipment and storage medium
CN115334040B (en) Method and device for determining Internet Protocol (IP) address of domain name
CN114615273B (en) Data transmission method, device and equipment based on load balancing system
CN114884945B (en) Data transmission method, cloud server, device, system and storage medium
CN115086300B (en) Video file scheduling method and device
CN114827055B (en) Data mirroring method and device, electronic equipment and switch cluster
CN116996481B (en) Live broadcast data acquisition method and device, electronic equipment and storage medium
CN114900562A (en) Resource acquisition method and device, electronic equipment and storage medium
CN115442432A (en) Control method, device, equipment and storage medium
CN117793188A (en) Resource flow limiting method, device, system, equipment and storage medium
CN117951150A (en) Database connection method, apparatus, device, storage medium, and program product
CN115509690A (en) Virtual machine allocation method and device, electronic equipment and storage medium
CN113946414A (en) Task processing method and device and electronic equipment
CN115526507A (en) Battery replacement station management method and device, electronic equipment and storage medium
CN116886775A (en) External exposure method and device, cluster deployment system and storage medium
CN116594737A (en) Processor resource allocation method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant