CN113992760A - Back source flow scheduling method, device, equipment and storage medium - Google Patents


Info

Publication number
CN113992760A
CN113992760A (application CN202111237433.5A; granted publication CN113992760B)
Authority
CN
China
Prior art keywords: source, source node, node, code, request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111237433.5A
Other languages
Chinese (zh)
Other versions
CN113992760B (en)
Inventor
汪晨飞
单腾飞
高俊文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111237433.5A
Publication of CN113992760A
Application granted
Publication of CN113992760B
Current legal status: Active

Landscapes

  • Information Transfer Between Computers (AREA)

Abstract

The disclosure provides a back-source traffic scheduling method, apparatus, device, storage medium, and program product, relating to the field of cloud computing and in particular to content delivery network technology. The specific implementation scheme is as follows: in response to receiving a back-source request, calculating a first code corresponding to the back-source request; for each back-source node of a plurality of back-source nodes, calculating a third code according to the first code and a second code corresponding to that back-source node; determining a target back-source node among the plurality of back-source nodes according to the third codes and the weight of each back-source node; and distributing the back-source request to the target back-source node.

Description

Back source flow scheduling method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of cloud computing technologies, and in particular, to the field of content distribution network technologies.
Background
A CDN (Content Delivery Network) is an intelligent virtual network built on top of an existing network and consists of node servers deployed across it. Based on comprehensive information such as network traffic, each node's connections and load, distance to the user, and response time, a CDN system can schedule a user's request in real time to the node server closest to that user, so that the user obtains the required content nearby. This reduces network congestion and improves both the response speed and the hit rate of website access.
Disclosure of Invention
The present disclosure provides a back source traffic scheduling method, apparatus, device, storage medium, and program product.
According to an aspect of the present disclosure, a method for scheduling back-source traffic is provided, including: in response to receiving a back-source request, calculating a first code corresponding to the back-source request; for each back-source node of a plurality of back-source nodes, calculating a third code according to the first code and a second code corresponding to that back-source node; determining a target back-source node among the plurality of back-source nodes according to the third codes and the weight of each back-source node; and distributing the back-source request to the target back-source node.
According to another aspect of the present disclosure, there is provided a back source traffic scheduling apparatus, including: the first calculation module is used for responding to the received back-source request and calculating a first code corresponding to the back-source request; a second calculating module, configured to calculate, for each back-source node in the multiple back-source nodes, a third code according to the first code and a second code corresponding to each back-source node; a node determining module, configured to determine a target back-to-source node in the multiple back-to-source nodes according to the third code and the weight of each back-to-source node; and an allocation module for allocating the back-source request to the target back-source node.
Another aspect of the present disclosure provides an electronic device including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of the embodiments of the present disclosure.
According to another aspect of the disclosed embodiments, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method shown in the disclosed embodiments.
According to another aspect of the embodiments of the present disclosure, there is provided a computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the steps of the method shown in the embodiments of the present disclosure.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a schematic view of an application scenario of a back source traffic scheduling method, apparatus, electronic device and storage medium according to an embodiment of the present disclosure;
fig. 2 schematically illustrates a flow chart of a method of scheduling back source traffic according to an embodiment of the present disclosure;
fig. 3 schematically shows a schematic diagram of a scheduling method of back-source traffic according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of a method of scheduling back source traffic according to another embodiment of the present disclosure;
fig. 5 schematically illustrates a block diagram of a scheduling apparatus of back-source traffic according to an embodiment of the present disclosure; and
FIG. 6 schematically shows a block diagram of an example electronic device that may be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
An application scenario of the method and apparatus provided by the present disclosure will be described below with reference to fig. 1.
Fig. 1 is a schematic view of an application scenario of a back source traffic scheduling method, an apparatus, an electronic device, and a storage medium according to an embodiment of the disclosure.
As shown in fig. 1, the application scenario 100 includes end devices 111, 112, 113 and a CDN service cluster 120. The CDN service cluster 120 may include a plurality of nodes 121, 122, 123, and 124, among other things.
Users may interact with the CDN service cluster 120 over the network using end devices 111, 112, 113 to request data, etc. Various messaging client applications, such as shopping-like applications, web browser applications, search-like applications, instant messaging tools, mailbox clients, social platform software, etc. (examples only) may be installed on the terminal devices 111, 112, 113.
The terminal devices 111, 112, 113 may be various electronic devices having a display screen and supporting web browsing, including but not limited to smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
According to the embodiment of the present disclosure, when a user wants to acquire a certain data resource, the user may send an acquisition request for the data resource to the CDN service cluster 120 through the terminal device 111, 112, or 113. The node 121 in the CDN service cluster 120 may receive the acquisition request. If the node 121 does not store the data resource locally, the node 121 may perform a back-to-source operation to obtain it. For example, one of the upstream nodes 122, 123, and 124 above the node 121 may be selected, and a back-source request for the data resource may be sent to the selected node. When node 122, 123, or 124 receives the back-source request, it determines whether the data resource is stored locally. If so, it returns the data resource to node 121. If not, it requests the data resource from a further upstream node or from the origin server and then returns it to node 121. After the node 121 acquires the data resource, it sends the acquired data resource to the terminal device used by the user.
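The back-to-source flow described above can be sketched as follows. This is a hypothetical illustration (the class and method names are invented for the example): an edge node serves from its local cache when possible and otherwise forwards the request upstream, with the origin tier simulated as a node that always has the content.

```python
class CdnNode:
    def __init__(self, name, upstream=None):
        self.name = name
        self.cache = {}           # resource URI -> content
        self.upstream = upstream  # parent node, or None for the origin tier

    def fetch(self, uri):
        if uri in self.cache:      # cache hit: serve locally
            return self.cache[uri]
        if self.upstream is None:  # origin: pretend to generate the resource
            return f"content-of:{uri}"
        # cache miss: go back to source, then cache the result locally
        content = self.upstream.fetch(uri)
        self.cache[uri] = content
        return content

origin = CdnNode("origin")
edge = CdnNode("edge-121", upstream=origin)
edge.fetch("/img/logo.png")   # miss -> back-to-source -> cached
edge.fetch("/img/logo.png")   # hit: served from the local cache
```

The scheduling method of this disclosure concerns the step where the edge node chooses which upstream node to send the back-source request to.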
It should be understood that the number of end devices and CDN service clusters in fig. 1, as well as the number of nodes in a CDN service cluster, are merely illustrative. There may be any number of terminal devices, CDN service clusters, and nodes, as desired for implementation.
In the technical scheme of the disclosure, the collection, storage, use, processing, transmission, provision, disclosure and other processing of the related data resources, the source returning request and other data all accord with the regulations of related laws and regulations, and do not violate the good custom of the public order.
Fig. 2 schematically shows a flow chart of a scheduling method of back-source traffic according to an embodiment of the present disclosure.
As shown in fig. 2, the method 200 includes, in response to receiving a back-to-source request, calculating a first encoding corresponding to the back-to-source request at operation S210.
According to embodiments of the present disclosure, a back-to-source request may be used to request that a back-to-source operation be performed. The first codes corresponding to different back-source requests are different. Illustratively, the first encoding may comprise a multi-bit number, for example.
According to embodiments of the present disclosure, the first encoding may be calculated based on the request identifier of the back-source request, for example, using a hashing algorithm. The request identifier of the back-to-source request may include a request URI (Uniform Resource Identifier) and the like.
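For illustration, the first code can be obtained by hashing the request URI down to an unsigned 64-bit integer. The disclosure does not fix a particular hash function, so blake2b with an 8-byte digest is an assumed stand-in here:

```python
import hashlib

def first_code(request_uri: str) -> int:
    """Map a request identifier (e.g. a request URI) to an unsigned
    64-bit number; blake2b is an illustrative choice, not mandated."""
    digest = hashlib.blake2b(request_uri.encode("utf-8"),
                             digest_size=8).digest()
    return int.from_bytes(digest, "big")  # value in [0, 2**64 - 1]

a = first_code("/video/clip.mp4?quality=hd")
assert 0 <= a < 2**64
# Identical requests always map to the identical first code.
assert a == first_code("/video/clip.mp4?quality=hd")
```

The second code of each back-source node can be computed the same way from its node identifier (IP address or node name), as described below.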
Then, in operation S220, for each back-source node of the plurality of back-source nodes, a third code is calculated according to the first code and a second code corresponding to each back-source node.
According to an embodiment of the present disclosure, the back-source node may include, for example, a node in a CDN service cluster.
According to an embodiment of the present disclosure, the second code corresponding to each back-source node may be pre-computed. And the second codes corresponding to different back source nodes are different. Illustratively, the second encoding may comprise a multi-bit number, for example. For example, a hash algorithm may be utilized to compute the second encoding for the back-to-source node based on the node identification of the back-to-source node. The node identifier of the back-to-source node may include, for example, an IP address, a node name, and the like of the back-to-source node.
According to an embodiment of the present disclosure, a third code corresponding to the first code and the second code in a mapping space of the pseudo random data distribution algorithm may be determined, for example, using a pseudo random data distribution algorithm. Wherein a pseudo-random data distribution algorithm may be used to map two original values to a value within the mapping space.
In operation S230, a target back-to-source node of the plurality of back-to-source nodes is determined according to the third encoding and the weight of each back-to-source node.
According to an embodiment of the present disclosure, the selection coefficient of each back-to-source node may be determined, for example, according to the third encoding and the weight of each back-to-source node. And then determining a target back-source node in the plurality of back-source nodes according to the selection coefficient of each back-source node. The weights of the nodes can be set according to actual needs. For example, the weight of the node may be set according to the size of the service capability of the node, and the larger the service capability is, the larger the corresponding weight is.
In operation S240, a back-source request is assigned to a target back-source node.
According to an embodiment of the present disclosure, a target back-to-source node may be caused to perform a back-to-source operation by assigning a back-to-source request to the target back-to-source node.
According to the embodiment of the disclosure, the same back-source request and the same back-source node always yield the same third code, and therefore the same selection coefficient. Selecting the back-source node according to the selection coefficient thus ensures that the same back-source request is served by the same back-source node, which improves the back-source hit rate.
A method of determining a back-source node's selection coefficient is described below.
According to an embodiment of the present disclosure, the selection coefficient of a back-source node may be calculated, for example, according to the following formula:
S=ln(C/m)/W
wherein S is a selection coefficient of the back source node, C is a third code corresponding to the back source node, m is the total number of values in a mapping space of a pseudo-random data distribution algorithm, and W is the weight of the back source node.
According to the embodiment of the disclosure, it follows from the above formula that, for a fixed third code, a back-source node's selection coefficient increases with its weight. The back-source node with the largest selection coefficient among all back-source nodes can therefore be determined as the target node, so that when traffic is distributed according to the selection coefficients, the traffic is in effect distributed according to the weights.
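A minimal sketch of the selection coefficient, assuming m = 65536 as in the CRUSH-based embodiment described later; the +1 inside the logarithm guards against log(0) and is an implementation detail not specified in the text:

```python
import math

M = 65536  # total number of values in the mapping space

def selection_coefficient(c: int, w: float) -> float:
    # S = ln(C / m) / W.  C/m lies in (0, 1], so ln is <= 0 and S is
    # negative; dividing by a larger weight W pulls S toward zero,
    # i.e. makes it larger.  The +1 avoids log(0) when C == 0.
    return math.log((c + 1) / M) / w

# With the same third code, the heavier node gets the larger coefficient.
coeffs = {"node-1": selection_coefficient(30000, 1.0),
          "node-2": selection_coefficient(30000, 2.0)}
target = max(coeffs, key=coeffs.get)
```

Here `target` is `"node-2"`: at equal third codes, the node with weight 2.0 has the coefficient closer to zero and wins the comparison.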
The method for scheduling back-source traffic shown above is further described with reference to fig. 3 in conjunction with a specific embodiment. Those skilled in the art will appreciate that the following example embodiments are only for the understanding of the present disclosure, and the present disclosure is not limited thereto.
Fig. 3 schematically shows a schematic diagram of a scheduling method of back-source traffic according to an embodiment of the present disclosure.
As shown in fig. 3, according to the embodiment of the present disclosure, in the case of receiving a back-source request, a request identifier of the back-source request may be obtained. A first encoding is then calculated based on the request identifier using a hashing algorithm.
For example, in this embodiment, an unsigned 64-bit number A may be calculated from the request identifier using a hash algorithm and used as the first code.
In this embodiment, the back-source request corresponds to N candidate back-source nodes. A node identification for each of the N back-to-source nodes may be obtained. A second encoding for each node is then calculated based on each node identification using a hashing algorithm.
For example, in this embodiment, for the N back-source nodes, N unsigned 64-bit numbers B1, B2, ..., BN may be calculated with a hash algorithm and used as the second codes, one for each back-source node.
After the first code and the second code are obtained, a third code corresponding to each node may be calculated for the node based on the first code and the second code for the node using a pseudo-random data distribution algorithm.
Illustratively, in the present embodiment, the pseudo-random data distribution algorithm may include, for example, the CRUSH algorithm. The CRUSH algorithm may be implemented, for example, by a CRUSH(x, y) function, whose inputs are two unsigned 64-bit integers x and y, whose mapping space is 0-65535, and whose output is an integer C in the range 0-65535. By inputting the first code and the second code of each back-source node into the CRUSH function, that is, by calculating CRUSH(A, B1), CRUSH(A, B2), ..., CRUSH(A, BN), the values C1, C2, ..., CN can be obtained as the third codes corresponding to each back-source node.
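As a runnable stand-in for the CRUSH(x, y) function described above: the real CRUSH algorithm (known from Ceph) uses its own integer mixing functions, so a sha256-based reduction to the 0-65535 mapping space is substituted here purely for illustration, not as the patented implementation.

```python
import hashlib

def crush(x: int, y: int) -> int:
    """Stand-in for CRUSH(x, y): combine two unsigned 64-bit integers
    and reduce the result to the mapping space 0..65535."""
    data = x.to_bytes(8, "big") + y.to_bytes(8, "big")
    h = hashlib.sha256(data).digest()
    return int.from_bytes(h[:2], "big")  # integer C in [0, 65535]

A = 0x123456789ABCDEF0             # first code of the back-source request
Bs = [0x1111, 0x2222, 0x3333]      # second codes of three back-source nodes
Cs = [crush(A, b) for b in Bs]     # third codes C1, C2, C3, one per node
assert all(0 <= c <= 65535 for c in Cs)
```

Any deterministic pairwise mix with roughly uniform output over the mapping space serves the same role in the scheme.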
Next, for each node, a selection coefficient for the back-to-source node may be determined based on the third coding and the weight of the back-to-source node. And then determining a target back-source node in the plurality of back-source nodes according to the selection coefficient of each back-source node. For example, the magnitude of the selection coefficient of each back-source node may be compared, and the back-source node with the largest selection coefficient may be determined as the target back-source node.
According to an embodiment of the present disclosure, the selection coefficient of a back-source node may be calculated, for example, according to the following formula:
S=ln(C/65536)/W
wherein S is a selection coefficient of the back source node, C is a third code corresponding to the back source node, 65536 is the total number of values in the mapping space of the CRUSH algorithm, and W is the weight of the back source node.
Based on this, S1 = ln(C1/65536)/W1, S2 = ln(C2/65536)/W2, ..., SN = ln(CN/65536)/WN may be calculated, where W1, W2, ..., WN are the set weights of the 1st, 2nd, ..., Nth back-source nodes, respectively.
Then, the magnitudes of S1 through SN can be compared. If the ith selection coefficient Si = ln(Ci/65536)/Wi is the largest, the back-source request is distributed to the ith back-source node.
According to embodiments of the present disclosure, the output of the CRUSH function may be considered approximately uniformly distributed over 0-65535, i.e., the third code C calculated by the CRUSH function may be approximately regarded as a uniform random variable on 0-65535. Accordingly, from the formula S = ln(C/65536)/W, the selection coefficient S is approximately the negative of an exponentially distributed random variable whose rate parameter is the weight W. It can further be derived that the probability that a back-source request is assigned to the ith back-source node is proportional to the ith node's weight; that is, when traffic is allocated according to the selection coefficients, the expected traffic allocation follows the weights.
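Putting the pieces together, a hypothetical end-to-end sketch of the selection in operations S210 through S240 (the hash functions and the +1 guard against log(0) are assumptions; the disclosure does not fix them):

```python
import hashlib
import math

M = 65536  # total number of values in the mapping space

def h64(s: str) -> int:
    # Assumed stand-in hash producing an unsigned 64-bit code.
    return int.from_bytes(
        hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")

def crush(x: int, y: int) -> int:
    # Assumed stand-in for CRUSH(x, y), reduced to 0..65535.
    d = hashlib.sha256(x.to_bytes(8, "big") + y.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:2], "big")

def pick_node(request_uri: str, nodes: dict) -> str:
    """nodes maps node identifier -> weight; returns the node with max Si."""
    a = h64(request_uri)                  # first code A
    best, best_s = None, -math.inf
    for node_id, w in nodes.items():
        c = crush(a, h64(node_id))        # third code Ci = CRUSH(A, Bi)
        s = math.log((c + 1) / M) / w     # Si = ln(Ci/65536)/Wi
        if s > best_s:
            best, best_s = node_id, s
    return best

nodes = {"10.0.0.1": 1.0, "10.0.0.2": 2.0, "10.0.0.3": 1.0}
# Deterministic: the same back-source request always lands on the same node.
assert pick_node("/a.css", nodes) == pick_node("/a.css", nodes)
```

Because the whole pipeline is a pure function of the request identifier, node identifiers, and weights, no shared state or lookup table has to be maintained between requests.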
According to the embodiment of the disclosure, the fixed mapping from back-source requests to first codes and from back-source nodes to second codes ensures that the same back-source request is served by the same node, improving the back-source hit rate. In addition, the back-source traffic scheduling method of the embodiments of the present disclosure has simple calculation logic and does not need to maintain a complex data structure.
Fig. 4 schematically shows a flow chart of a method of scheduling back-source traffic according to another embodiment of the present disclosure.
As shown in fig. 4, the method 400 includes, in response to receiving a back-to-source request, calculating a first encoding corresponding to the back-to-source request at operation S410.
Then, in operation S420, it is determined whether the back-source nodes have changed. If the back-source nodes have not changed, operation S430 is performed. If the back-source nodes have changed, operation S440 is performed.
In operation S430, a second code corresponding to each back-source node of the plurality of back-source nodes is obtained, and operation S460 is then performed.
In operation S440, the changed node identifications of the multiple back-to-source nodes are obtained.
In operation S450, a second code corresponding to each back-source node of the plurality of back-source nodes is calculated based on the node identification of each back-source node using a hash algorithm.
In operation S460, for each back-source node of the plurality of back-source nodes, a third code is calculated according to the first code and a second code corresponding to each back-source node.
In operation S470, a target back-to-source node of the plurality of back-to-source nodes is determined according to the third encoding and the weight of each back-to-source node.
In operation S480, a back-source request is assigned to a target back-source node.
According to the embodiment of the disclosure, when operations such as bringing new machines online, removing old machines, or scaling machines up or down occur in a CDN service cluster, the back-source nodes change, and the corresponding node identifiers change as well.
According to the back-source traffic scheduling method of the embodiments of the present disclosure, consider a given back-source request. When the value Si recalculated after the ith node's weight changes becomes larger than the maximum of the S values before the change, the request is reassigned to the node whose weight changed, and the corresponding traffic migrates to that node. Conversely, when Si was the maximum before the weight change but is no longer the maximum afterwards, the request is reassigned from the ith node to another node, and the corresponding traffic migrates away from the node whose weight changed. For a given back-source request, the selection coefficient S computed from the request identifier, a node identifier, and that node's weight is fully determined. Therefore, when the weight of node i changes, only the value Si corresponding to that node changes; the S values of the other nodes, and their relative order, remain unchanged. As a result, no traffic shifts between nodes whose weights did not change: traffic migrates only between the changed node and the others. This reduces the volume of traffic migration caused by remapping requests when back-source node resources change, and in turn reduces back-source bandwidth cost.
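The stability property argued above can be checked with a small simulation. This is a hypothetical sketch using the same assumed stand-ins as earlier examples (blake2b for the codes, a sha256-based substitute for CRUSH, and a +1 guard against log(0)); only the weight of one node, n2, is changed, and assignments are compared before and after:

```python
import hashlib
import math

M = 65536  # size of the mapping space, as in the CRUSH-based embodiment

def h64(s: str) -> int:
    # Unsigned 64-bit code for a request or node identifier (assumed hash).
    return int.from_bytes(
        hashlib.blake2b(s.encode(), digest_size=8).digest(), "big")

def crush(x: int, y: int) -> int:
    # Stand-in for CRUSH(x, y): mixes two 64-bit ints into 0..65535.
    d = hashlib.sha256(x.to_bytes(8, "big") + y.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:2], "big")

def pick(uri: str, nodes: dict) -> str:
    # Target node = argmax of S = ln(C / m) / W (+1 guards log(0)).
    a = h64(uri)
    return max(nodes,
               key=lambda n: math.log((crush(a, h64(n)) + 1) / M) / nodes[n])

before = {"n1": 1.0, "n2": 1.0, "n3": 1.0}
after  = {"n1": 1.0, "n2": 3.0, "n3": 1.0}  # only n2's weight changed

for i in range(200):
    uri = f"/resource/{i}"
    old, new = pick(uri, before), pick(uri, after)
    # Any request that moved must involve the changed node n2; requests
    # never shuffle between the unchanged nodes n1 and n3.
    assert old == new or "n2" in (old, new)
```

Since the S values of the unchanged nodes are identical before and after, a request can change its target only if n2's new score overtakes the old maximum (or ceases to be it); traffic therefore migrates only between n2 and the rest.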
The scheduling apparatus of the source return traffic according to the embodiment of the present disclosure is explained below with reference to fig. 5.
Fig. 5 schematically shows a block diagram of a scheduling apparatus of back-source traffic according to an embodiment of the present disclosure.
As shown in fig. 5, the scheduling apparatus 500 for back-source traffic includes a first calculation module 510, a second calculation module 520, a node determination module 530, and an allocation module 540.
A first calculation module 510, configured to calculate, in response to receiving the back-to-source request, a first code corresponding to the back-to-source request.
A second calculating module 520, configured to calculate, for each back-source node in the multiple back-source nodes, a third code according to the first code and a second code corresponding to each back-source node.
A node determining module 530, configured to determine a target back-to-source node in the multiple back-to-source nodes according to the third encoding and the weight of each back-to-source node.
An assigning module 540, configured to assign the back-source request to the target back-source node.
According to an embodiment of the present disclosure, the first calculation module includes a request identification acquisition submodule and a first calculation submodule. The request identifier obtaining submodule is used for obtaining the request identifier of the back source request. And the first calculation sub-module is used for calculating the first code based on the request identifier by utilizing a hash algorithm.
According to an embodiment of the present disclosure, the apparatus further includes a first obtaining module and a third calculating module. The first obtaining module is configured to obtain a node identifier of each back-to-source node. And the third calculation module is used for calculating the second code based on the node identification by utilizing a hash algorithm.
According to an embodiment of the disclosure, the second calculation module includes a second calculation submodule for determining a third code corresponding to the first code and the second code in a mapping space of the pseudo-random data distribution algorithm using the pseudo-random data distribution algorithm.
According to an embodiment of the present disclosure, the node determination module includes a selection coefficient determination submodule and a target node determination submodule. And the selection coefficient determining submodule is used for determining the selection coefficient of each back-source node according to the third code and the weight of each back-source node. And the target node determining submodule is used for determining a target back-source node in the plurality of back-source nodes according to the selection coefficient of each back-source node.
According to an embodiment of the present disclosure, the selection coefficient determination submodule includes a fourth calculation unit, configured to calculate, for each back-to-source node, a selection coefficient of the back-to-source node according to the following formula:
S=ln(C/m)/W
wherein S is a selection coefficient of the back source node, C is a third code corresponding to the back source node, m is the total number of values in a mapping space of a pseudo-random data distribution algorithm, and W is the weight of the back source node.
According to an embodiment of the present disclosure, the pseudo random data distribution algorithm may include a CRUSH algorithm, among others.
According to the embodiment of the disclosure, the device further comprises a second obtaining module and a fifth calculating module. The second obtaining module is configured to obtain the changed node identifiers of the multiple back-to-source nodes under the condition that the back-to-source nodes are changed. And the fifth calculation module is used for calculating a second code corresponding to each back-source node based on the node identification of each back-source node in the plurality of back-source nodes by utilizing a hash algorithm.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 6 schematically shows a block diagram of an example electronic device 600 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 6, the apparatus 600 includes a computing unit 601, which can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)602 or a computer program loaded from a storage unit 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data required for the operation of the device 600 can also be stored. The calculation unit 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
A number of components in the device 600 are connected to the I/O interface 605, including: an input unit 606 such as a keyboard, a mouse, or the like; an output unit 607 such as various types of displays, speakers, and the like; a storage unit 608, such as a magnetic disk, optical disk, or the like; and a communication unit 609 such as a network card, modem, wireless communication transceiver, etc. The communication unit 609 allows the device 600 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 601 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of the computing unit 601 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and so forth. The calculation unit 601 performs the various methods and processes described above, such as a scheduling method of back-source traffic. For example, in some embodiments, the method of scheduling of back-source traffic may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 608. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 600 via the ROM 602 and/or the communication unit 609. When the computer program is loaded into the RAM 603 and executed by the computing unit 601, one or more steps of the method for scheduling back-to-source traffic described above may be performed. Alternatively, in other embodiments, the computing unit 601 may be configured to perform the back-source traffic scheduling method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The server may be a cloud server, also called a cloud computing server or a cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved, and the present disclosure is not limited herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. A method for scheduling back-source traffic, comprising:
in response to receiving a back-source request, calculating a first code corresponding to the back-source request;
for each back-source node of a plurality of back-source nodes, calculating a third code according to the first code and a second code corresponding to the back-source node;
determining a target back-source node among the plurality of back-source nodes according to the third codes and a weight of each back-source node; and
distributing the back-source request to the target back-source node.
2. The method of claim 1, wherein calculating the first code corresponding to the back-source request comprises:
acquiring a request identifier of the back-source request; and
calculating the first code based on the request identifier using a hash algorithm.
3. The method of claim 1, further comprising:
acquiring a node identifier of each back-source node; and
calculating the second code based on the node identifier using a hash algorithm.
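As a concrete illustration of the hashing steps in claims 2 and 3, both the request identifier and each node identifier can be reduced to fixed-range integer codes. This is a minimal sketch under stated assumptions: the claims only require "a hash algorithm", so MD5, the 2**16 code space, and the identifiers used below are arbitrary stand-in choices, not details fixed by the patent.

```python
import hashlib

# Hypothetical sketch of claims 2-3: hash an identifier into a
# fixed-range integer code. MD5 and the 2**16 space are stand-ins.
def encode(identifier: str, space: int = 2 ** 16) -> int:
    digest = hashlib.md5(identifier.encode("utf-8")).hexdigest()
    return int(digest, 16) % space

first_code = encode("GET /static/logo.png")  # first code, from the request identifier
second_code = encode("origin-node-01")       # second code, from a node identifier
assert 0 <= first_code < 2 ** 16 and 0 <= second_code < 2 ** 16
# The same identifier always yields the same code, so repeated
# requests for the same resource map consistently.
assert encode("GET /static/logo.png") == first_code
```

Because the codes are pure functions of the identifiers, they can be recomputed independently on any scheduler instance without shared state.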
4. The method of claim 1, wherein calculating the third code according to the first code and the second code corresponding to each back-source node comprises:
determining, by using a pseudo-random data distribution algorithm, the third code corresponding to the first code and the second code in a mapping space of the pseudo-random data distribution algorithm.
5. The method of claim 4, wherein determining the target back-source node among the plurality of back-source nodes according to the third codes and the weight of each back-source node comprises:
determining a selection coefficient of each back-source node according to the third code and the weight of the back-source node; and
determining the target back-source node among the plurality of back-source nodes according to the selection coefficients.
6. The method of claim 5, wherein determining the selection coefficient of each back-source node according to the third code and the weight of the back-source node comprises:
for each back-source node, calculating the selection coefficient of the back-source node according to the following formula:
S=ln(C/m)/W
wherein S is the selection coefficient of the back-source node, C is the third code corresponding to the back-source node, m is the total number of values in the mapping space of the pseudo-random data distribution algorithm, and W is the weight of the back-source node.
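The formula can be read as follows: since 0 < C/m ≤ 1, ln(C/m) is non-positive, so dividing by a larger weight W pulls S toward zero, i.e. makes it larger. If, as in a straw2-style draw, the node with the largest coefficient is chosen, heavier nodes are favored. A minimal sketch with hypothetical numeric values:

```python
import math

# Selection coefficient S = ln(C/m) / W from claim 6.
# C: third code drawn for the node, m: size of the mapping space,
# W: scheduling weight of the node. Values below are hypothetical.
def selection_coefficient(C: int, m: int, W: float) -> float:
    # 0 < C/m <= 1, so ln(C/m) <= 0; a larger weight W pulls S
    # toward zero (i.e. makes it larger).
    return math.log(C / m) / W

# With the same third code, the heavier node gets the larger
# coefficient, so a pick-the-maximum rule favors it:
s_light = selection_coefficient(C=30000, m=65536, W=1.0)
s_heavy = selection_coefficient(C=30000, m=65536, W=3.0)
assert s_heavy > s_light
```

Over many independent draws of C this yields a share of traffic roughly proportional to each node's weight, which is the same weighting trick used by CRUSH's straw2 buckets.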
7. The method of claim 4, wherein the pseudo-random data distribution algorithm comprises a CRUSH algorithm.
8. The method of claim 1, further comprising:
in a case that the back-source nodes change, acquiring node identifiers of the plurality of changed back-source nodes; and
calculating, by using a hash algorithm, the second code corresponding to each back-source node of the plurality of back-source nodes based on the node identifier of the back-source node.
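The method claims above can be sketched end to end as follows. This is a hypothetical illustration, not the patented implementation: the node names and weights, the SHA-256 stand-in for "a hash algorithm", and the simple hash-mixing stand-in for the pseudo-random data distribution step are all assumptions.

```python
import hashlib
import math

M = 2 ** 16  # assumed size of the mapping space

def code_of(identifier: str) -> int:
    # First/second code: hash of the request or node identifier.
    digest = hashlib.sha256(identifier.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big")

def third_code(first: int, second: int) -> int:
    # Combine the two codes into the mapping space [1, M]
    # (a stand-in for the pseudo-random data distribution step).
    mixed = hashlib.sha256(f"{first}:{second}".encode()).digest()
    return int.from_bytes(mixed[:8], "big") % M + 1

def pick_target(request_id: str, nodes: dict) -> str:
    # Claims 1, 5, 6: per-node selection coefficient, pick the maximum.
    first = code_of(request_id)
    best_node, best_s = None, float("-inf")
    for name, weight in nodes.items():
        c = third_code(first, code_of(name))
        s = math.log(c / M) / weight  # selection coefficient S
        if s > best_s:
            best_node, best_s = name, s
    return best_node

nodes = {"origin-a": 1.0, "origin-b": 2.0, "origin-c": 1.0}
target = pick_target("GET /video/1234", nodes)
assert target in nodes
# The same request always maps to the same node:
assert pick_target("GET /video/1234", nodes) == target
```

Because each node's coefficient depends only on that node's own code and the request's code, adding or removing a node (claim 8) only perturbs the requests whose maximum moves to or from that node, leaving most request-to-node assignments stable.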
9. A device for scheduling back-source traffic, comprising:
a first calculation module configured to, in response to receiving a back-source request, calculate a first code corresponding to the back-source request;
a second calculation module configured to, for each back-source node of a plurality of back-source nodes, calculate a third code according to the first code and a second code corresponding to the back-source node;
a node determination module configured to determine a target back-source node among the plurality of back-source nodes according to the third codes and a weight of each back-source node; and
a distribution module configured to distribute the back-source request to the target back-source node.
10. The apparatus of claim 9, wherein the first calculation module comprises:
a request identifier acquisition submodule configured to acquire a request identifier of the back-source request; and
a first calculation submodule configured to calculate the first code based on the request identifier using a hash algorithm.
11. The apparatus of claim 9, further comprising:
a first obtaining module, configured to obtain a node identifier of each back-to-source node; and
a third calculation module for calculating the second code based on the node identification using a hash algorithm.
12. The apparatus of claim 9, wherein the second computing module comprises:
a second calculation submodule, configured to determine, by using a pseudo-random data distribution algorithm, a third code corresponding to the first code and the second code in a mapping space of the pseudo-random data distribution algorithm.
13. The apparatus of claim 12, wherein the node determination module comprises:
a selection coefficient determination submodule configured to determine a selection coefficient of each back-source node according to the third code and the weight of the back-source node; and
a target node determination submodule configured to determine the target back-source node among the plurality of back-source nodes according to the selection coefficients.
14. The apparatus of claim 13, wherein the selection coefficient determination sub-module comprises:
a fourth calculating unit, configured to calculate, for each back-to-source node, a selection coefficient of the back-to-source node according to the following formula:
S=ln(C/m)/W
wherein S is a selection coefficient of the back-to-source node, C is a third code corresponding to the back-to-source node, m is a total number of values in a mapping space of the pseudo-random data distribution algorithm, and W is the weight of the back-to-source node.
15. The apparatus of claim 12, wherein the pseudo-random data distribution algorithm comprises a CRUSH algorithm.
16. The apparatus of claim 9, further comprising:
a second obtaining module configured to, in a case that the back-source nodes change, acquire node identifiers of the plurality of changed back-source nodes; and
a fifth calculation module configured to calculate, by using a hash algorithm, the second code corresponding to each back-source node of the plurality of back-source nodes based on the node identifier of the back-source node.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program/instructions, characterized in that the computer program/instructions, when executed by a processor, implement the steps of the method according to any one of claims 1-8.
CN202111237433.5A 2021-10-22 2021-10-22 Method, device, equipment and storage medium for scheduling back source traffic Active CN113992760B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111237433.5A CN113992760B (en) 2021-10-22 2021-10-22 Method, device, equipment and storage medium for scheduling back source traffic

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111237433.5A CN113992760B (en) 2021-10-22 2021-10-22 Method, device, equipment and storage medium for scheduling back source traffic

Publications (2)

Publication Number Publication Date
CN113992760A true CN113992760A (en) 2022-01-28
CN113992760B CN113992760B (en) 2024-03-01

Family

ID=79740719

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111237433.5A Active CN113992760B (en) 2021-10-22 2021-10-22 Method, device, equipment and storage medium for scheduling back source traffic

Country Status (1)

Country Link
CN (1) CN113992760B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105450734A (en) * 2015-11-09 2016-03-30 上海爱数信息技术股份有限公司 Distributed storage CEPH data distribution optimization method
CN105991459A (en) * 2015-02-15 2016-10-05 上海帝联信息科技股份有限公司 Source-returning route distribution method, apparatus and system of CDN node
US20190068701A1 (en) * 2017-08-31 2019-02-28 Boe Technology Group Co., Ltd. Content Scheduling Method Based on CDN and P2P Network, and Communication Network
WO2019057212A1 (en) * 2017-09-22 2019-03-28 中兴通讯股份有限公司 Method, apparatus and device for scheduling service within cdn node, and storage medium
CN112153160A (en) * 2020-09-30 2020-12-29 北京金山云网络技术有限公司 Access request processing method and device and electronic equipment
CN113037869A (en) * 2021-04-14 2021-06-25 北京百度网讯科技有限公司 Method and apparatus for back-sourcing of content distribution network system
CN113364877A (en) * 2021-06-11 2021-09-07 北京百度网讯科技有限公司 Data processing method, device, electronic equipment and medium
CN113472852A (en) * 2021-06-02 2021-10-01 乐视云计算有限公司 CDN node back-source method, device and equipment

Also Published As

Publication number Publication date
CN113992760B (en) 2024-03-01

Similar Documents

Publication Publication Date Title
CN113037869B (en) Method and apparatus for back-sourcing of content distribution network system
CN116431282A (en) Cloud virtual host server management method, device, equipment and storage medium
CN111062572A (en) Task allocation method and device
CN112241319A (en) Method, electronic device and computer program product for balancing load
CN113760982A (en) Data processing method and device
CN112615795A (en) Flow control method and device, electronic equipment, storage medium and product
CN113765969A (en) Flow control method and device
CN113992760B (en) Method, device, equipment and storage medium for scheduling back source traffic
CN115567602A (en) CDN node back-to-source method, device and computer readable storage medium
CN115543416A (en) Configuration updating method and device, electronic equipment and storage medium
CN115190180A (en) Method and device for scheduling network resource request during sudden increase of network resource request
JP2023031248A (en) Edge computing network, data transmission method, apparatus, device, and storage medium
KR20220139407A (en) Task assignment method and apparatus, electronic device and computer readable medium
CN113127561B (en) Method and device for generating service single number, electronic equipment and storage medium
CN113778645A (en) Task scheduling method, device and equipment based on edge calculation and storage medium
US20160277489A1 (en) User service access allocation method and system
CN112561301A (en) Work order distribution method, device, equipment and computer readable medium
CN113626175A (en) Data processing method and device
CN115442432B (en) Control method, device, equipment and storage medium
CN114793234B (en) Message processing method, device, equipment and storage medium
CN116996481B (en) Live broadcast data acquisition method and device, electronic equipment and storage medium
CN115361449B (en) Method, device, equipment and storage medium for adjusting IP resources
CN114449031B (en) Information acquisition method, device, equipment and storage medium
CN112861034B (en) Method, device, equipment and storage medium for detecting information
CN114900562A (en) Resource acquisition method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant