CN110825525B - Data resource back-source method and device - Google Patents

Data resource back-source method and device

Info

Publication number
CN110825525B
CN110825525B (application CN201911075777.3A)
Authority
CN
China
Prior art keywords
server
source
source returning
target
returning
Prior art date
Legal status
Active
Application number
CN201911075777.3A
Other languages
Chinese (zh)
Other versions
CN110825525A (en)
Inventor
盛骥斌
唐文滔
曾迅迅
曹问
邵灿
刘维
Current Assignee
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Original Assignee
Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority date
Filing date
Publication date
Application filed by Hunan Happly Sunshine Interactive Entertainment Media Co Ltd filed Critical Hunan Happly Sunshine Interactive Entertainment Media Co Ltd
Priority to CN201911075777.3A priority Critical patent/CN110825525B/en
Publication of CN110825525A publication Critical patent/CN110825525A/en
Application granted granted Critical
Publication of CN110825525B publication Critical patent/CN110825525B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00: Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer And Data Communications (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention relates to the technical field of computers, and provides a data resource back-to-source method and device. The method comprises the following steps: when a target server receives a resource request, judging whether the target server stores the data resource to be acquired corresponding to the resource request; when the data resource is not stored, forwarding the resource request to the back-to-source servers associated with the target server; when back-to-source information returned by each back-to-source server that successfully responds to the request is obtained, inputting each piece of back-to-source information into a preset machine learning model to determine the back-to-source speed of each back-to-source server, and determining the back-to-source server with the maximum back-to-source speed as the target back-to-source server; and determining the shortest back-to-source path according to the back-to-source information of the target back-to-source server, and triggering the target back-to-source server to return the data resource over the shortest back-to-source path. The method shortens the path over which the data resource is returned: the resource need not travel back along the original request path, which avoids occupying excessive ingress and egress bandwidth and reduces traffic consumption.

Description

Data resource back-source method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for returning data resources to a source.
Background
In a Content Delivery Network (CDN) system, multiple cache servers store data resources, and each server caches resources according to their local popularity. When any server in the CDN system receives a resource request but does not store the corresponding data resource, the data resource must be fetched back to source, that is, acquired from another, upstream server.
In the prior art, a data resource back-to-source method mainly sends a back-to-source request to upstream servers in the CDN system via a source station or a central node. In the process of fetching a data resource back to source, upstream servers must be requested layer by layer until an upstream server that stores the data resource is found. When the data resource is then returned after this multi-layer back-to-source request, it must travel back layer by layer along the original request path, and each layer of upstream server occupies a certain amount of ingress and egress bandwidth when relaying it; the more layers there are, the more bandwidth is occupied and the more traffic is consumed.
Disclosure of Invention
In view of this, the present invention provides a data resource back-to-source method by which the path for returning a data resource can be optimized, so that the resource need not travel back along the original request path, thereby avoiding the occupation of excessive ingress and egress bandwidth and reducing traffic consumption.
The invention further provides a data resource back-to-source device for ensuring the implementation and application of the method in practice.
A data resource back-to-source method, comprising:
when detecting that a target server receives a resource request, judging whether the target server stores a data resource to be acquired corresponding to the resource request;
if the data resource to be acquired is not stored in the target server, forwarding the resource request to each source returning server which has an association relation with the target server, and acquiring source returning information returned by each source returning server which successfully responds to the resource request;
inputting the acquired source returning information fed back by each source returning server into a machine learning model trained in advance, triggering the machine learning model to calculate the source returning speed of each source returning server when returning the data resource to be acquired according to each source returning information;
determining a source returning server with the maximum source returning speed as a target source returning server, determining a shortest source returning path between the target source returning server and the target server according to source returning information returned by the target source returning server, and triggering the target source returning server to return the data resource to be acquired to the target server through the shortest source returning path.
Optionally, in the method, the forwarding the resource request to each back-to-source server having an association relationship with the target server, and acquiring back-to-source information returned by each back-to-source server that successfully responds to the resource request includes:
determining each source returning path of the target server, wherein a plurality of source returning servers are sequentially arranged on each source returning path;
in each source returning path, judging whether the data resource to be acquired is stored in the current source returning server which has received the resource request, if the data resource to be acquired is not stored in the current source returning server, forwarding the resource request to the next source returning server of the current source returning server until the source returning server which successfully responds to the resource request is determined in the source returning path;
when determining that the first source returning server successfully responds to the resource request in each source returning path, starting a preset timer to time, and stopping the forwarding process of the resource request in each source returning path when the time reaches a preset time threshold;
and acquiring the return source information returned by each return source server which has successfully responded to the resource request at present.
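The per-path forwarding with a timer described above can be sketched in Python as follows. This is a hedged illustration, not the patent's implementation: the sequential probing, the `has_resource` callback, the server names, and the `clock` parameter are all assumptions made for demonstration.

```python
import time

def probe_paths(paths, has_resource, time_threshold=0.0, clock=time.monotonic):
    """Collect, per back-to-source path, the first server able to serve the resource.

    paths        -- list of back-to-source paths, each an ordered list of servers
    has_resource -- callable: has_resource(server) -> bool (the "stores the
                    data resource to be acquired" check)
    """
    responders = []
    deadline = None  # armed when the first path responds successfully
    for path in paths:
        if deadline is not None and clock() > deadline:
            break  # preset timer expired: stop forwarding on the remaining paths
        for server in path:  # forward the request layer by layer along this path
            if has_resource(server):
                responders.append(server)
                if deadline is None:
                    # first successful response: start the preset timer
                    deadline = clock() + time_threshold
                break  # this path is resolved; move on to the next path
    return responders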
Optionally, the method for determining each back-source path of the target server includes:
acquiring each historical source returning information of each upstream server adjacent to the target server, and determining the average historical source returning information of each upstream server according to each historical source returning information of each upstream server;
inputting the average historical back-to-source information of each upstream server into the machine learning model, and triggering the machine learning model to output the predicted back-to-source speed of each upstream server;
and determining each upstream server whose predicted back-to-source speed is greater than a preset speed threshold as a first back-to-source server of the target server, and determining each back-to-source path of the target server based on each first back-to-source server.
Optionally, in the above method, the training process of the machine learning model includes:
when the server time of the target server reaches the end-of-day processing time, acquiring each current-day source returning information prestored in the target server and a source returning speed corresponding to each current-day source returning information;
sequentially inputting the current-day return source information into the machine learning model so as to enable the machine learning model to carry out model training until model parameters of the machine learning model meet preset training conditions;
when each piece of the current source returning information is input into the machine learning model, obtaining a training result of the current source returning information input into the machine learning model; calling a preset loss function, and calculating the training result and the return source speed corresponding to the current day return source information input into the machine learning model to obtain a loss function value; judging whether the model parameters of the machine learning model meet the training conditions or not according to the loss function values; if not, adjusting the model parameters of the machine learning model according to the loss function values; and if so, obtaining the machine learning model which is trained.
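The end-of-day training loop above (feed each record in, compute a loss against the observed back-to-source speed, adjust the parameters until the training condition is met) can be sketched as follows. The linear model, squared-error loss, learning rate, and stopping rule are illustrative assumptions; the patent does not fix a model family or loss function.

```python
def train_speed_model(samples, lr=0.01, loss_threshold=1e-4, max_epochs=1000):
    """samples: list of (feature_vector, observed_back_to_source_speed)."""
    n = len(samples[0][0])
    w = [0.0] * n          # model parameters
    b = 0.0
    for _ in range(max_epochs):
        worst = 0.0
        for x, speed in samples:              # feed each day's record in turn
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - speed
            worst = max(worst, err * err)     # squared-error loss value
            # training condition not yet met: adjust the model parameters
            for i, xi in enumerate(x):
                w[i] -= lr * err * xi
            b -= lr * err
        if worst <= loss_threshold:           # training condition met: stop
            break
    return w, b
```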
Optionally, the above method, where determining a shortest return-to-source path between the target return-to-source server and the target server according to the return-to-source information returned by the target return-to-source server includes:
acquiring an IP address of the target source returning server contained in source returning information returned by the target source returning server, and determining a target source returning path corresponding to the target source returning server;
sending a connection request corresponding to the IP address to each node in the target source returning path, so that each node is connected with the target source returning server after successfully responding to the connection request; each node is the target server and each source returning server arranged in the target source returning path respectively;
and determining a connection path between each node successfully responding to the connection request and the target source returning server, and determining the shortest source returning path between the source returning server and the target server according to each connection path.
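The connection-probe step above can be sketched as follows: each node on the target back-to-source path (including the target server itself) attempts to connect to the target back-to-source server's IP, and the successful node nearest the target server yields the shortest back-to-source path. The `can_connect` callable stands in for the real connection request and is an assumption for illustration.

```python
def shortest_back_to_source_path(target_server, path_nodes, source_ip, can_connect):
    """path_nodes: servers on the target path, ordered from the target server
    outward toward the back-to-source server."""
    nodes = [target_server] + list(path_nodes)
    for hop, node in enumerate(nodes):       # probe the nearest node first
        if can_connect(node, source_ip):     # node successfully responds
            # path: target_server -> ... -> node -> back-to-source server
            return nodes[: hop + 1] + [source_ip]
    return None  # no node could establish a connection
```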
Optionally, the triggering the target back-to-source server to return the data resource to the target server through the shortest back-to-source path includes:
acquiring the resource size of the data resource to be acquired, which is contained in the return source information returned by the target return source server;
and determining a source returning mode of the target source returning server for returning the data resource to be acquired according to the shortest source returning path, the resource size and the source returning speed of the target source returning server, and triggering the target source returning server to return the data resource based on the source returning mode and the shortest source returning path.
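The patent leaves the exact return-mode rule open, so the sketch below is purely illustrative: the chunked-versus-whole-file decision and the 64 MiB and hop-count thresholds are assumptions, shown only to make the "mode from path, size, and speed" idea concrete.

```python
def choose_return_mode(resource_size, back_speed, path_hops,
                       chunk_threshold=64 * 1024 * 1024):
    # estimated transfer time from resource size and back-to-source speed
    expected_seconds = resource_size / back_speed if back_speed else float("inf")
    if resource_size >= chunk_threshold or path_hops > 1:
        # large resources or multi-hop paths: return in chunks (assumed rule)
        return {"mode": "chunked", "chunk_size": 4 * 1024 * 1024,
                "eta_s": expected_seconds}
    return {"mode": "whole-file", "eta_s": expected_seconds}
```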
A data resource back-source device, comprising:
the system comprises a judging unit, a processing unit and a processing unit, wherein the judging unit is used for judging whether a target server stores data resources to be acquired corresponding to a resource request when detecting that the target server receives the resource request;
an obtaining unit, configured to forward the resource request to each source returning server having an association relationship with the target server if the data resource to be obtained is not stored in the target server, and obtain source returning information returned by each source returning server that successfully responds to the resource request;
the triggering unit is used for inputting the acquired source returning information fed back by each source returning server into a machine learning model which is trained in advance, triggering the machine learning model to calculate the source returning speed of each source returning server when returning the data resource to be acquired according to each source returning information;
and the source returning unit is used for determining the source returning server with the maximum source returning speed as a target source returning server, determining the shortest source returning path between the target source returning server and the target server according to source returning information returned by the target source returning server, and triggering the target source returning server to return the data resource to be acquired to the target server through the shortest source returning path.
The above apparatus, optionally, the obtaining unit includes:
the first determining subunit is configured to determine each source returning path of the target server, where a plurality of source returning servers are sequentially arranged on each source returning path;
a determining subunit, configured to determine, in each source returning path, whether the data resource to be acquired is stored in a current source returning server that has received the resource request, and if the data resource to be acquired is not stored in the current source returning server, forward the resource request to a next source returning server of the current source returning server until a source returning server that successfully responds to the resource request is determined in the source returning path;
a timing subunit, configured to start a preset timer to perform timing when a first source returning server that successfully responds to the resource request is determined in each source returning path, and stop a forwarding process of the resource request in each source returning path when the timing reaches a preset time threshold;
and the first acquisition subunit is used for acquiring the source returning information returned by each source returning server which has successfully responded to the resource request currently.
The above apparatus, optionally, the first determining subunit includes:
a second obtaining subunit, configured to obtain each piece of historical source returning information of each upstream server adjacent to the target server, and determine average historical source returning information of each upstream server according to each piece of historical source returning information of each upstream server;
the triggering subunit is used for inputting the average historical back-to-source information of each upstream server into the machine learning model and triggering the machine learning model to output the predicted back-to-source speed of each upstream server;
and the second determining subunit is used for determining each upstream server whose predicted back-to-source speed is greater than the preset speed threshold as a first back-to-source server of the target server, and determining each back-to-source path of the target server based on each first back-to-source server.
The above apparatus, optionally, further comprises:
the training unit is used for acquiring each current-day source returning information prestored in the target server and the source returning speed corresponding to each current-day source returning information when the server time of the target server reaches the final daily processing time; sequentially inputting the current-day return source information into the machine learning model so as to enable the machine learning model to carry out model training until model parameters of the machine learning model meet preset training conditions; when each piece of the current source returning information is input into the machine learning model, obtaining a training result of the current source returning information input into the machine learning model; calling a preset loss function, and calculating the training result and the return source speed corresponding to the current day return source information input into the machine learning model to obtain a loss function value; judging whether the model parameters of the machine learning model meet the training conditions or not according to the loss function values; if not, adjusting the model parameters of the machine learning model according to the loss function values; and if so, obtaining the machine learning model which is trained.
A storage medium, the storage medium comprising stored instructions, wherein when the instructions are executed, the apparatus on which the storage medium is located is controlled to execute the above data resource source returning method.
An electronic device, comprising a memory, one or more processors, and one or more instructions, wherein the one or more instructions are stored in the memory and configured to be executed by the one or more processors to perform the above data resource back-to-source method.
Compared with the prior art, the invention has the following advantages:
the invention provides a data resource back-to-source method, which comprises the following steps: when detecting that a target server receives a resource request, judging whether the target server stores a data resource to be acquired corresponding to the resource request; and when the target server does not store the data resource to be acquired, forwarding the resource request to a source returning server which has an association relation with the target server. When return source information returned by a return source server successfully responding to a resource request is obtained, inputting each return source information into a preset machine learning model to determine the return source speed of each return source server, and determining the return source server with the maximum return source speed as a target return source server; and determining the shortest source returning path according to the source returning information of the target source returning server, and triggering the target source returning server to return the data resource according to the shortest source returning path. By the method, the source return path when the data resource returns to the source is shortened, the data resource does not need to be returned according to the original request path, excessive access bandwidth is avoided, and traffic consumption is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the provided drawings without creative efforts.
Fig. 1 is a flowchart of a method for returning a data resource to a source according to an embodiment of the present invention;
fig. 2 is a flowchart of another method of a data resource source returning method according to an embodiment of the present invention;
fig. 3 is a diagram illustrating an exemplary method of a data resource source returning method according to an embodiment of the present invention;
fig. 4 is a flowchart of another method of a data resource source returning method according to an embodiment of the present invention;
fig. 5 is a block diagram of an apparatus for returning data resources to a source device according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In this application, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions, and the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The invention is operational with numerous general purpose or special purpose computing device environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multi-processor apparatus, distributed computing environments that include any of the above devices or equipment, and the like.
An embodiment of the present invention provides a data resource source returning method, which may be applied to multiple system platforms, where an execution subject of the method may be a computer terminal or a processor of various mobile devices, and a flowchart of the method is shown in fig. 1, and specifically includes:
s101: when detecting that a target server receives a resource request, judging whether the target server stores a data resource to be acquired corresponding to the resource request;
in the embodiment of the present invention, a content distribution network CDN system includes a plurality of servers, where each server is equivalent to a storage node of a data resource and is used to store each data resource in the CDN system. The data resources stored by each server in the CDN system may be the same or different. When the requester requests to acquire data resources, the requester sends a resource request to any server in the CDN system, and optionally, when the requester sends a resource request, the requester may send a resource request to its closest server according to a principle of proximity. Therefore, the server that receives the resource request sent by the requester is determined as the target server. When the processor detects that the target server receives the resource request, whether the target server stores the data resource to be acquired corresponding to the resource request is judged.
S102: if the data resource to be acquired is not stored in the target server, forwarding the resource request to each source returning server which has an association relation with the target server, and acquiring source returning information returned by each source returning server which successfully responds to the resource request;
in the embodiment of the present invention, when the data resource to be acquired is not stored in the target server, the data resource to be acquired needs to be returned to the source. And when the data resource to be acquired is returned to the source, forwarding the resource request to each source returning server of which the target server has an association relation. The source returning server having an association relationship with the target server may be each server that needs to determine whether to store the data resource to be acquired in the process of returning the data resource to the source. The servers with the association relation can have a certain connection relation, and the servers can also realize mutual communication through the resource request forwarded by the target server.
Specifically, after receiving the resource request, a source returning server determines whether it stores the data resource to be acquired corresponding to the resource request; if the data resource is stored, it successfully responds to the resource request. When a source returning server fails, it may not respond to the resource request at all, so that its response to the resource request fails; alternatively, when the data resource corresponding to the resource request is not stored in the source returning server, a response failure message indicating that the data resource to be acquired is not stored is returned to the target server.
S103: inputting the acquired source returning information fed back by each source returning server into a machine learning model trained in advance, triggering the machine learning model to calculate the source returning speed of each source returning server when returning the data resource to be acquired according to each source returning information;
in the embodiment of the invention, if the data resource to be acquired is stored in the source returning server, the resource request is responded, and source returning information corresponding to the data resource to be acquired is returned to the target server. The source returning information includes a plurality of source returning parameters, and specifically may include parameters such as a response speed, a response time, a resource size for storing the data resource to be acquired, an IP address of the source returning server, and an IP address of the target server when the source returning server responds to the resource request. And inputting the source returning information returned by each source returning server into the machine learning model, and triggering the machine learning model to output the source returning speed of each source returning server for returning the data resource to be acquired. The machine learning model calculates the back source speed required by the back source information when returning the data resource to be acquired according to parameters such as response speed, response time, resource size and the like contained in the back source information.
It should be noted that, in the embodiment of the present invention, the greater the source returning speed of the source returning server is, the shorter the time it takes for the source returning server to return the data resource to the target server is.
S104: determining a source returning server with the maximum source returning speed as a target source returning server, determining a shortest source returning path between the target source returning server and the target server according to source returning information returned by the target source returning server, and triggering the target source returning server to return the data resource to be acquired to the target server through the shortest source returning path.
In the embodiment of the invention, when the source returning speed of each source returning server output by the machine learning model is received, the maximum source returning speed is selected from these source returning speeds. The maximum source returning speed indicates that the corresponding source returning server is the fastest when returning the data resource to be acquired, so the source returning server with the maximum source returning speed is determined as the target source returning server. After the target source returning server is determined, in order to shorten the time it takes to return the data resource, the shortest source returning path between the target source returning server and the target server is determined according to the source returning information returned by the target source returning server. For example, suppose the resource request from target server A is forwarded through server B, server C and server D before server E is finally determined to be the source returning server storing the data resource to be acquired; if the data resource had to follow the original request path, server E would return it through server D, server C and server B. However, if a shortest source returning path exists between target server A and target source returning server E, server E can send the data resource directly to target server A.
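A worked version of this example: the request travelled A to B to C to D to E, but once server E is chosen as the target source returning server, the resource is returned over the shortest path (here, directly E to A) instead of retracing D, C and B. The server names mirror the example only, and `direct_link_exists` is an assumed stand-in for the shortest-path check.

```python
def return_path(request_path, direct_link_exists):
    """request_path: servers in request order, target server first."""
    target, source = request_path[0], request_path[-1]
    if direct_link_exists(source, target):
        return [source, target]               # shortest back-to-source path
    return list(reversed(request_path))       # fall back to the original path
```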
Specifically, after the target return-to-source server returns the data resource to the target server, the target server feeds back the data resource to the requester sending the resource request.
In the data resource source returning method provided by the embodiment of the invention, when the processor detects that any server in the CDN system receives the resource request, the server that receives the resource request is the target server. And judging whether the data resource to be acquired corresponding to the resource request is stored in the target server. And if the target server does not store the data resource to be acquired, forwarding the resource request to each source returning server. Specifically, the resource request may be forwarded to the source server layer by layer in the CDN system by using the target server as a center and spreading outward. After receiving the resource request, the source returning server detects whether the data resource to be acquired corresponding to the resource request is stored in the source returning server, if so, the source returning server responds to the resource request and generates source returning information corresponding to the resource request and returns the source returning information to the target server. And inputting the source returning information into the machine learning model so that the machine learning model outputs the source returning speed required when each source returning server returns the data resource to be acquired. In order to acquire data resources as soon as possible, the source returning server with the highest source returning speed is determined as a target source returning server, and the shortest source returning path between the target source returning server and the target server is determined according to the source returning information of the target source returning server, so that the target source returning server returns the data resources through the shortest source returning path.
It should be noted that the storage space of each server in the CDN system is limited, and the resource files stored on each server need to be cleaned regularly to ensure that a certain margin remains in the server's storage space. Therefore, when a resource request is received, the data resource corresponding to the resource request may already have been deleted from the target server.
Optionally, when a source returning server fails to respond to the resource request, it may return a response failure message to the target server, where the response failure message includes information such as the failure reason, the response failure time, and the address of the source returning server.
By applying the method provided by the embodiment of the invention, the source returning path used when the data resource is returned is shortened; the data resource does not need to be returned according to the original request path, which avoids occupying excessive ingress and egress bandwidth and reduces traffic consumption.
In the method provided in the embodiment of the present invention, based on the content of step S102, a process of forwarding the resource request to each return-to-source server having an association relationship with the target server and acquiring return-to-source information returned by each return-to-source server that successfully responds to the resource request is shown in fig. 2, and specifically may include:
S201: determining each source returning path of the target server, wherein a plurality of source returning servers are sequentially arranged on each source returning path;
in the embodiment of the present invention, if the target server does not store the data resource to be acquired, each source returning path of the target server needs to be determined. A plurality of source returning servers are sequentially arranged in each source returning path, and each source returning server on each source returning path is associated with the target server.
S202: in each source returning path, judging whether the data resource to be acquired is stored in the current source returning server which has received the resource request, if the data resource to be acquired is not stored in the current source returning server, forwarding the resource request to the next source returning server of the current source returning server until the source returning server which successfully responds to the resource request is determined in the source returning path;
in the embodiment of the present invention, after each source returning path of the target server is determined, the resource request is forwarded in turn to the source returning servers on each source returning path. In the process of forwarding the resource request, it is judged whether the current source returning server that has received the resource request stores the data resource to be acquired. If the current source returning server does not store the data resource to be acquired, it does not respond to the resource request, and forwards the resource request to the next source returning server on the source returning path where it is located. After the resource request is forwarded to the next source returning server, the same judgment is performed, until the source returning server on that path that successfully responds to the resource request, that is, the server storing the data resource to be acquired, is determined.
S203: when a source returning server which successfully responds to the resource request is determined in each source returning path, starting a preset timer to time, and stopping the forwarding process of the resource request in each source returning path when the time reaches a preset time threshold;
in the embodiment of the present invention, among all the source returning paths of the target server, when the first source returning server that successfully responds to the resource request is determined, that is, when a source returning server that successfully responds to the resource request first appears among the source returning servers of the source returning paths, a preset timer is started. When the timer reaches the preset time threshold, the forwarding process of the resource request in each source returning path is stopped.
S204: and acquiring the return source information returned by each return source server which has successfully responded to the resource request at present.
In the embodiment of the invention, the source returning information returned by each source returning server which has successfully responded the resource request currently within the preset time threshold is obtained. If only one source returning server successfully responds to the resource request within the preset time threshold, source returning information returned by the source returning server is obtained; and if a plurality of back-source servers successfully respond to the resource request, obtaining back-source information returned by each back-source server.
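The timer-bounded collection of steps S203 and S204 might be abstracted as follows, under the assumption that each successful responder is known by a response timestamp; only the source returning servers that respond within the preset time threshold of the first successful response are kept.

```python
def collect_responders(responses, time_threshold):
    """Keep the source returning servers that responded within
    `time_threshold` of the first successful response (S203/S204)."""
    if not responses:
        return []
    first = min(t for _, t in responses)
    return [srv for srv, t in sorted(responses, key=lambda r: r[1])
            if t - first <= time_threshold]

# Hypothetical response times (seconds since the request was forwarded).
responses = [("C1", 1.0), ("C5", 1.4), ("D2", 3.2)]
within_window = collect_responders(responses, time_threshold=1.0)
```

Here the timer is modeled as a cut-off relative to the first response rather than real wall-clock timing, which keeps the sketch deterministic.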
In the data resource source returning method provided by the embodiment of the invention, each source returning path of the target server is determined, and the resource request is forwarded in turn to the source returning servers in each source returning path. The process of sequentially forwarding the resource request comprises the following steps: in the same source returning path, after the resource request is forwarded to the current source returning server, it is judged whether the current source returning server stores the data resource to be acquired; if the current source returning server does not store the data resource to be acquired, the resource request is forwarded through the current source returning server to the next source returning server; if the current source returning server stores the data resource to be acquired, the resource request need not be forwarded to the next source returning server. When a source returning server in any source returning path responds to the resource request for the first time, a timer is started, and when the timer reaches the preset time threshold, the forwarding process in each source returning path is stopped.
It should be noted that, in the embodiment of the present invention, taking each server in the CDN system as an example, each source returning path of the target server may be as shown in fig. 3: node A is the target server; node B1, node B2, and node B3 are the next source returning servers of node A; node C1 and node C2 are the next source returning servers of node B1; node C3, node C4, node C5, and node C6 are the next source returning servers of node B2; node D1 and node D2 are the next source returning servers of node C2; node D3 and node D4 are the next source returning servers of node C5. With the target server as the center, the resource request is forwarded outward in order to obtain the data resource to be acquired. The source returning paths having an association relationship with node A comprise: A-B1-C1, A-B1-C2-D1, A-B1-C2-D2, A-B2-C3, A-B2-C4, A-B2-C5-D3, A-B2-C5-D4, A-B2-C6, and A-B3. Each node is a server, and each server may belong to the CDN system.
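The path enumeration of fig. 3 can be reproduced with a depth-first traversal; the adjacency map below is an assumed encoding of the figure.

```python
# The fan-out topology of fig. 3, encoded as an adjacency map.
TOPOLOGY = {
    "A":  ["B1", "B2", "B3"],
    "B1": ["C1", "C2"],
    "B2": ["C3", "C4", "C5", "C6"],
    "C2": ["D1", "D2"],
    "C5": ["D3", "D4"],
}

def back_source_paths(node, topology):
    """Enumerate every source returning path from `node` outward (DFS)."""
    children = topology.get(node, [])
    if not children:
        return [[node]]
    paths = []
    for child in children:
        for tail in back_source_paths(child, topology):
            paths.append([node] + tail)
    return paths

paths = ["-".join(p) for p in back_source_paths("A", TOPOLOGY)]
```

The traversal yields exactly the nine paths listed in the description, in the same order.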
By applying the method provided by the embodiment of the invention, in order to ensure that the data resource to be acquired corresponding to the resource request can be acquired, the resource request is forwarded through a plurality of source returning paths so as to determine the source returning server storing the data resource to be acquired.
In the data resource source returning method provided in the embodiment of the present invention, based on the content in step S201, in order to obtain the data resource to be obtained corresponding to the resource request, each source returning path of the target server needs to be determined, which may specifically include:
acquiring each historical source returning information of each upstream server adjacent to the target server, and determining the average historical source returning information of each upstream server according to each historical source returning information of each upstream server;
inputting the average historical back-source information of each upstream server into the machine learning model, and triggering the machine learning model to output the predicted back-source speed of each upstream server;
and determining each upstream server with a predicted source returning speed greater than a preset speed threshold as a first source returning server of the target server, and determining each source returning path of the target server based on each first source returning server.
In the method provided by the embodiment of the invention, before each source returning path is determined, in order to ensure that each source returning server responds as quickly as possible in the process of forwarding the resource request, servers with high performance need to be preferentially selected as source returning servers from among the upstream servers of the target server. The upstream servers are the servers in the CDN system within a certain position range of the target server, and a plurality of pieces of historical source returning information are obtained for each upstream server. The historical source returning information is the source returning information returned to the target server by the upstream server in historical data resource source returning processes, and the average historical source returning information of each upstream server is determined from its plurality of pieces of historical source returning information. Specifically, the average value of each parameter in the pieces of historical source returning information belonging to the same upstream server is obtained, giving the average historical source returning information. The average historical source returning information of each upstream server is input into a machine learning model, and the machine learning model calculates from it the predicted source returning speed at which each upstream server would return a data resource to the target server. Each upstream server whose predicted source returning speed is greater than the preset speed threshold is determined as a first source returning server of the target server, and each source returning path of the target server is determined according to the first source returning servers.
In the embodiment of the present invention, taking each server in fig. 3 as an example, node A is the target server, and the servers adjacent to node A are node B1, node B2, and node B3; thus node B1, node B2, and node B3 are the upstream servers of node A. After the average historical source returning information of node B1, node B2, and node B3 is input into the machine learning model to obtain the predicted source returning speeds of node B1, node B2, and node B3, if the predicted source returning speed of node B2 is not greater than the preset speed threshold, the first source returning servers of node A are node B1 and node B3, and the source returning paths of node A respectively comprise: A-B1-C1, A-B1-C2-D1, A-B1-C2-D2, and A-B3.
It should be noted that, in order to prevent the resource request from being forwarded to an upstream server that has failed or has low performance during the data resource source returning process, the data-returning performance of each server needs to be judged according to the average historical source returning information of each upstream server. The higher the predicted source returning speed, the faster the upstream server can return data resources; if an upstream server has failed or its performance has degraded, its predicted source returning speed is low. In order to ensure that the upstream servers can respond to the resource request and return the data resource quickly, the upstream servers whose predicted source returning speed is greater than the speed threshold are selected as source returning servers.
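A sketch of selecting the first source returning servers, with the machine learning model stubbed out and a single hypothetical `bytes_per_s` parameter standing in for the averaged historical source returning information (the numbers are chosen so that, as in the fig. 3 example, node B2 falls below the threshold):

```python
def average_info(history):
    """Average each numeric parameter over a server's historical
    source returning records."""
    keys = history[0].keys()
    return {k: sum(h[k] for h in history) / len(history) for k in keys}

def predicted_speed(avg_info):
    # Stub for the machine learning model's predicted source returning speed.
    return avg_info["bytes_per_s"]

def first_back_source_servers(upstream_history, speed_threshold):
    """Keep upstream servers whose predicted speed exceeds the threshold."""
    return [srv for srv, hist in upstream_history.items()
            if predicted_speed(average_info(hist)) > speed_threshold]

# Hypothetical history for the upstream servers of node A in fig. 3.
upstream_history = {
    "B1": [{"bytes_per_s": 90.0}, {"bytes_per_s": 110.0}],   # avg 100
    "B2": [{"bytes_per_s": 30.0}, {"bytes_per_s": 50.0}],    # avg 40
    "B3": [{"bytes_per_s": 120.0}, {"bytes_per_s": 140.0}],  # avg 130
}
selected = first_back_source_servers(upstream_history, speed_threshold=60.0)
```

With these made-up numbers, B2's average falls below the threshold and only B1 and B3 become first source returning servers.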
Further, in the embodiment of the present invention, a next source returning server of the current source returning server in the source returning path may be determined by an upstream server adjacent to the current source returning server and the machine learning model, and a determination process of the next source returning server may be consistent with a process of determining each first source returning server by the target server, which will not be described again here.
Optionally, in the process of determining the predicted source returning speed of each upstream server, if an upstream server has failed or its performance is extremely low, the processor may feed back a prompt message indicating the server failure or performance degradation to a preset information receiving end.
By applying the method provided by the embodiment of the invention, the predicted source returning speed of each upstream server adjacent to the target server is determined, and the optimal upstream server is selected as the first source returning server of the target server to determine each source returning path, so that the source returning speed of the data resources is accelerated in the source returning process of the data resources, and the resource acquisition time is saved.
In the method provided by the embodiment of the invention, after the source returning information returned by each source returning server is obtained, the source returning speed of each source returning server for returning the data resource to be obtained needs to be determined through a machine learning model. The machine learning model is trained in advance, and therefore, the training process of the machine learning model may specifically include:
when the server time of the target server reaches the end-of-day processing time, acquiring each current-day source returning information prestored in the target server and a source returning speed corresponding to each current-day source returning information;
sequentially inputting the current-day return source information into the machine learning model so as to enable the machine learning model to carry out model training until model parameters of the machine learning model meet preset training conditions;
when each piece of the current source returning information is input into the machine learning model, obtaining a training result of the current source returning information input into the machine learning model; calling a preset loss function, and calculating the training result and the return source speed corresponding to the current day return source information input into the machine learning model to obtain a loss function value; judging whether the model parameters of the machine learning model meet the training conditions or not according to the loss function values; if not, adjusting the model parameters of the machine learning model according to the loss function values; and if so, obtaining the machine learning model which is trained.
In the method provided by the embodiment of the invention, in the CDN system, a plurality of servers may correspond to one machine learning model, or one server may correspond to one machine learning model. After the source returning servers return source returning information to the target server, the machine learning model corresponding to the target server is called to determine the source returning speed from the source returning information. In particular, the machine learning model may be located in the target server. When the server time of the target server reaches the preset end-of-day processing time, training of the machine learning model begins. Each piece of current-day source returning information pre-stored in the target server, and the source returning speed corresponding to each piece, are acquired. The current-day source returning information is input into the machine learning model, the machine learning model is triggered to train on it, and a corresponding training result is output. A loss function value is calculated from the training result corresponding to each piece of current-day source returning information and the source returning speed corresponding to it, and whether the machine learning model meets the training condition is determined according to the loss function value. For example, suppose the training condition is that the loss function value reaches 0.95. If the loss function value calculated from the current training result and the source returning speed is 0.94, the training condition is not met; the model parameters in the machine learning model are adjusted according to the loss function value, and the machine learning model is called again for training, until the current loss function value is not less than 0.95, at which point the trained machine learning model is obtained.
The end-of-day processing time may be the moment at which the current day switches to the next day. The machine learning model may be an autonomous learner, which can calculate the weight corresponding to each parameter from the source returning parameters in the source returning information and estimate the source returning speed at which a source returning server returns data resources. The more current-day source returning information is used to train the machine learning model, the more accurate its calculation of the source returning speed.
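The adjust-until-satisfied training loop can be sketched generically. This is not the patent's model: it fits a one-parameter linear predictor of source returning speed by gradient descent, and, where the example above treats the loss function value as a score to be raised to 0.95, this sketch uses the more common convention of driving a mean squared error below a target.

```python
def train(samples, lr=0.01, loss_target=0.001, max_iters=10000):
    """Fit speed ~ w * feature by gradient descent, stopping once the
    loss function value satisfies the training condition (here: mean
    squared error below `loss_target`)."""
    w = 0.0
    loss = float("inf")
    for _ in range(max_iters):
        # Training result for each of the day's source returning records.
        preds = [(w * x, y) for x, y in samples]
        loss = sum((p - y) ** 2 for p, y in preds) / len(preds)
        if loss < loss_target:          # training condition met
            break
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad                  # adjust model parameters
    return w, loss

# Hypothetical records: (feature from source returning info, observed speed).
w, final_loss = train([(1.0, 2.0), (2.0, 4.0)])
```

On this toy data (speed is exactly twice the feature) the loop converges to w close to 2 within a few hundred iterations.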
By applying the method provided by the embodiment of the invention, the machine learning model is trained, the accuracy of the model in calculating the source returning speed is improved, and the optimal source returning server is selected to return the data resources in the source returning process of the data resources.
In the method provided in the embodiment of the present invention, based on the content in step S104, after the target source returning server is determined, a multi-layer resource request forwarding process may be performed between the target source returning server and the target server. A process of determining the shortest return-to-source path between the target return-to-source server and the target server according to the return-to-source information returned by the target return-to-source server is shown in fig. 4, and may specifically include:
S401: acquiring an IP address of the target source returning server contained in source returning information returned by the target source returning server, and determining a target source returning path corresponding to the target source returning server;
in the embodiment of the invention, after successfully responding to the resource request, the source returning server generates source returning information and returns it to the target server. After the target source returning server is determined among the source returning servers, the IP address in the source returning information of the target source returning server is acquired; this IP address is the IP address of the target source returning server. At the same time, the target source returning path between the target source returning server and the target server is determined. For example, when server A is the target server and server A obtains the source returning information of server E after passing through server B, server C, and server D, with server E being the target source returning server, the target source returning path between the target source returning server and the target server is: A-B-C-D-E.
S402: sending a connection request corresponding to the IP address to each node in the target source returning path, so that each node is connected with the target source returning server after successfully responding to the connection request; each node is the target server and each return-source server arranged in the target return-source path respectively;
in the embodiment of the invention, after the target source returning path is determined, a connection request for connecting to the IP address is sent to each node contained in the target source returning path. Each node is a corresponding server in the target source returning path; for example, server A, server B, server C, and server D in step S401 above are the corresponding nodes in the target source returning path A-B-C-D-E. After each node receives the connection request, it determines according to the IP address in the connection request whether it can connect to the target source returning server. If it can, the node connects to the target source returning server.
Specifically, if the server a and the server C can be connected to the server E in the target source returning path a-B-C-D-E, the source returning path between the server a and the server E includes: a target return path A-B-C-D-E, a first return path A-E, and a second return path A-B-C-E.
S403: and determining a connection path between each node that successfully responds to the connection request and the target source returning server, and determining the shortest source returning path between the target source returning server and the target server according to each connection path.
In this embodiment of the present invention, the connection path between the target back-to-source server and each connected node is determined, such as the target back-to-source path a-B-C-D-E, the first back-to-source path a-E, and the second back-to-source path a-B-C-E in the above embodiment of step S402. And determining the shortest back-source path in each back-source path, wherein the first back-source path A-E is the shortest back-source path.
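Steps S401 to S403 can be sketched as follows: given the target source returning path and a predicate reporting which nodes successfully responded to the connection request, every candidate connection path is formed and the shortest one is chosen. The example reuses the A-B-C-D-E path above, where server A and server C can connect directly to server E.

```python
def shortest_back_source_path(target_path, can_connect):
    """Given the target source returning path (S401) and a predicate that
    says whether a node successfully responded to the connection request
    (S402), return the shortest path to the target server (S403)."""
    end = target_path[-1]          # the target source returning server
    candidates = [target_path]
    for i, node in enumerate(target_path[:-1]):
        if can_connect(node):
            # Prefix of the target path up to this node, then a direct hop.
            candidates.append(target_path[:i + 1] + [end])
    return min(candidates, key=len)

# Example from the description: A and C can connect directly to E.
path = shortest_back_source_path(
    ["A", "B", "C", "D", "E"],
    can_connect=lambda n: n in {"A", "C"},
)
```

The candidates here are A-B-C-D-E, A-E, and A-B-C-E, so the direct path A-E is selected.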
In the data resource source returning method provided by the embodiment of the invention, a connection request for the IP address is sent to each node in the target source returning path, the connection path between each node that successfully responds to the connection request and the target source returning server is determined, and the shortest source returning path is selected from the connection paths. This optimizes the path over which the target source returning server returns the data resource: the data resource does not need to be returned according to the original request path, which avoids occupying excessive ingress and egress bandwidth and reduces traffic consumption.
In the method provided in the embodiment of the present invention, after determining the shortest source return path when the target source return server returns the data resource, the target source return server needs to be triggered to return the data resource to the target server through the shortest source return path, which may specifically include:
acquiring the resource size of the data resource to be acquired, which is contained in the return source information returned by the target return source server;
and determining a source returning mode of the target source returning server for returning the data resource to be acquired according to the shortest source returning path, the resource size and the source returning speed of the target source returning server, and triggering the target source returning server to return the data resource based on the source returning mode and the shortest source returning path.
In the data resource source returning method provided by the embodiment of the invention, when the data resource is returned, it may be returned in a preset source returning mode, wherein the source returning modes comprise a single-path source returning mode and a multi-path source returning mode. Single-path source returning means that the data resource is returned to the target server according to preset single-path configuration parameters; multi-path source returning means that the data resource is divided into a plurality of segments according to preset multi-path configuration parameters and returned to the target server in sequence. When the resource size of the data resource to be acquired is small, the source returning path is short, and the source returning speed is high, the source returning mode in which the target source returning server returns the data resource to be acquired can be determined to be the single-path source returning mode; when the resource size of the data resource to be acquired is large, the shortest source returning path is long relative to the shortest distance between the target source returning server and the target server, and the source returning speed is high, the source returning mode can be determined to be the multi-path source returning mode. After the source returning mode in which the target source returning server returns the data resource to be acquired is determined, the target source returning server is triggered to return the data resource according to the source returning mode and the shortest source returning path.
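A hypothetical sketch of the mode decision. The rule (an estimated transfer time computed from resource size and source returning speed, plus a hop count on the shortest path) and the thresholds `time_budget` and `hop_threshold` are illustrative assumptions, not values from the patent.

```python
def choose_back_source_mode(resource_size, path_hops, speed,
                            time_budget=5.0, hop_threshold=3):
    """Hypothetical rule: a small resource over a short path is returned
    in one piece (single-path); otherwise it is split into segments and
    returned in sequence (multi-path)."""
    estimated_time = resource_size / speed
    if estimated_time <= time_budget and path_hops <= hop_threshold:
        return "single-path"
    return "multi-path"

# Units are arbitrary here; only the comparison against the thresholds matters.
small = choose_back_source_mode(resource_size=200.0, path_hops=2, speed=50.0)
large = choose_back_source_mode(resource_size=5000.0, path_hops=5, speed=200.0)
```

The first call estimates a 4-second transfer over 2 hops and stays single-path; the second exceeds both limits and switches to multi-path.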
By applying the method provided by the embodiment of the invention, before the target source returning server is triggered to return the data resource, the source returning mode which is most suitable for the source returning can be selected by determining the source returning mode, so that the source returning time when the data resource is returned is shortened.
The specific implementation procedures and derivatives thereof of the above embodiments are within the scope of the present invention.
Corresponding to the method described in fig. 1, an embodiment of the present invention further provides a data resource source returning device, which is used for specifically implementing the method in fig. 1, where the data resource source returning device provided in the embodiment of the present invention may be applied to a computer terminal or various mobile devices, and a schematic structural diagram of the data resource source returning device is shown in fig. 5, and specifically includes:
a determining unit 501, configured to determine, when it is detected that a target server receives a resource request, whether a data resource to be acquired corresponding to the resource request is stored in the target server;
an obtaining unit 502, configured to forward the resource request to each source return server having an association relationship with the target server if the data resource to be obtained is not stored in the target server, and obtain source return information returned by each source return server that successfully responds to the resource request;
a triggering unit 503, configured to input the obtained source returning information fed back by each source returning server into a machine learning model trained in advance, and trigger the machine learning model to calculate, according to each source returning information, a source returning speed at which each source returning server returns the data resource to be obtained;
and a source returning unit 504, configured to determine the source returning server with the highest source returning speed as a target source returning server, determine a shortest source returning path between the target source returning server and the target server according to source returning information returned by the target source returning server, and trigger the target source returning server to return the data resource to be acquired to the target server through the shortest source returning path.
In the apparatus provided in the embodiment of the present invention, when it is detected that a target server receives a resource request, the determining unit judges whether the target server stores the data resource to be acquired corresponding to the resource request. If it is not stored, the resource request forwarding process is executed: the resource request is forwarded to each source returning server having an association relationship with the target server, and the obtaining unit acquires the source returning information returned by the source returning servers that successfully respond to the resource request. After each piece of source returning information is sent to the machine learning model, the triggering unit triggers the machine learning model to calculate the source returning speed of each source returning server, and the source returning server with the highest source returning speed is determined as the target source returning server. After the shortest source returning path is determined, the source returning unit triggers the target source returning server to return the data resource through the shortest source returning path. By applying the apparatus provided by the embodiment of the invention, the path over which a data resource is returned can be optimized; the data resource does not need to be returned according to the original request path, which avoids occupying excessive ingress and egress bandwidth and reduces traffic consumption.
In the apparatus provided in the embodiment of the present invention, the obtaining unit includes:
the first determining subunit is configured to determine each source returning path of the target server, where a plurality of source returning servers are sequentially arranged on each source returning path;
a determining subunit, configured to determine, in each source returning path, whether the data resource to be acquired is stored in a current source returning server that has received the resource request, and if the data resource to be acquired is not stored in the current source returning server, forward the resource request to a next source returning server of the current source returning server until a source returning server that successfully responds to the resource request is determined in the source returning path;
a timing subunit, configured to start a preset timer to perform timing when a first source returning server that successfully responds to the resource request is determined in each source returning path, and stop a forwarding process of the resource request in each source returning path when the timing reaches a preset time threshold;
and the first acquisition subunit is used for acquiring the source returning information returned by each source returning server which has successfully responded to the resource request currently.
In the apparatus provided in an embodiment of the present invention, the first determining subunit includes:
a second obtaining subunit, configured to obtain each piece of historical source returning information of each upstream server adjacent to the target server, and determine average historical source returning information of each upstream server according to each piece of historical source returning information of each upstream server;
the triggering subunit is configured to input the average historical source returning information of each upstream server into the machine learning model, and trigger the machine learning model to output the predicted source returning speed of each upstream server;
and the second determining subunit is configured to determine each upstream server with a predicted source returning speed greater than the preset speed threshold as a first source returning server of the target server, and determine each source returning path of the target server based on each first source returning server.
The device provided by the embodiment of the invention further comprises:
the training unit is used for acquiring each current-day source returning information prestored in the target server and the source returning speed corresponding to each current-day source returning information when the server time of the target server reaches the end-of-day processing time; sequentially inputting the current-day source returning information into the machine learning model so as to enable the machine learning model to carry out model training until model parameters of the machine learning model meet preset training conditions; when each piece of the current-day source returning information is input into the machine learning model, obtaining a training result of the current-day source returning information input into the machine learning model; calling a preset loss function, and calculating the training result and the source returning speed corresponding to the current-day source returning information input into the machine learning model to obtain a loss function value; judging whether the model parameters of the machine learning model meet the training conditions or not according to the loss function values; if not, adjusting the model parameters of the machine learning model according to the loss function values; and if so, obtaining the machine learning model which has been trained.
In the apparatus provided in the embodiment of the present invention, the source returning unit includes:
a third obtaining subunit, configured to obtain an IP address of the target source returning server included in source returning information returned by the target source returning server, and determine a target source returning path corresponding to the target source returning server;
a sending subunit, configured to send a connection request corresponding to the IP address to each node in the target source return path, so that each node is connected to the target source return server after successfully responding to the connection request; each node is the target server and each source returning server arranged in the target source returning path respectively;
and the third determining subunit is configured to determine a connection path between each node that successfully responds to the connection request and the target source returning server, and determine the shortest source returning path between the target source returning server and the target server according to each connection path.
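The connection-probing logic of these subunits can be sketched as follows; the node representation and the reachability test are hypothetical stand-ins for real connection requests and responses:

```python
# Sketch: probe every node on each candidate back-to-source path with a
# connection request, keep only paths whose nodes all respond successfully,
# and choose the shortest surviving path (fewest hops).

def shortest_back_to_source_path(paths, responds):
    """paths: lists of nodes from the target server to the target
    back-to-source server; responds(node) -> bool models a node
    successfully responding to the connection request."""
    reachable = [p for p in paths if all(responds(n) for n in p)]
    if not reachable:
        return None
    return min(reachable, key=len)  # shortest = fewest hops (assumption)
```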
In the apparatus provided in the embodiment of the present invention, the source returning unit includes:
a fourth obtaining subunit, configured to acquire the resource size of the data resource to be acquired contained in the source returning information returned by the target source returning server;
and the fourth determining subunit is configured to determine, according to the shortest source returning path, the resource size, and the source returning speed of the target source returning server, a source returning mode in which the target source returning server returns the data resource to be acquired, and trigger the target source returning server to return the data resource based on the source returning mode and the shortest source returning path.
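The mode selection this subunit performs might look like the following sketch; the patent leaves the concrete rule open, so the threshold formula below is purely illustrative:

```python
# Sketch: pick a return mode from path length, resource size and predicted
# speed. Small resources over short, fast paths return in one piece; large
# resources over long or slow paths are chunked. Illustrative rule only.

def choose_return_mode(resource_size, speed, hop_count,
                       single_shot_limit=64 * 1024 * 1024):
    # Longer paths and slower links shrink the single-shot budget (assumption).
    effective_limit = single_shot_limit * speed / max(hop_count, 1)
    return "single" if resource_size <= effective_limit else "chunked"
```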
For the specific working processes of the units and subunits in the data resource source returning device disclosed in the above embodiment of the present invention, reference may be made to the corresponding content of the data resource source returning method disclosed in the above embodiment of the present invention, and details are not described herein again.
An embodiment of the present invention further provides a storage medium comprising stored instructions, wherein, when the instructions are executed, the device on which the storage medium is located is controlled to perform the above data resource back-source method.
An electronic device is provided in an embodiment of the present invention, and its structural diagram is shown in fig. 6. The electronic device specifically includes a memory 601 and one or more instructions 602, where the one or more instructions 602 are stored in the memory 601 and configured to be executed by one or more processors 603 to perform the following operations:
when detecting that a target server receives a resource request, judging whether the target server stores a data resource to be acquired corresponding to the resource request;
if the data resource to be acquired is not stored in the target server, forwarding the resource request to each source returning server which has an association relation with the target server, and acquiring source returning information returned by each source returning server which successfully responds to the resource request;
inputting the acquired source returning information fed back by each source returning server into a machine learning model trained in advance, triggering the machine learning model to calculate the source returning speed of each source returning server when returning the data resource to be acquired according to each source returning information;
determining a source returning server with the maximum source returning speed as a target source returning server, determining a shortest source returning path between the target source returning server and the target server according to source returning information returned by the target source returning server, and triggering the target source returning server to return the data resource to be acquired to the target server through the shortest source returning path.
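Taken together, the four operations listed above can be sketched end to end; all server interfaces here (`has`, `query`, `fetch`) and the prediction callback are hypothetical stand-ins:

```python
# End-to-end sketch: cache check, fan-out of the resource request, speed
# prediction from the returned back-to-source information, and selection of
# the fastest back-to-source server.

def handle_request(cache, resource_id, servers, predict_speed):
    if resource_id in cache:                    # step 1: local cache hit
        return cache[resource_id], None
    infos = {s: s.query(resource_id)            # step 2: forward request to
             for s in servers if s.has(resource_id)}  # responding servers
    if not infos:
        return None, None
    speeds = {s: predict_speed(info)            # step 3: model inference
              for s, info in infos.items()}
    target = max(speeds, key=speeds.get)        # step 4: fastest server
    data = target.fetch(resource_id)            # return over chosen path
    cache[resource_id] = data
    return data, target
```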
The embodiments in the present specification are described in a progressive manner; for identical or similar parts among the embodiments, reference may be made to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system and apparatus embodiments are substantially similar to the method embodiments and are therefore described relatively simply; for related points, reference may be made to the corresponding descriptions of the method embodiments. The system and apparatus embodiments described above are only illustrative: the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
Those skilled in the art will further appreciate that the various illustrative units and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both.
To clearly illustrate this interchangeability of hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A method for returning data resources to a source is characterized by comprising the following steps:
when detecting that a target server receives a resource request, judging whether the target server stores a data resource to be acquired corresponding to the resource request;
if the data resource to be acquired is not stored in the target server, forwarding the resource request to each source returning server which has an association relation with the target server, and acquiring source returning information returned by each source returning server which successfully responds to the resource request;
inputting the acquired source returning information fed back by each source returning server into a machine learning model trained in advance, triggering the machine learning model to calculate the source returning speed of each source returning server when returning the data resource to be acquired according to each source returning information;
determining a source returning server with the maximum source returning speed as a target source returning server, determining a shortest source returning path between the target source returning server and the target server according to source returning information returned by the target source returning server, and triggering the target source returning server to return the data resource to be acquired to the target server through the shortest source returning path;
wherein, the determining the shortest return-to-source path between the target return-to-source server and the target server according to the return-to-source information returned by the target return-to-source server includes:
acquiring an IP address of the target source returning server contained in source returning information returned by the target source returning server, and determining a target source returning path corresponding to the target source returning server;
sending a connection request corresponding to the IP address to each node in the target source returning path, so that each node is connected with the target source returning server after successfully responding to the connection request; each node is the target server and each source returning server arranged in the target source returning path respectively;
and determining a connection path between each node successfully responding to the connection request and the target source returning server, and determining the shortest source returning path between the target source returning server and the target server according to each connection path.
2. The method according to claim 1, wherein forwarding the resource request to each back-to-source server having an association relationship with the target server and obtaining back-to-source information returned by each back-to-source server that successfully responds to the resource request comprises:
determining each source returning path of the target server, wherein a plurality of source returning servers are sequentially arranged on each source returning path;
in each source returning path, judging whether the data resource to be acquired is stored in the current source returning server which has received the resource request, if the data resource to be acquired is not stored in the current source returning server, forwarding the resource request to the next source returning server of the current source returning server until the source returning server which successfully responds to the resource request is determined in the source returning path;
when a first source returning server that successfully responds to the resource request is determined among the source returning paths, starting a preset timer, and stopping the forwarding process of the resource request in each source returning path when the timed duration reaches a preset time threshold;
and acquiring the return source information returned by each return source server which has successfully responded to the resource request at present.
3. The method of claim 2, wherein determining the respective back-to-source paths of the target servers comprises:
acquiring each historical source returning information of each upstream server adjacent to the target server, and determining the average historical source returning information of each upstream server according to each historical source returning information of each upstream server;
inputting the average historical source returning information of each upstream server into the machine learning model, and triggering the machine learning model to output the predicted source returning speed of each upstream server;
and determining an upstream server whose predicted source returning speed is greater than a preset speed threshold as a first source returning server of the target server, and determining each source returning path of the target server based on each first source returning server.
4. The method of claim 1, wherein the training process of the machine learning model comprises:
when the server time of the target server reaches the end-of-day processing time, acquiring each current-day source returning information prestored in the target server and a source returning speed corresponding to each current-day source returning information;
sequentially inputting the current-day source returning information into the machine learning model so as to enable the machine learning model to carry out model training until model parameters of the machine learning model meet preset training conditions;
when each piece of the current source returning information is input into the machine learning model, obtaining a training result of the current source returning information input into the machine learning model; calling a preset loss function, and calculating the training result and the return source speed corresponding to the current day return source information input into the machine learning model to obtain a loss function value; judging whether the model parameters of the machine learning model meet the training conditions or not according to the loss function values; if not, adjusting the model parameters of the machine learning model according to the loss function values; and if so, obtaining the machine learning model which is trained.
5. The method of claim 1, wherein the triggering the target source returning server to return the data resource to be acquired to the target server through the shortest source returning path comprises:
acquiring the resource size of the data resource to be acquired, which is contained in the return source information returned by the target return source server;
and determining a source returning mode of the target source returning server for returning the data resource to be acquired according to the shortest source returning path, the resource size and the source returning speed of the target source returning server, and triggering the target source returning server to return the data resource based on the source returning mode and the shortest source returning path.
6. A data resource back-source device, comprising:
the system comprises a judging unit, a processing unit and a processing unit, wherein the judging unit is used for judging whether a target server stores data resources to be acquired corresponding to a resource request when detecting that the target server receives the resource request;
an obtaining unit, configured to forward the resource request to each source returning server having an association relationship with the target server if the data resource to be obtained is not stored in the target server, and obtain source returning information returned by each source returning server that successfully responds to the resource request;
the triggering unit is used for inputting the acquired source returning information fed back by each source returning server into a machine learning model which is trained in advance, triggering the machine learning model to calculate the source returning speed of each source returning server when returning the data resource to be acquired according to each source returning information;
the source returning unit is used for determining a source returning server with the maximum source returning speed as a target source returning server, determining a shortest source returning path between the target source returning server and the target server according to source returning information returned by the target source returning server, and triggering the target source returning server to return the data resource to be acquired to the target server through the shortest source returning path;
wherein the source returning unit is specifically configured to:
acquiring an IP address of the target source returning server contained in source returning information returned by the target source returning server, and determining a target source returning path corresponding to the target source returning server;
sending a connection request corresponding to the IP address to each node in the target source returning path, so that each node is connected with the target source returning server after successfully responding to the connection request; each node is the target server and each source returning server arranged in the target source returning path respectively;
and determining a connection path between each node successfully responding to the connection request and the target source returning server, and determining the shortest source returning path between the target source returning server and the target server according to each connection path.
7. The apparatus of claim 6, wherein the obtaining unit comprises:
the first determining subunit is configured to determine each source returning path of the target server, where a plurality of source returning servers are sequentially arranged on each source returning path;
a determining subunit, configured to determine, in each source returning path, whether the data resource to be acquired is stored in a current source returning server that has received the resource request, and if the data resource to be acquired is not stored in the current source returning server, forward the resource request to a next source returning server of the current source returning server until a source returning server that successfully responds to the resource request is determined in the source returning path;
a timing subunit, configured to start a preset timer to perform timing when a first source returning server that successfully responds to the resource request is determined in each source returning path, and stop a forwarding process of the resource request in each source returning path when the timing reaches a preset time threshold;
and the first acquisition subunit is used for acquiring the source returning information returned by each source returning server which has successfully responded the resource request currently.
8. The apparatus of claim 7, wherein the first determining subunit comprises:
a second obtaining subunit, configured to obtain each piece of historical source returning information of each upstream server adjacent to the target server, and determine average historical source returning information of each upstream server according to each piece of historical source returning information of each upstream server;
the triggering subunit is configured to input the average historical source returning information of each upstream server into the machine learning model, and trigger the machine learning model to output the predicted source returning speed of each upstream server;
and the second determining subunit is configured to determine an upstream server whose predicted source returning speed is greater than the preset speed threshold as a first source returning server of the target server, and determine each source returning path of the target server based on each first source returning server.
9. The apparatus of claim 6, further comprising:
the training unit is configured to, when the server time of the target server reaches the end-of-day processing time, acquire each piece of current-day source returning information prestored in the target server and the source returning speed corresponding to each piece of current-day source returning information; sequentially input the current-day source returning information into the machine learning model so that the machine learning model performs model training until the model parameters of the machine learning model meet a preset training condition; each time a piece of the current-day source returning information is input into the machine learning model, obtain a training result for that piece of information;
call a preset loss function and calculate a loss function value from the training result and the source returning speed corresponding to the piece of current-day source returning information input into the machine learning model; judge, according to the loss function value, whether the model parameters of the machine learning model meet the training condition; if not, adjust the model parameters of the machine learning model according to the loss function value; and if so, obtain the trained machine learning model.
CN201911075777.3A 2019-11-06 2019-11-06 Data resource back-source method and device Active CN110825525B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911075777.3A CN110825525B (en) 2019-11-06 2019-11-06 Data resource back-source method and device


Publications (2)

Publication Number Publication Date
CN110825525A CN110825525A (en) 2020-02-21
CN110825525B true CN110825525B (en) 2022-06-07

Family

ID=69552979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911075777.3A Active CN110825525B (en) 2019-11-06 2019-11-06 Data resource back-source method and device

Country Status (1)

Country Link
CN (1) CN110825525B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113452539B (en) * 2020-03-26 2022-07-19 北京金山云网络技术有限公司 Source station switching method and device, electronic equipment and storage medium
CN112333290B (en) * 2021-01-05 2021-04-06 腾讯科技(深圳)有限公司 Data access control method, device, storage medium and content distribution network system
CN116055565B (en) * 2023-01-28 2023-06-06 北京蓝色星际科技股份有限公司 Data transmission method, system, device, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105871975A (en) * 2015-11-17 2016-08-17 乐视云计算有限公司 Method and device for selecting source server
CN105897836A (en) * 2015-12-07 2016-08-24 乐视云计算有限公司 Back source request processing method and device
CN109412946A (en) * 2018-11-14 2019-03-01 网宿科技股份有限公司 Method, apparatus, server and the readable storage medium storing program for executing of source path are returned in a kind of determination
CN109787868A (en) * 2019-03-18 2019-05-21 网宿科技股份有限公司 A kind of method, system and server for choosing routed path
CN110099081A (en) * 2018-01-30 2019-08-06 阿里巴巴集团控股有限公司 CDN system and its time source method, apparatus
CN110311987A (en) * 2019-07-24 2019-10-08 中南民族大学 Node scheduling method, apparatus, equipment and the storage medium of microserver


Also Published As

Publication number Publication date
CN110825525A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110825525B (en) Data resource back-source method and device
CN109995653B (en) Cross-node data transmission method, device and system and readable storage medium
CN109561141B (en) CDN node selection method and equipment
CN108156013B (en) Page service disaster tolerance method and device and electronic equipment
WO2018152919A1 (en) Path selection method and system, network acceleration node, and network acceleration system
CN103716251B (en) For the load-balancing method and equipment of content distributing network
US20170171344A1 (en) Scheduling method and server for content delivery network service node
WO2015074500A1 (en) Cdn-based advertisement material download method, apparatus, and device
WO2017101366A1 (en) Cdn service node scheduling method and server
EP2093963A1 (en) A method, system and path computation element for obtaining path information
CN107172186B (en) Content acquisition method and system
EP3745678B1 (en) Storage system, and method and apparatus for allocating storage resources
US20140143427A1 (en) Providing Resources in a Cloud
CN107517229A (en) Generation, transmission method and the relevant apparatus of a kind of time source-routed information
US20170187820A1 (en) Caching service with client-enabled routing
US10862805B1 (en) Intelligent offloading of services for a network device
CN105554125B (en) A kind of method and its system for realizing webpage fit using CDN
US20160286434A1 (en) Method and Device for Controlling Processing Load of a Network Node
CN110086724B (en) Bandwidth adjusting method and device, electronic equipment and computer readable storage medium
CN111064802B (en) Network request processing method and device, electronic equipment and storage medium
JP7345645B2 (en) A system that provides accurate communication delay guarantees for request responses to distributed services.
CN111107118B (en) Picture access acceleration method, device, equipment, system and storage medium
CN110912926B (en) Data resource back-source method and device
CN106817267B (en) Fault detection method and equipment
US9973412B2 (en) Method and system for generating routing tables from link specific events

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant