WO2021253889A1 - Load balancing method and apparatus, proxy device, cache device and serving node - Google Patents


Info

Publication number
WO2021253889A1
Authority
WO
WIPO (PCT)
Prior art keywords
block
content
file
request
location information
Prior art date
Application number
PCT/CN2021/080746
Other languages
French (fr)
Chinese (zh)
Inventor
王永强
年彦东
Original Assignee
北京金迅瑞博网络技术有限公司
北京金山云网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京金迅瑞博网络技术有限公司 and 北京金山云网络技术有限公司
Publication of WO2021253889A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • H04L 67/5682: Policies or rules for updating, deleting or replacing the stored data
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/60: Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
    • H04L 67/63: Routing a service request depending on the request content or context

Definitions

  • The present disclosure belongs to the field of computer technology, and in particular relates to a load balancing method and apparatus, a proxy device, a cache device, and a service node.
  • CDN: Content Delivery Network.
  • A CDN network is composed of multiple CDN nodes, and a CDN node is a cluster composed of multiple physical devices (servers).
  • The CDN network uses functions such as load balancing, content distribution, and scheduling to let users obtain required content from nearby servers in the network, reducing network congestion and improving user access response speed and hit rate.
  • Each physical device in a CDN node contains a reverse proxy (nginx) and a cache storage.
  • The reverse proxy nginx provides load balancing and request forwarding: a request for file content, identified by a URL (uniform resource locator), is forwarded to the cache of the physical device that stores the requested content.
  • No matter which device's nginx processes a request for a given URL, a consistent hashing algorithm forwards it to the cache storage of the same device. This ensures that the content of a URL is stored in the cache of exactly one device, reducing duplicate storage. That is, in the current technology, the reverse proxy nginx can only distribute the content of different URLs (i.e., different files) to different caches, while the content of the same URL (file) is always stored in the same cache.
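The single-device routing described above can be sketched with a minimal consistent-hash ring. This is an illustrative reconstruction, not the patent's implementation; the device names, the virtual-node count, and the use of MD5 as the hash are assumptions.

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Minimal consistent-hash ring: every URL maps to exactly one cache device."""

    def __init__(self, devices, vnodes=64):
        # Place several virtual nodes per device on the ring to smooth the split.
        self._ring = sorted(
            (self._hash(f"{dev}#{i}"), dev)
            for dev in devices
            for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(s):
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def locate(self, url):
        # First ring position clockwise from the URL's hash; wrap around at the end.
        idx = bisect_right(self._keys, self._hash(url)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["cache-a", "cache-b", "cache-c"])
# The same URL always routes to the same cache, regardless of which nginx handled it.
assert ring.locate("/video/movie.mp4") == ring.locate("/video/movie.mp4")
```

Because the whole URL is hashed, a large, hot file lands on one cache in its entirety, which is exactly the imbalance the disclosure sets out to fix.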
  • The traffic between nginx and the caches travels over the intranet, but the content length corresponding to different URLs usually differs, and so does the access frequency of different URLs.
  • A cache to which URLs with large content length and high access frequency are hashed will consume high intranet bandwidth; conversely, a cache holding small or rarely accessed URLs consumes only low intranet bandwidth.
  • The intranet bandwidth consumed by a physical device matches the external network bandwidth it serves.
  • The uneven distribution of intranet bandwidth within a CDN node therefore limits the utilization of external network bandwidth and creates a bottleneck in its use: the bandwidth consumed on the external network cannot reach maximum saturation (different physical devices are configured with the same maximum available external network bandwidth), and the external network service capability of networks such as CDN is correspondingly reduced.
  • The embodiments of the present disclosure provide a load balancing method and apparatus, a proxy device, a cache device, and a service node.
  • The purpose is to balance the intranet bandwidth consumed by different cache parties (different caches) in service nodes such as CDN nodes, break the bottleneck in external network bandwidth usage, and improve the external network service capability of CDN and similar networks.
  • A load balancing method applied to an agent includes: obtaining a content request carrying the data resource address of a target file; splitting the content request into at least one block request, each including corresponding block location information, wherein the block location information is determined by the agent according to the data resource address; distributing the at least one block request to at least one cache party respectively corresponding to the block location information in the at least one block request; obtaining at least one block content fed back by the at least one cache party that received the corresponding block request, wherein the absolute difference of the data lengths of different block contents is less than a predetermined threshold, and when the same file is divided into multiple block contents, the first number of cache parties storing the multiple block contents is more than one; and feeding the at least one block content back to the requester.
  • A load balancing method applied to a cache party includes: obtaining a block request, sent by an agent, that includes target block location information, where the target block location information is one of at least one piece of block location information determined by the agent according to a data resource address; the data resource address is carried in the content request, received by the agent, for requesting a target file, and the block request is used to request the block content of the target file corresponding to the target block location information; obtaining the block content according to the target block location information; and sending the block content to the agent.
  • A load balancing apparatus applied to an agent includes: a first obtaining unit, configured to obtain a content request carrying the data resource address of a target file; a splitting unit, configured to split the content request into at least one block request including corresponding block location information, wherein the block location information is determined by the agent according to the data resource address; a distribution unit, configured to distribute the at least one block request to at least one cache party respectively corresponding to the block location information in the at least one block request; a second obtaining unit, configured to obtain at least one block content fed back by the at least one cache party that received the corresponding block request, wherein the absolute difference of the data lengths of different block contents is less than a predetermined threshold, and when the same file has multiple block contents, the first number of cache parties storing the multiple block contents is more than one; and a feedback unit, configured to feed the at least one block content back to the requester.
  • A load balancing apparatus applied to a cache party includes: a third obtaining unit, configured to obtain a block request, sent by the agent, that includes target block location information, the target block location information being one of at least one piece of block location information determined by the agent according to the data resource address; the data resource address is carried in the content request, received by the agent, for requesting a target file, and the block request is used to request the block content of the target file corresponding to the target block location information; a fourth obtaining unit, configured to obtain the block content according to the target block location information; and a sending unit, configured to send the block content to the agent.
  • An agent device includes: a first memory, configured to store a first computer instruction set; and a first processor, configured to execute the instruction set stored in the first memory to implement the load balancing method applied to the agent as described in any one of the preceding items.
  • A cache device includes: a magnetic disk, configured to store the data content of a file; a second memory, configured to store a second computer instruction set; and a second processor, configured to execute the instruction set stored in the second memory to implement the load balancing method applied to the cache party as described in any one of the preceding items.
  • A service node includes multiple physical devices, each including a proxy and a cache; the proxy in the service node interacts with at least one cache party in the service node by executing the load balancing method applied to the proxy as described in any one of the preceding items; the cache party in the service node interacts with the agent in the service node by executing the load balancing method applied to the cache party as described in any one of the preceding items.
  • The load balancing method and apparatus, proxy device, cache device, and service node store files in blocks, dispersing the content of a file relatively evenly across different cache parties in the form of blocks.
  • When a requester requests a file, the content request for the file is mapped to at least one block request, and at least one block content that can form the complete target file is obtained from at least one cache party based on the at least one block request, so as to respond to the requester. Since the sizes of the block contents corresponding to the block requests are relatively uniform, the traffic of the requested file is correspondingly distributed to different cache parties, so the intranet traffic of different cache parties is relatively balanced and does not bottleneck the use of external network bandwidth, improving the external network service capability of networks such as CDN.
  • FIG. 1 is a schematic flowchart of a load balancing method applied to a proxy provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of another flow chart of a load balancing method applied to an agent provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of a load balancing method applied to a caching party according to an embodiment of the present disclosure
  • FIG. 4 is a flowchart of an implementation process of obtaining block content according to target block location information provided by an embodiment of the present disclosure
  • FIG. 5 is a processing logic diagram of an application example of obtaining a target file from a CDN node provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of another process of a load balancing method applied to a caching party provided by an embodiment of the present disclosure
  • FIG. 7 is a schematic structural diagram of a load balancing device applied to an agent provided by an embodiment of the present disclosure
  • FIG. 8 is a schematic structural diagram of a load balancing device applied to a caching party according to an embodiment of the present disclosure
  • FIG. 9 is a schematic diagram of another structure of a load balancing device applied to a caching party according to an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of a structure of an agent device provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of a structure of a cache device provided by an embodiment of the present disclosure.
  • the embodiments of the present disclosure disclose a load balancing method, device, proxy device, cache device, storage medium, and service node.
  • the content of the files is relatively evenly distributed to different caches in the form of blocks.
  • intranet application scenarios such as the interaction scenario between the reverse proxy nginx and cache storage in the CDN node
  • In intranet application scenarios, such as the interaction between the reverse proxy nginx and the cache storages in a CDN node, the intranet bandwidth consumed by different cache parties is distributed relatively evenly. Unlike the related technology, an overly large or small data volume of a particular file, or differing access frequencies, will not skew the intranet bandwidth distribution across cache parties, so intranet bandwidth usage will not bottleneck the use of external network bandwidth.
  • FIG. 1 shows a schematic flow chart of the load balancing method when applied to a proxy, which may include the following steps.
  • Step 101: Obtain a content request carrying the data resource address of the target file.
  • A CDN node is a cluster composed of multiple physical devices (servers).
  • Each physical device of a CDN node includes a reverse proxy nginx and a cache storage. As the execution subject of the method shown in FIG. 1, the proxy can be, but is not limited to, the reverse proxy nginx on any physical device (such as a server) in the CDN node, which is responsible for load balancing and request forwarding for the cache storages within the node.
  • When a client running on a user electronic device, such as a mobile phone or tablet computer, needs to obtain the content of a target file from the network, for example the page content of a webpage, it sends a content request for the target file to the CDN node assigned to it (the CDN network uses load balancing, content distribution, scheduling, and other functions to let users obtain required content from the nearest server in the network).
  • The content request for the target file at least carries the data resource address of the target file.
  • The data resource address is usually the uniform resource locator (URL) of the file.
  • Accordingly, the reverse proxy nginx of a physical device in the CDN node receives a content request carrying the data resource address of the target file, such as a URL.
  • Step 102: Split the content request into at least one block request including corresponding block location information, wherein the block location information is determined by the agent according to the data resource address.
  • Files are stored across different caches in blocks (in the specific case where the file size does not exceed the set data length, distributed storage is not performed and the file is stored directly in one cache).
  • When the same file is divided into multiple block contents, the first number of cache parties storing the multiple block contents is more than one, and the absolute difference of the data lengths of different block contents is less than the set threshold, which keeps the bandwidth usage of different caches balanced.
  • A fixed data length can be set (the length can of course also change according to policy). This data length is the smallest unit for file blocking and distributed storage: the file content is dispersed and stored in units of this data length, and the stored content of each block does not exceed the set data length.
  • On receiving a content request, the agent splits it into at least one block request so that the different block contents of the target file can be obtained.
  • The function of the block location information carried in a block request is to let the corresponding cache party, such as the cache storage of a physical device in a CDN node, index and read the block content corresponding to the block request from its content collection.
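The splitting of Step 102 can be sketched as follows. The 128k block size, the dict shape of a block request, and the 1-based numbering (0 reserved for the file header) are illustrative assumptions consistent with the examples elsewhere in the disclosure.

```python
BLOCK_SIZE = 128 * 1024  # hypothetical fixed block length (128k, as in the examples)

def split_content_request(url, body_length):
    """Map one content request to a list of block requests.

    Each block request carries block location information: the URL, a block
    number (1-based; 0 is reserved for the file header), and the byte range
    the block covers in the file body.
    """
    requests = []
    number = 1
    for start in range(0, body_length, BLOCK_SIZE):
        end = min(start + BLOCK_SIZE, body_length)
        requests.append({"url": url, "number": number, "range": (start, end)})
        number += 1
    return requests

reqs = split_content_request("/img/banner.png", 300 * 1024)
# A 300k file body with 128k blocks yields 3 block requests:
# 0-128k, 128k-256k, 256k-300k.
```

Because every block except possibly the last has exactly the same length, the block contents requested from different cache parties are close in size, which is what balances the intranet traffic.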
  • Step 103: Distribute the at least one block request to at least one cache party respectively corresponding to the block location information in the at least one block request.
  • Step 104: Obtain at least one block content fed back by the at least one cache party that received a corresponding block request.
  • After receiving a block request, the cache party indexes and reads the corresponding block content from its stored content set according to the block location information carried in the block request, and feeds the read block content back to the sender of the block request, i.e., the proxy such as the reverse proxy nginx.
  • The absolute difference of the data lengths of the different block contents is less than a predetermined threshold, so as to balance the load of each cache party as much as possible.
  • When the same file has multiple block contents, the first number of cache parties storing the multiple block contents is more than one.
  • Step 105: Feed the at least one block content back to the requester.
  • In an embodiment, the agent may assemble the at least one block content to obtain the target file required by the requester, and feed the target file back to the requester.
  • Specifically, the agent can assemble the block contents in order to obtain the complete target file requested by the requester, and then feed the complete, sequentially assembled target file back to the requester; for example, the reverse proxy nginx in the CDN node that received the content request for the target file feeds the complete target file back to the requesting client.
  • Alternatively, the agent may directly send the at least one block content to the requester, so that the requester obtains the requested target file by assembling the at least one block content itself.
  • In this case, the agent may also attach the corresponding block number to each block content.
  • In summary, the file is stored in blocks, and the content of the file is distributed relatively evenly to different cache parties in the form of blocks.
  • The content request for a file is mapped to at least one block request, and based on the at least one block request, at least one block content that can form the complete target file is obtained from at least one cache party, so as to respond to the requester. Since the sizes of the block contents corresponding to the block requests are relatively uniform, the traffic of the requested file is correspondingly distributed to different cache parties, so the intranet traffic of different cache parties is relatively balanced and does not bottleneck the use of external network bandwidth, improving the external network service capability of networks such as CDN.
  • The load balancing method applied to the agent of the present disclosure can be implemented through the following processing procedure.
  • Step 201: Obtain a content request carrying the data resource address of the target file.
  • Step 201 is the same as step 101 in the foregoing embodiment.
  • Please refer to the related description of step 101 in the foregoing embodiment; to avoid repetition, a detailed description of step 201 is omitted here.
  • Step 202: Determine the file body data length of the target file.
  • In back-end storage, a file includes two parts: the file header and the file body.
  • The file header records the descriptive information of the file, such as the data length of the data content in the file body; the file body contains the actual data content of the file.
  • After the agent receives the content request carrying the data resource address of the target file, such as a URL, in order to split the content request into block requests, the file header of the target file needs to be read from the cache so that the actual data length of the data content in the file body can be extracted from it.
  • To simplify the explanation, the actual data length of the data content in the file body of the target file is simply recorded as the "file body data length".
  • The process of determining the file body data length of the target file may include the following.
  • A file header request including the file header location information may be generated, that is, the file header location information is carried in the file header request.
  • To obtain the file header of the target file, the proxy, such as the reverse proxy nginx that obtained the content request, on the one hand generates the file header intranet address according to the URL carried in the content request, which is used to locate and route to the cache party that stores the file header of the target file; on the other hand, it generates the file header location information according to the URL, which serves as an index for hitting the file header of the target file on the located cache party.
  • The process of generating the file header intranet address of the target file includes: splicing the URL carried in the content request of the target file with the number of the file header to obtain splicing information; and performing a consistent hash operation on the splicing information to obtain the file header intranet address of the target file.
  • In this embodiment, the number of the file header is set to 0, and the numbers of the blocks in the file body increase sequentially from 1, so that the number of the nth block of the file body is n, where n is a natural number greater than 1. It is easy to understand that this numbering rule is not unique; many other numbering rules can achieve a similar purpose.
  • For example, the number of the file header may be set to 1, with the block numbers in the file body increasing sequentially from 2;
  • or the number of the file header may be set to a, with the block numbers in the file body increasing sequentially from b, and so on.
  • The file header/block number can also be a character string composed of multiple digits and/or letters; these variants are not listed one by one.
  • When the URL and the file header/block number are simply spliced together, the splicing information corresponding to the file header and the different blocks differs only slightly, so a consistent hash operation on it easily maps the file header and the different blocks to a small number of cache parties.
  • As a result, the content of the same file cannot be evenly distributed to more cache parties, for example evenly distributed to more cache storages in the CDN node.
  • The hash results of the splicing information of the URL and the file header/block numbers tend to cluster, resulting in a large number of blocks being concentrated on a few cache parties.
  • Therefore, the process of generating the file header intranet address of the target file may also be: splicing the URL carried in the content request of the target file with the number of the file header to obtain the splicing information; performing a digest calculation on the splicing information with a predetermined digest algorithm; and performing a consistent hash operation on the calculated information digest to obtain the file header intranet address of the target file.
  • The digest algorithm used can be, but is not limited to, MD5 (Message-Digest Algorithm 5), MD4 (Message-Digest Algorithm 4), etc.
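The digest-then-hash variant can be sketched as follows. The ring here is a toy stand-in (one point per device) rather than a production consistent-hash ring, and the device names are hypothetical; the point is only the two-step key derivation the text describes.

```python
import hashlib
from bisect import bisect_right

def _h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

# A tiny stand-in ring: each cache party owns one point on the hash circle.
DEVICES = ["cache-a", "cache-b", "cache-c", "cache-d"]  # hypothetical names
RING = sorted((_h(d), d) for d in DEVICES)
KEYS = [h for h, _ in RING]

def block_intranet_address(url, number):
    """Route url+number to a cache party: digest first, then consistent hash.

    Splicing the URL with a header/block number alone yields near-identical
    strings; taking an MD5 digest of the splice first decorrelates the keys,
    so the blocks of one file spread over more cache parties.
    """
    digest = hashlib.md5(f"{url}{number}".encode()).hexdigest()  # digest step
    idx = bisect_right(KEYS, _h(digest)) % len(RING)             # consistent hash step
    return RING[idx][1]

# Different blocks of the same file can now land on different cache parties:
addrs = {block_intranet_address("/video/movie.mp4", n) for n in range(1, 9)}
```

The routing stays deterministic (the same url+number always maps to the same cache party), so reads and writes for a given block agree on its location.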
  • The process of generating the file header location information of the target file may include: assembling the URL of the target file with at least one of the number of the file header and the corresponding position of the file header in the target file, to obtain the file header location information.
  • The file header location information of the target file must include at least the URL of the target file, plus at least one of the number of the file header (such as "0") and the corresponding position of the file header in the target file (such as the range the header occupies in the file: 0k-128k).
  • The file header request carrying the file header location information can then be sent to the cache party indicated by the file header intranet address; in addition, the file header intranet address can also be carried in the file header request as part of the file header location information, and no restriction is placed on this.
  • The file header determined and fed back by the cache party according to the file header location information may then be obtained, and the file header carries the file body data length of the target file.
  • After receiving the file header request for the target file, the cache party indicated by the file header intranet address indexes the corresponding file header from its cached data set according to the file header location information carried in the request, such as the URL and the file header number, and reads it.
  • The agent receives the file header of the target file read and fed back by the cache party, and reads the file body data length of the target file from the received file header.
  • Step 203: Determine a second quantity of block contents included in the file body of the target file according to the predetermined data length and the file body data length of the target file.
  • The predetermined data length can be, for example, 128k, or another data length different from 128k.
  • The second quantity of block contents included in the file body of the target file can be determined as the file body data length divided by the predetermined data length, rounded up to the nearest integer.
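The computation of the second quantity reduces to a ceiling division, sketched here with the 128k example length:

```python
BLOCK_SIZE = 128 * 1024  # predetermined data length (128k, per the example)

def block_count(file_body_length):
    """Second quantity: ceil(file body data length / predetermined data length)."""
    return -(-file_body_length // BLOCK_SIZE)  # ceiling division via negation

assert block_count(128 * 1024) == 1   # exactly one block unit
assert block_count(300 * 1024) == 3   # 300k body -> 3 blocks of up to 128k
```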
  • Step 204: Determine a second quantity of block location information according to the data resource address, and generate a second quantity of block requests, each including different block location information.
  • Since the cache party storing the corresponding block content of the target file must first be located and routed to before the corresponding block content can be obtained from it according to the block location information, in addition to determining the second quantity of block location information, the following processing may be included: determining a second quantity of block intranet addresses according to the data resource address.
  • The function of the block intranet address of a block content is to locate and route to the cache storing the corresponding block content of the target file.
  • The function of the block location information of a block content is to serve as an index for hitting the corresponding block content of the target file on the located cache party.
  • The cache party, such as the cache storage on a physical device in a CDN node, stores the file header and block contents of a file in key-value form: the location information (such as URL + number + the range corresponding to the number) serves as the "key" of the data content (the "value", i.e., the file header or block content). With this storage scheme, once the location information of a file header or block content is obtained, the location information can be used as the "key" to obtain the corresponding "value".
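The key-value scheme can be sketched as follows. The exact key format (URL, number, and range joined with "#") is an illustrative assumption; the text only requires that the location information uniquely keys the content.

```python
class BlockCache:
    """Sketch of a cache party's key-value store for file headers and blocks."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def make_key(url, number, byte_range):
        # Location information as the key, e.g. "/a.bin#2#131072-262144".
        return f"{url}#{number}#{byte_range[0]}-{byte_range[1]}"

    def put(self, url, number, byte_range, data):
        self._store[self.make_key(url, number, byte_range)] = data

    def get(self, url, number, byte_range):
        # The location information alone suffices to hit the stored content.
        return self._store.get(self.make_key(url, number, byte_range))

cache = BlockCache()
cache.put("/a.bin", 1, (0, 4), b"abcd")
assert cache.get("/a.bin", 1, (0, 4)) == b"abcd"
```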
  • Correspondingly, the process of generating the block intranet address may include: splicing the URL of the target file with the block number of the block content (such as 1, 2, 3, ..., n, where n is a natural number greater than 1) to obtain the splicing information; and performing a consistent hash operation on the splicing information to obtain the block intranet address of the block content.
  • The block intranet address of a block content can also be generated through the following process: splicing the URL of the target file with the block number of the block content to obtain the splicing information; performing a digest calculation on the splicing information with a predetermined digest algorithm; and performing a consistent hash operation on the calculated information digest to obtain the block intranet address of the block content.
  • The process of generating the block location information of a block content may include: assembling the URL of the target file with at least one of the block number of the block content and the content position of the block content in the target file, to obtain the block location information of the block content.
  • The block location information of a block content of the target file must include at least the URL of the target file, plus at least one of the block number and the corresponding content position of the block content in the target file (for example, number x corresponds to the content at 128k-256k in the file body).
  • Step 205: Distribute the at least one block request to the cache party indicated by the block intranet address corresponding to each block request.
  • After receiving a block request, the cache party obtains the block content indicated by the block location information in the block request and feeds it back to the agent.
  • That is, the block location information is used as the "key" to obtain the corresponding "value", which is fed back to the agent.
  • Step 206: Obtain at least one block content fed back by the at least one cache party that received a corresponding block request.
  • The absolute difference of the data lengths of different block contents is less than the set threshold.
  • When the same file has multiple block contents, the first number of cache parties storing the multiple block contents is more than one.
  • Step 207: Feed the at least one block content back to the requester.
  • The agent may assemble the at least one block content to obtain the target file required by the requester, and feed the target file back to the requesting party.
  • Specifically, the agent can assemble the block contents in order of their block numbers, such as 1, 2, 3 ... n (where n is a natural number greater than 1), to obtain the complete target file requested by the requesting party.
  • The complete, sequentially assembled target file is then fed back to the requester; for example, the reverse proxy nginx in the CDN node that received the content request for the target file feeds the assembled target file back to the requesting client.
  • Alternatively, the agent may directly send the at least one block content to the requester, so that the requester obtains the requested target file by assembling the at least one block content itself.
  • In that case, the agent also carries, together with each block content, its corresponding block number.
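The numbered assembly described above can be sketched as follows (an illustrative Python sketch; the function name and the representation of blocks as (block number, bytes) pairs are assumptions for illustration, not part of the disclosed embodiment):

```python
def assemble_blocks(blocks):
    """Reassemble a file from (block_number, bytes) pairs returned by caches.

    Block numbers are assumed to start at 1 and be contiguous, matching the
    numbering example (1, 2, 3 ... n) in the text above.
    """
    ordered = sorted(blocks, key=lambda pair: pair[0])
    numbers = [n for n, _ in ordered]
    if numbers != list(range(1, len(ordered) + 1)):
        raise ValueError("missing or duplicate block number")
    return b"".join(data for _, data in ordered)

# Blocks may arrive out of order from different cache parties:
parts = [(2, b"world"), (1, b"hello "), (3, b"!")]
assert assemble_blocks(parts) == b"hello world!"
```

The agent can run this once all block responses have arrived, or the requester can run it itself when the agent forwards numbered blocks directly.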
  • Because the file is stored in blocks, its content is distributed relatively evenly across different cache parties, and the content request for the file is mapped to at least one block request, so the traffic of the requested file can be distributed across the different cache parties in a balanced manner
  • as far as possible. This evens out the use of internal network bandwidth, which improves the external network service capability of networks such as CDNs.
  • In addition, this embodiment computes a digest over the concatenation of the url and the file header/block number information and applies consistent hashing to the resulting digest, so that different block contents of the same file can be stored on more cache parties.
  • The resulting more balanced internal network distribution further reduces the extent to which internal network bandwidth usage limits external network bandwidth utilization.
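A minimal sketch of the digest-plus-consistent-hashing step described above, assuming MD5 digests and a virtual-node hash ring (the class, the sample addresses, and the key format `url|block-N` are hypothetical illustrations, not the disclosed implementation):

```python
import hashlib
from bisect import bisect_right

class ConsistentHashRing:
    """Minimal consistent-hash ring mapping string keys to cache intranet addresses."""

    def __init__(self, cache_addrs, vnodes=100):
        self._ring = []  # sorted list of (hash value, address)
        for addr in cache_addrs:
            for i in range(vnodes):
                h = int(hashlib.md5(f"{addr}#{i}".encode()).hexdigest(), 16)
                self._ring.append((h, addr))
        self._ring.sort()

    def lookup(self, key):
        # Digest the key, then walk clockwise to the first virtual node.
        h = int(hashlib.md5(key.encode()).hexdigest(), 16)
        idx = bisect_right(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]

caches = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
ring = ConsistentHashRing(caches)

url = "http://example.com/video.mp4"
# Hashing url + block number spreads blocks of one file over several caches,
# while hashing the bare url converges all misses on one back-to-source cache.
block_addr = ring.lookup(url + "|block-3")
origin_addr = ring.lookup(url)
assert block_addr in caches and origin_addr in caches
```

Because the hashed key includes the block number, two blocks of the same file generally land on different cache parties, while the bare-url key stays stable for back-to-source convergence.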
  • FIG. 3 shows a schematic flowchart of the load balancing method when applied to the caching side, including the following steps:
  • Step 301 Obtain a block request sent by the agent that includes the location information of the target block.
  • The target block location information is one of at least one piece of block location information determined by the agent according to the data resource address; the data resource address is carried in the content request for the target file received by the agent, and the block request is used to request the block content corresponding to the target block location information in the target file.
  • the block request sent by the agent based on the corresponding block intranet address can be obtained.
  • The corresponding block intranet address is one of a second number of block intranet addresses determined by the agent according to the data resource address. Determining the second number of block intranet addresses includes: the agent determines the block number of each of the second number of block contents, and generates the block intranet address of each block content according to the uniform resource locator and the block number, obtaining the second number of block intranet addresses.
  • The target block location information in the block request may include the url of the complete target file to which the block content belongs, together with at least one of the block number of the block content and the content location that the number corresponds to in the target file (such as a content range).
  • Step 302 Obtain the block content according to the target block location information.
  • Step 401 Determine whether the block content indicated by the target block location information exists locally on the caching party.
  • In practice, the data content of a file is not cached in the CDN network when it is requested for the first time or after it has been cleared by a clearing strategy, so the request for that file's data content misses in the CDN network; accordingly, the CDN network needs to pull (i.e. obtain) the data from the file's source site and cache it (in line with CDN conventions and operation-and-maintenance practice, in the embodiments of the present disclosure, storing data content on the cache side is equivalent to caching the data content there).
  • A cache party, such as the cache storage on a physical device in a CDN node, after obtaining a block request, first uses the target block location information carried in the request as an index to determine whether the block content indicated by that information exists locally on the cache party.
  • Step 402 If it is determined to exist, obtain the block content locally.
  • The cache party can use the target block location information carried in the request as an index, and read the corresponding block content from its stored content set.
  • Step 403 If it is determined not to exist, perform predetermined back-to-source processing, and obtain the block content corresponding to the block request through the back-to-source processing.
  • the back-to-origin process is executed to obtain the requested segmented content.
  • the process of back-to-origin processing includes:
  • The cache party that receives the block request and misses locally extracts the url from the block location information of the request, i.e. the url of the target file to which the block belongs, and performs consistent hashing on the extracted url to compute the intranet address of the target file; the cache party indicated by that intranet address then serves as the back-to-source cache party.
  • The back-to-source cache party is responsible for pulling the target file from the source site and storing its data content when the target file is not stored by any cache party (for example, when the target file is requested for the first time or has been cleared from every cache party by the clearing strategy). Subsequently, when the stored target file meets the clearing condition, the back-to-source cache party clears the target file's data from its own content collection.
  • The block location information in a block request may take the form url + block number + the content range corresponding to that number.
  • Cache parties that receive other block requests for the target file and also miss locally converge on the back-to-source cache party in the same way, read the required block content from it, and store a copy of that block content themselves. In this way, the node stores two copies of a file's content:
  • one complete copy on the back-to-source cache party used to converge back-to-source traffic, and one copy scattered in block form across different (non-back-to-source) cache parties.
  • Thereafter, the agent requests each block of the target file according to the routing strategy to achieve load balancing, while the complete file data stored by the back-to-source cache party
  • is cleared by the clearing strategy once the clearing condition is met; the clearing condition may be, but is not limited to, the file going unrequested for a set duration.
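The local-hit check with back-to-source fallback described in steps 401-403 can be sketched as follows (a hedged illustration: the helper names, the dict-based stores, and the simplified hash in `pick_cache` are assumptions standing in for the cache storage and consistent hashing of the embodiment):

```python
import hashlib

def pick_cache(key, cache_addrs):
    """Stand-in for consistent hashing: deterministically maps a key to one cache."""
    h = int(hashlib.md5(key.encode()).hexdigest(), 16)
    return cache_addrs[h % len(cache_addrs)]

def get_block(local_store, url, block_no, byte_range, cache_addrs, peers, origin_fetch):
    """Cache-side handling of a block request with back-to-source fallback."""
    key = (url, block_no)
    if key in local_store:                       # step 402: local hit
        return local_store[key]
    # Step 403: local miss -- hash the bare url to find the back-to-source cache
    origin_addr = pick_cache(url, cache_addrs)
    origin_store = peers[origin_addr]
    if url not in origin_store:                  # first request / cleared
        origin_store[url] = origin_fetch(url)    # pull the complete file once
    start, end = byte_range
    block = origin_store[url][start:end]
    local_store[key] = block                     # keep a block-form copy locally
    return block

# Demo: three caches; the file initially lives only at the (simulated) source site.
addrs = ["cache1", "cache2", "cache3"]
peers = {a: {} for a in addrs}           # back-to-source stores, keyed by url
source = {"http://example.com/f": b"0123456789"}
local = {}                               # this cache party's own block store
blk = get_block(local, "http://example.com/f", 1, (0, 5),
                addrs, peers, lambda u: source[u])
assert blk == b"01234"
assert ("http://example.com/f", 1) in local  # copy kept for the next request
```

On a repeat request the local copy answers directly, so only the first miss converges on the back-to-source cache party, matching the convergence behavior described above.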
  • Step 303 Send the segmented content to the agent.
  • The cache party may send the obtained block content to the agent that issued the block request.
  • Similarly, when the target file is requested for the first time, the file header request also misses locally, and the cache party that receives the file header request and misses converges on
  • the back-to-source cache party by the same principle as above. At this point the target file does not exist on the back-to-source cache party either, so the data content of the target file needs to be pulled from the source site; after the back-to-source cache party completes the pull and stores the complete data content, the cache party that received the file header request obtains the required file header from the back-to-source cache party and stores a copy itself.
  • Likewise, each cache party that receives a block request does not yet store the required block content, and thus converges on the back-to-source cache party, reads the required block content from the complete target file stored there,
  • and stores that content separately.
  • The following describes the processing logic of an application example in which the target file is obtained from a CDN node.
  • After obtaining the url request, the reverse proxy nginx performs the following processing to obtain the target content:
  • Step 1 Send a file header request carrying url and file header range to cache1 indicated by the file header internal network address;
  • Step 2 After cache1 receives the file header request, if back-to-source is needed, cache1 consistent-hashes the url to cache2 (if back-to-source is not needed, cache1 responds to nginx directly from its own storage);
  • Step 3 Cache2 pulls and caches the content of the file from the origin site, and returns the file header to cache1;
  • Step 4 Cache1 responds to nginx with the file header to inform nginx of the data length of the file body; at the same time, cache1 caches the file header;
  • Step 5 In the process of distributing the block request, nginx sends a block request containing the url and block range generated according to the data length of the file body to the cache3 corresponding to the block intranet address;
  • Step 6 If back-to-source is needed, cache3 consistent-hashes the url to cache2 (if back-to-source is not needed, cache3 responds to nginx directly from its own storage);
  • Step 7 Cache2 returns the corresponding block content to cache3;
  • Step 8 Cache3 responds to nginx with the segmented content, and cache3 caches the segmented content;
  • Steps 9-12 are similar to steps 5-8 and can be executed in parallel with steps 5-8, so they are not repeated here;
  • Step 13 Nginx assembles the block content returned by each cache to obtain the complete target file and feed it back to the client.
  • In the above solution, the back-to-source cache party pulls the complete data content of the file from the source site, and each cache party that receives a block request reads and stores the required block content from the back-to-source cache party.
  • This allows the block requests and routing of the target file's data content to take effect when the target file is requested again, achieving internal network load balance as quickly as possible.
  • Meanwhile, a single back-to-source cache party is used to converge back-to-source traffic, which avoids amplifying the back-to-source volume and therefore does not place a significant additional burden on the CDN node.
  • A cache party, such as the cache storage on a physical device in a CDN node, may cache one or more block contents of the same file, or cache both block contents and the file header of the same file.
  • the load balancing method applied to the cache side may further include the following processing before step 301:
  • Step 301' If the file header position information of the target file sent by the agent is obtained, the file body data length of the target file is obtained according to the file header position information, and the file body data length is sent to the agent.
  • the file header location information can be carried in the file header request, and the caching party can specifically obtain the file header request sent by the agent, and extract the file header location information from it.
  • the length of the file body data can be carried in the file header.
  • Specifically, the cache party can use the file header location information (such as url + file header number) as the index (key), obtain the corresponding file header (value) from its stored content collection, and feed it back to the agent that requested the file header.
  • The agent then extracts the file body data length from the obtained file header; for example, the agent determines the second number of block contents of the target file according to the file body data length and the set data length, and then splits the content request for the target file into that second number of block requests. For details, refer to the earlier description, which is not repeated here.
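The header-driven splitting just described, where the agent derives the second number of block requests from the file body data length and a set data length, might look like the following sketch (the 128 KB block length, the function name, and the request dict layout are assumptions for illustration):

```python
import math

SET_BLOCK_LEN = 128 * 1024   # assumed "set data length" per block (e.g. 128 KB)

def split_content_request(url, body_len, block_len=SET_BLOCK_LEN):
    """Split one content request into a second number of block requests."""
    count = math.ceil(body_len / block_len)      # the "second number"
    reqs = []
    for i in range(count):
        start = i * block_len
        end = min(start + block_len, body_len)   # content range within the body
        reqs.append({"url": url, "block_no": i + 1, "range": (start, end)})
    return reqs

reqs = split_content_request("http://example.com/f", 300 * 1024)
# A 300 KB body with 128 KB blocks yields 3 block requests; all but the last
# have equal length, so length differences stay under the set threshold.
assert len(reqs) == 3
assert reqs[0]["range"] == (0, 128 * 1024)
assert reqs[2]["range"] == (2 * 128 * 1024, 300 * 1024)
```

Each generated request then carries its block location information (url, block number, content range) and is routed to the cache party chosen for that block.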
  • an embodiment of the present disclosure also discloses a load balancing device. As shown in FIG. 7, when applied to an agent, the load balancing device includes:
  • the first obtaining unit 701 is configured to obtain a content request carrying a data resource address of the target file
  • the splitting unit 702 is configured to split the content request into at least one segment request including corresponding segment location information; wherein the segment location information is determined by the agent according to the data resource address;
  • the distributing unit 703 is configured to distribute the at least one block request to at least one cache party respectively corresponding to the block location information in the at least one block request;
  • the second acquiring unit 704 is configured to acquire at least one block content fed back by the at least one cache party that received the corresponding block request among the at least one block request; the absolute difference between the data lengths of different block contents is less than the set threshold, and in the case that the same file has multiple block contents, the first number of cache parties storing the multiple block contents is more than one;
  • the feedback unit 705 is configured to feed back the at least one piece of content to the requester.
  • the splitting unit 702 is specifically configured as follows:
  • a second number of block intranet addresses and block location information are determined, and a second number of block requests each including different block location information are generated.
  • the splitting unit 702 determining the file body data length of the target file includes:
  • the splitting unit 702 is further configured to: determine a second number of block intranet addresses according to the data resource address;
  • the data resource address includes the uniform resource locator of the target file
  • the splitting unit 702 determines, according to the data resource address, a second number of block intranet addresses or block location information, including:
  • the block intranet address or block location information of the block content is generated according to the uniform resource locator and the block number of the block content, and the block intranet address or block location information of the second number is obtained.
  • the splitting unit 702 generating the block intranet address or block location information of the block content according to the uniform resource locator and the block number of the block content includes:
  • the distribution unit 703 is specifically configured as follows:
  • the feedback unit 705 is specifically configured as follows:
  • the load balancing device includes:
  • the third obtaining unit 801 is configured to obtain a block request including target block location information sent by the agent, where the target block location information is one of at least one block location information determined by the agent according to the data resource address
  • the data resource address is carried in a content request received by the agent and set to request a target file, and the block request is set to request the block content corresponding to the target block location information in the target file;
  • the fourth obtaining unit 802 is configured to obtain block content according to the target block location information
  • the sending unit 803 is configured to send the block content to the agent.
  • the fourth acquiring unit 802 is specifically configured as follows:
  • a predetermined back-to-source process is performed, and the block content corresponding to the block request is obtained through the back-to-source process.
  • the fourth acquiring unit 802 performs predetermined back-to-origin processing, including:
  • the data resource address includes a uniform resource locator of the target file
  • the fourth obtaining unit 802 determining the back-to-source cache party corresponding to the data resource address in the target block location information includes:
  • The back-to-source cache party obtains and caches the data content of the target file from the target file's source site when the target file is not stored by any cache party; when the cached target file meets the clearing condition, the target file is cleared from the back-to-source cache party.
  • the device may further include:
  • The fifth obtaining unit 804 is configured to, before the block request is obtained, obtain the file header request sent by the agent that includes the file header location information of the target file, acquire the file body data length of the target file according to the file header location information, and send the file body data length to the agent.
  • the third acquiring unit 801 is specifically configured as follows:
  • The corresponding block intranet address is one of a second number of block intranet addresses determined by the agent according to the data resource address; determining the second number of block intranet addresses includes: the agent determines the block number of each of the second number of block contents, and generates the block intranet address of each block content according to the uniform resource locator and the block number, obtaining the second number of block intranet addresses.
  • An embodiment of the present disclosure also discloses a proxy device, which may be, but is not limited to, the reverse proxy nginx on any physical device (such as a server) in a CDN network node. FIG. 10 shows a schematic structural diagram of the proxy device.
  • The proxy device includes at least:
  • the first memory 1001 is configured to store the first computer instruction set
  • the first set of computer instructions can be implemented in the form of a computer program.
  • the first memory 1001 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
  • the first processor 1002 is configured to implement any load balancing method applied to the agent as described above by executing the instruction set stored in the first memory.
  • the first processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), or another programmable logic device.
  • the proxy device can also include components such as a communication interface and a communication bus.
  • the first memory, the first processor and the communication interface communicate with each other through the communication bus.
  • the communication interface is set to communicate between the proxy device and other devices (such as other physical devices in the CDN node).
  • the communication bus can be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc.
  • the communication bus can be divided into an address bus, a data bus, and a control bus.
  • The first processor in the proxy device executes the first computer instruction set stored in the first memory to implement block storage of files, dispersing file content relatively evenly across different cache parties.
  • When a requester requests a file, the content request for the file is mapped to at least one block request, and at least one block content that can form the complete target file is obtained from at least one cache party based on the at least one block request, thereby responding to the requester.
  • Because the block contents corresponding to the block requests are relatively uniform in size, the traffic of the requested file is distributed across different cache parties accordingly, so the intranet traffic of different cache parties is relatively balanced and does not create a bottleneck
  • for external network bandwidth usage, improving the external network service capability of networks such as CDNs.
  • the embodiment of the present disclosure also discloses a cache device.
  • the cache device can be, but is not limited to, cache storage on any physical device (such as a server) in the CDN network node.
  • the cache device includes at least:
  • Disk 1101 is configured to store the data content of files;
  • the second memory 1102 is configured to store a second computer instruction set
  • the second set of computer instructions can be implemented in the form of a computer program.
  • the second processor 1103 is configured to implement any load balancing method applied to the cache side as described above by executing the instruction set stored in the second memory.
  • The second processor 1103 executes the second computer instruction set in the second memory 1102 to implement block storage of files, dispersing file content relatively evenly across different cache parties, and at the same time implements block requests and routing of files, so that intranet bandwidth usage is evenly distributed, no bottleneck arises in external network bandwidth usage, and the external network service capability is improved.
  • An embodiment of the present disclosure also discloses a computer-readable storage medium storing the first computer instruction set; when the first computer instruction set is executed by a processor, any of the above load balancing methods applied to the agent is implemented.
  • An embodiment of the present disclosure also discloses another computer-readable storage medium storing the second computer instruction set; when the second computer instruction set is executed by a processor, any of the above load balancing methods applied to the cache party is implemented.
  • When run, the instructions stored in the above two computer-readable storage media implement block storage of files, dispersing file content relatively evenly across different cache parties, and implement block requests and routing of files, so that intranet bandwidth usage is evenly distributed, no bottleneck arises in external network bandwidth usage, and the external network service capability is improved.
  • the embodiment of the present disclosure also discloses a service node, which includes a plurality of physical devices, and the physical devices include a proxy party and a cache party;
  • the agent in the service node interacts with at least one cache party in the service node by executing any load balancing method applied to the agent as described above;
  • the cache party in the service node interacts with the agent in the service node by executing any load balancing method applied to the cache party as described above.
  • the service node can be a CDN node.
  • The aforementioned agent can be the reverse proxy nginx on a physical device (such as a server) in the CDN node,
  • and the cache party can be the cache storage on a physical device (such as a server) in the CDN node.
  • The present disclosure can be implemented by means of software plus a necessary general hardware platform. Based on this understanding, the technical solution of the present disclosure, in essence or the part contributing to the related art, can be embodied in the form of a software product.
  • The computer software product can be stored in a storage medium, such as ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions that enable a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present disclosure or in parts of the embodiments.
  • At present, the internal network bandwidth consumed by a physical device in a CDN network equals the external network bandwidth it consumes.
  • The unbalanced distribution of internal network bandwidth within a CDN node therefore limits the utilization of external network bandwidth and creates a bottleneck for its use,
  • so the maximum bandwidth consumed on the external network cannot reach saturation (different physical devices are configured with the same maximum available external network bandwidth), which correspondingly reduces the external network service capability of networks such as CDNs.
  • The embodiments of the present disclosure store files in blocks, dispersing file content relatively evenly across different cache parties in block form.
  • When the content of a file is requested, the content request is mapped to at least one block request, and at least one block content that can form the complete target file is obtained from at least one cache party based on the at least one block request, thereby responding to the requester. Because the block contents corresponding to the block requests are relatively uniform in size, the traffic of the requested file is distributed across different cache parties accordingly, so the intranet traffic of different cache parties is relatively balanced and does not create a bottleneck
  • for external network bandwidth usage, improving the external network service capability of networks such as CDNs.

Abstract

The present disclosure relates to a load balancing method and apparatus, a proxy device, a cache device and a serving node. The method comprises: storing a file in blocks, and specifically scattering the file content to different cache parties relatively evenly in a block form; and when a request party requests the file, mapping a content request of the file into at least one block request; and obtaining, on the basis of the at least one block request, from at least one cache party, at least one block content available for constituting a complete target file, so as to realize a response to the request party. As the size of the block content corresponding to each block request is relatively uniform, and accordingly, the traffic of the requested file is evenly distributed to different cache parties, the internal network traffic of different cache parties is relatively balanced, which causes no bottleneck to the use of the bandwidth of an external network, improving the external network service capability of a network such as a CDN.

Description

Load balancing method, apparatus, proxy device, cache device and service node
The present disclosure claims priority to Chinese patent application No. 202010555608.6, filed with the Chinese Patent Office on June 17, 2020 and entitled "Load balancing method, apparatus, proxy device, cache device and service node", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure belongs to the field of computer technology, and in particular relates to a load balancing method, apparatus, proxy device, cache device, and service node.
Background
A CDN (Content Delivery Network) provides multi-mirror caching of files running on the public network. A CDN network is composed of multiple CDN nodes, and a CDN node is a cluster composed of multiple physical devices (servers). Through functions such as load balancing, content distribution, and scheduling, the CDN network enables users to obtain the required content from a nearby server in the network, reducing network congestion and improving user access response speed and hit rate.
Each physical device in a CDN node contains a reverse proxy nginx and a cache storage. The reverse proxy nginx provides load balancing and request forwarding, forwarding a file content request such as a url (uniform resource locator) request to the cache of the physical device that stores the requested content. For the same url request, no matter which device's nginx handles it, the consistent hashing algorithm forwards it to the cache storage of the same device, which ensures that the content of one url is stored in the cache of a single device and reduces duplicate storage. That is, in the current technology, the reverse proxy nginx can only distribute the content of different urls (i.e., of different files) across different cache storages, while the content of the same url (file) is stored in the same cache.
Traffic between nginx and the caches travels over the internal network, but the content lengths corresponding to different urls usually differ, as do their access frequencies. A cache to which urls with large content lengths and high access frequencies are hashed consumes more internal network bandwidth, while other caches consume less. At present, in a CDN network, the internal network bandwidth consumed by a physical device equals its external network bandwidth; the unbalanced distribution of internal network bandwidth within a CDN node therefore limits the utilization of external network bandwidth and creates a bottleneck for its use, so the maximum bandwidth consumed on the external network cannot reach saturation (different physical devices are configured with the same maximum available external network bandwidth), correspondingly reducing the external network service capability of networks such as CDNs.
Summary
In view of this, embodiments of the present disclosure provide a load balancing method, apparatus, proxy device, cache device, and service node, with the aim of balancing the internal network bandwidth (consumed bandwidth) between different cache parties (different caches) within a service node such as a CDN node, breaking through the bottleneck in external network bandwidth usage, and improving the external network service capability of networks such as CDNs.
具体技术方案如下:The specific technical solutions are as follows:
A load balancing method applied to a proxy party, the method comprising: obtaining a content request carrying a data resource address of a target file; splitting the content request into at least one block request, each including corresponding block location information, the block location information being determined by the proxy party according to the data resource address; distributing the at least one block request to at least one caching party respectively corresponding to the block location information in the at least one block request; obtaining at least one block content fed back by the at least one caching party that received the corresponding block request in the at least one block request, wherein the absolute difference between the data lengths of different block contents is less than a predetermined threshold, and, in a case where the same file has multiple block contents, a first number of caching parties storing the multiple block contents is more than one; and feeding the at least one block content back to a requesting party.
A load balancing method applied to a caching party, the method comprising: obtaining a block request sent by a proxy party and including target block location information, the target block location information being one of at least one piece of block location information determined by the proxy party according to a data resource address, the data resource address being carried in a content request, received by the proxy party, for requesting a target file, and the block request being configured to request the block content of the target file corresponding to the target block location information; obtaining the block content according to the target block location information; and sending the block content to the proxy party.
A load balancing apparatus applied to a proxy party, the apparatus comprising: a first obtaining unit configured to obtain a content request carrying a data resource address of a target file; a splitting unit configured to split the content request into at least one block request, each including corresponding block location information, the block location information being determined by the proxy party according to the data resource address; a distributing unit configured to distribute the at least one block request to at least one caching party respectively corresponding to the block location information in the at least one block request; a second obtaining unit configured to obtain at least one block content fed back by the at least one caching party that received the corresponding block request in the at least one block request, wherein the absolute difference between the data lengths of different block contents is less than a predetermined threshold, and, in a case where the same file has multiple block contents, a first number of caching parties storing the multiple block contents is more than one; and a feedback unit configured to feed the at least one block content back to a requesting party.
A load balancing apparatus applied to a caching party, the apparatus comprising: a third obtaining unit configured to obtain a block request sent by a proxy party and including target block location information, the target block location information being one of at least one piece of block location information determined by the proxy party according to a data resource address, the data resource address being carried in a content request, received by the proxy party, for requesting a target file, and the block request being configured to request the block content of the target file corresponding to the target block location information; a fourth obtaining unit configured to obtain the block content according to the target block location information; and a sending unit configured to send the block content to the proxy party.
A proxy device, comprising: a first memory configured to store a first computer instruction set; and a first processor configured to implement, by executing the instruction set stored in the first memory, the load balancing method applied to the proxy party as described in any one of the above.
A cache device, comprising: a magnetic disk configured to store the data content of files; a second memory configured to store a second computer instruction set; and a second processor configured to implement, by executing the instruction set stored in the second memory, the load balancing method applied to the caching party as described in any one of the above.
A computer-readable storage medium storing a first computer instruction set which, when executed by a processor, implements the load balancing method applied to the proxy party as described in any one of the above.
A computer-readable storage medium storing a second computer instruction set which, when executed by a processor, implements the load balancing method applied to the caching party as described in any one of the above.
A serving node, comprising multiple physical devices, each including a proxy party and a caching party, wherein a proxy party in the serving node interacts with at least one caching party in the serving node by executing the load balancing method applied to the proxy party as described in any one of the above, and a caching party in the serving node interacts with a proxy party in the serving node by executing the load balancing method applied to the caching party as described in any one of the above.
In the load balancing method, apparatus, proxy device, cache device, and serving node provided by the embodiments of the present disclosure, a file is stored in blocks, and the file content is scattered in block form relatively evenly across different caching parties. When a requesting party requests the file, the content request for the file is mapped to at least one block request, and, on the basis of the at least one block request, at least one block content that can be used to form the complete target file is obtained from at least one caching party, so as to respond to the requesting party. Because the block contents corresponding to the block requests are of relatively uniform size, the traffic of the requested file is spread evenly across the different caching parties; the intranet traffic of the different caching parties is therefore relatively balanced, does not create a bottleneck for extranet bandwidth usage, and improves the extranet service capability of networks such as a CDN.
Description of the Drawings
FIG. 1 is a schematic flowchart of a load balancing method applied to a proxy party according to an embodiment of the present disclosure;
FIG. 2 is another schematic flowchart of the load balancing method applied to a proxy party according to an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of a load balancing method applied to a caching party according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of a process for obtaining block content according to target block location information according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of the processing logic of an application example of obtaining a target file from a CDN node according to an embodiment of the present disclosure;
FIG. 6 is another schematic flowchart of the load balancing method applied to a caching party according to an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of a load balancing apparatus applied to a proxy party according to an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a load balancing apparatus applied to a caching party according to an embodiment of the present disclosure;
FIG. 9 is another schematic structural diagram of the load balancing apparatus applied to a caching party according to an embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a proxy device according to an embodiment of the present disclosure;
FIG. 11 is a schematic structural diagram of a cache device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present disclosure without creative effort fall within the scope of protection of the present disclosure.
The embodiments of the present disclosure disclose a load balancing method, an apparatus, a proxy device, a cache device, a storage medium, and a serving node. By storing files in blocks and scattering file content in block form relatively evenly across different caching parties, the traffic consumed by a content request for a file is spread evenly across the caching parties. In an intranet application scenario, such as the interaction between the reverse proxy nginx and the cache stores within a CDN node, the intranet bandwidth consumed by the different caching parties is thus distributed relatively evenly. Unlike the related art, in which the intranet bandwidth of the caching parties becomes unbalanced because the content of different files is excessively large or small or is accessed with different frequencies, the distribution of intranet bandwidth usage here does not create a bottleneck for the use of extranet bandwidth.
The load balancing method provided by the embodiments of the present disclosure is described first. Referring to FIG. 1, which shows a schematic flowchart of the load balancing method when applied to a proxy party, the method may include the following steps:
Step 101: obtain a content request carrying a data resource address of a target file.
A CDN node is a cluster composed of multiple physical devices (servers), each containing a reverse proxy nginx and a cache store. As the entity executing the method shown in FIG. 1, the proxy party may be, but is not limited to, the reverse proxy nginx on any physical device (such as a server) in a CDN node, responsible for load balancing and request forwarding among the cache stores within its node, such as a CDN node.
When a client running on a user's electronic device, such as a mobile phone or tablet computer, needs to obtain the content of a target file from the network, for example the page content of a web page, it sends a content request for the target file to the CDN node assigned to it (through load balancing, content distribution, scheduling, and similar functions, the CDN network lets a user obtain the required content from a nearby server in the network). The content request carries at least the data resource address of the target file; in a public network environment, the data resource address carried is usually the file's uniform resource locator (URL). The reverse proxy nginx of a physical device in the CDN node thus receives a content request carrying the data resource address, such as a URL, of the target file.
Step 102: split the content request into at least one block request, each including corresponding block location information, the block location information being determined by the proxy party according to the data resource address.
In the embodiments of the present disclosure, a file is stored in block form scattered across different caching parties (in the specific case where the file size does not exceed a set data length, the file is not scattered but is stored directly on one caching party). When the same file has multiple block contents, the first number of caching parties storing the multiple block contents is more than one, and the absolute difference between the data lengths of different block contents is less than a set threshold, so as to keep the bandwidth usage of the different caching parties evenly distributed.
In one embodiment, a fixed data length may be set (the length may of course be changed according to policy) as the smallest unit for splitting and scattering a file; the file content is scattered and stored in units of this data length, and no stored block content exceeds the set data length.
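The fixed-length splitting just described can be sketched as follows; this is a minimal illustration assuming a 128 KB block length (the value used as an example later in this document), not a prescribed implementation.

```python
BLOCK_SIZE = 128 * 1024  # assumed fixed block length of 128 KB

def block_ranges(body_length: int) -> list[tuple[int, int]]:
    """Split a file body into (start, end) byte ranges, end exclusive.
    Every block except possibly the last has exactly BLOCK_SIZE bytes,
    so block sizes differ by less than one block length."""
    return [(start, min(start + BLOCK_SIZE, body_length))
            for start in range(0, body_length, BLOCK_SIZE)]

# A 300 KB file body yields blocks of 128 KB, 128 KB, and 44 KB.
ranges = block_ranges(300 * 1024)
```

Because every block is capped at the same length, the absolute difference between any two block sizes stays below the block length, which is what keeps per-block traffic roughly uniform.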
For an obtained content request for the complete target file, the proxy party splits it into at least one block request so as to obtain the content of the different blocks of the target file.
The block location information carried in a block request enables the corresponding caching party, such as the cache store of a physical device in a CDN node, to index and read the block content corresponding to the block request from its content collection.
Step 103: distribute the at least one block request to at least one caching party respectively corresponding to the block location information in the at least one block request.
Step 104: obtain at least one block content fed back by the at least one caching party that received the corresponding block request in the at least one block request.
After receiving a block request, the caching party indexes and reads the corresponding block content from its stored content collection according to the block location information carried in the request, and feeds the block content back to the proxy party, such as the reverse proxy nginx, that sent the block request.
The absolute difference between the data lengths of different block contents is less than a predetermined threshold, so that the load on the caching parties is balanced as far as possible. When the same file has multiple block contents, the first number of caching parties storing the multiple block contents is more than one.
Step 105: feed the at least one block content back to the requesting party.
After obtaining, from the at least one caching party, the at least one block content corresponding to the at least one block request, the proxy party may, in one embodiment, assemble the at least one block content into the target file required by the requesting party and feed the target file back to the requesting party.
In this implementation, the proxy party assembles the block contents in order to obtain the complete target file requested by the requesting party, and then feeds the assembled complete target file back to the requesting party; for example, the reverse proxy nginx in a CDN node that received the content request for the target file feeds the complete target file, assembled in order, back to the requesting client.
Alternatively, in one embodiment, the proxy party may directly send the at least one block content to the requesting party, so that the requesting party obtains the requested target file by assembling the at least one block content. When there are multiple block contents, in order for the requesting party to assemble them in order, the proxy party may also carry a corresponding block number with each block content when sending them.
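The in-order assembly in step 105 amounts to sorting the received blocks by their block number and concatenating them; a minimal sketch, assuming each block arrives tagged with its number:

```python
def assemble(blocks: dict[int, bytes]) -> bytes:
    """Reassemble a file body from a mapping of block number -> block
    content, with body blocks numbered consecutively upward."""
    return b"".join(blocks[n] for n in sorted(blocks))

# Blocks may arrive from different caches out of order.
received = {2: b"world", 1: b"hello ", 3: b"!"}
body = assemble(received)
```

The same routine works whether the assembly happens at the proxy party or at the requesting party, which is why the embodiment above can place it at either end.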
In this embodiment, a file is stored in blocks and its content is scattered relatively evenly in block form across different caching parties. When a requesting party requests the file, the content request for the file is mapped to at least one block request, and, on the basis of the at least one block request, at least one block content that can be used to form the complete target file is obtained from at least one caching party, so as to respond to the requesting party. Because the block contents corresponding to the block requests are of relatively uniform size, the traffic of the requested file is spread evenly across the different caching parties; the intranet traffic of the different caching parties is therefore relatively balanced, does not create a bottleneck for extranet bandwidth usage, and improves the extranet service capability of networks such as a CDN.
Referring to FIG. 2, in one embodiment, the load balancing method of the present disclosure applied to the proxy party can be implemented by the following process:
Step 201: obtain a content request carrying the data resource address of a target file.
In this embodiment, the data resource address of the target file includes the URL of the target file. Step 201 is otherwise the same as step 101 in the foregoing embodiment; refer to the description of step 101 above, which is not repeated here.
Step 202: determine the file body data length of the target file.
In back-end storage, a file consists of two parts: a file header and a file body. The file header records descriptive information about the file, such as the data length of the content in the file body; the file body contains the file's actual data content.
After receiving a content request carrying the data resource address, such as a URL, of the target file, the proxy party first reads the file header of the target file from a caching party in order to extract the actual data length of the content in the file body, which facilitates splitting the content request into several block requests and subsequent processing. For brevity, the actual data length of the content in the file body of the target file is referred to below simply as the "file body data length".
In this step, the process of determining the file body data length of the target file may include:
1) determining the file header intranet address and the file header location information of the target file according to the data resource address;
In one embodiment, after the file header location information is determined, a file header request including the file header location information may be generated; that is, the file header location information is carried in the file header request.
To obtain the file header of the target file, the proxy party, such as the reverse proxy nginx that received the content request, on the one hand generates the intranet address of the file header from the URL carried in the content request, which is used to locate and route to the caching party storing the file header of the target file, and on the other hand generates the file header location information from the URL, which is used as an index for indexing and hitting the file header of the target file on the located caching party.
As an optional implementation, the process of generating the file header intranet address of the target file includes: concatenating the URL carried in the content request for the target file with the number of the file header to obtain concatenated information; and performing a consistent hash operation on the concatenated information to obtain the file header intranet address of the target file.
For a file, this embodiment sets the number of the file header to 0, and numbers the blocks of the file body in increasing order from 1, so that the n-th block of the file body is numbered n, where n is a natural number greater than 1. It is easy to understand that this numbering rule is not unique; many other numbering rules can achieve a similar purpose. For example, the number of the file header may be set to 1 with the block numbers of the file body increasing in order from 2, or the file header may be numbered a with the block numbers increasing in order from b, and so on. The file header/block number may of course also be a character string composed of multiple digits and/or letters; these variants are not enumerated one by one.
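The concatenate-then-consistent-hash routing of step 1) might look like the sketch below. The cache intranet addresses, the separator characters, and the hash-ring construction with virtual points are all assumptions for illustration; the patent only specifies that the URL and the header/block number are concatenated and consistently hashed to an intranet address.

```python
import hashlib
from bisect import bisect

# Hypothetical cache intranet addresses within one serving node.
CACHES = ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]

def _point(s: str) -> int:
    # Hash function used to place keys and caches on the ring.
    return int(hashlib.md5(s.encode("utf-8")).hexdigest(), 16)

# Consistent-hash ring: 100 virtual points per cache, sorted by position.
RING = sorted((_point(f"{c}#{v}"), c) for c in CACHES for v in range(100))
KEYS = [p for p, _ in RING]

def route(url: str, number: int) -> str:
    """Concatenate the URL with the header/block number (header = 0,
    body blocks = 1..n) and walk clockwise to the next cache point."""
    idx = bisect(KEYS, _point(f"{url}:{number}")) % len(RING)
    return RING[idx][1]

# The header (0) and body blocks of one file can land on different caches.
addrs = [route("http://example.com/a.bin", n) for n in range(4)]
```

With virtual points, adding or removing a cache moves only the keys adjacent to its points, which is the usual reason consistent hashing is used for this kind of routing.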
After the URL is concatenated with the header/block number, the concatenated information for the file header and for different blocks differs only slightly, so performing a consistent hash operation on it directly tends to map the file header and the different blocks onto only a few caching parties, rather than spreading the block contents of one file evenly over more caching parties, such as more cache stores within a CDN node. According to the inventors' research and verification, when consistent hashing is applied directly to the concatenation of the URL and the number (of the file header or block), the hash results follow a normal distribution, so a large number of blocks are concentrated on a few caching parties.
To improve this situation, in this embodiment, after the URL and the header/block number are concatenated, a message digest is computed over the concatenation result, and the consistent hash is applied to the computed digest; in this way the file header and the different blocks of one file can be spread evenly over more caching parties.
Thus, as another optional implementation, the process of generating the file header intranet address of the target file may also be: concatenating the URL carried in the content request for the target file with the number of the file header to obtain concatenated information; computing a digest of the concatenated information with a predetermined digest algorithm; and performing a consistent hash operation on the computed digest to obtain the file header intranet address of the target file.
The digest algorithm used may be, but is not limited to, MD5 (Message-Digest Algorithm 5), MD4 (Message-Digest Algorithm 4), and the like.
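The effect of the digest step can be seen in a small sketch (the separator and MD5 choice are assumptions): consecutive block numbers produce near-identical concatenations, but their digests are completely unrelated strings, so the subsequent consistent hash sees well-scattered inputs.

```python
import hashlib

def digest_key(url: str, number: int) -> str:
    """Digest the URL+number concatenation (MD5 here, per the example
    algorithms in the text) before it is fed to the consistent hash."""
    return hashlib.md5(f"{url}:{number}".encode("utf-8")).hexdigest()

# Inputs differing only in the trailing number yield unrelated digests.
keys = [digest_key("http://example.com/a.bin", n) for n in range(3)]
```

This avalanche property of cryptographic digests is what breaks up the clustering that direct hashing of the near-identical concatenations produces.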
In addition, in one embodiment, the process of generating the file header location information of the target file may include: assembling the URL of the target file with at least one of the following two items: the number of the file header, and the position of the file header within the target file, to obtain the file header location information.
That is, the file header location information of the target file must include at least the URL of the target file, plus at least one of the number of the file header (such as "0") and the position of the file header within the target file (such as the file header's corresponding range in the file, 0 KB to 128 KB).
2) sending the file header location information to the caching party indicated by the file header intranet address.
In implementation, a file header request carrying the file header location information may be sent to the caching party indicated by the file header intranet address; the file header intranet address may also be carried in the file header request as part of the file header location information, which is not limited here.
3) obtaining the file body data length of the target file, determined and fed back by the caching party according to the file header location information.
In one embodiment, the file header determined and fed back by the caching party according to the file header location information may be obtained, the file header carrying the file body data length of the target file.
After receiving the file header request for the target file, the caching party indicated by the file header intranet address indexes the corresponding file header in its cached data collection according to the file header location information, such as the URL and file header number, carried in the request, and reads it.
The proxy party receives the file header of the target file read and fed back by the caching party, and reads the file body data length of the target file from the received file header.
Step 203: determine, according to a predetermined data length and the file body data length of the target file, a second number of block contents included in the file body of the target file.
The predetermined data length may be, for example, 128 KB or another data length different from 128 KB.
Taking 128 KB as an example, and assuming the file body data length of the target file is h, the second number of block contents included in the file body of the target file can be determined as ceil(h / 128 KB), that is, h divided by the predetermined data length, rounded up.
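The ceiling division above can be computed as follows, using the 128 KB example value from the text:

```python
BLOCK_SIZE = 128 * 1024  # the 128 KB example predetermined data length

def second_number(body_length_h: int) -> int:
    """Number of block contents in the file body: ceil(h / 128 KB),
    computed with integer ceiling division."""
    return -(-body_length_h // BLOCK_SIZE)
```

For instance, a 300 KB file body yields 3 blocks, an exactly 128 KB body yields 1, and 128 KB plus one byte yields 2.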
Step 204: Determine a second quantity of pieces of block location information according to the data resource address, and generate a second quantity of block requests, each including different block location information.
In implementation, the cache party storing the corresponding block content of the target file must first be located and routed to before that block content can be obtained from it according to the block location information. Therefore, in addition to determining the second quantity of pieces of block location information, the following processing may also be included: determining a second quantity of block intranet addresses according to the data resource address.
The block intranet address of a block content serves to locate and route to the cache party that stores the corresponding block content of the target file.
The block location information of a block content serves as an index for indexing and hitting the corresponding block content of the target file on the located cache party. In one implementation of this embodiment, the cache party — e.g., the cache storage on a physical device within a CDN node — stores the file header and block contents of a file in key-value form, where the location information of the file header/block (e.g., url + number + the range corresponding to that number) is used as the "key" of the data content (i.e., the "value" of the file header/block content). Based on this storage scheme, once the location information of a file header or block content is obtained, that location information can be used as the "key" to retrieve the corresponding "value".
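The key-value layout described above can be sketched with an in-memory dictionary standing in for the cache node's storage; the exact key format (here url + block number + that number's range, joined with "#") is an illustrative assumption:

```python
store = {}  # in-memory stand-in for the cache node's key-value storage

def make_key(url: str, number: int, content_range: str) -> str:
    # Location information (url + block number + range) serves as the "key".
    return f"{url}#{number}#{content_range}"

def put(url, number, content_range, value):
    store[make_key(url, number, content_range)] = value

def get(url, number, content_range):
    # The data content is the "value" looked up by the location-information key.
    return store.get(make_key(url, number, content_range))
```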
Similar to the generation of the file header intranet address, as an optional implementation, the process of generating a block intranet address may include: splicing the url of the target file with the block number of the block content (e.g., 1, 2, 3, …, n, where n is a natural number greater than 1) to obtain splicing information; and performing a consistent hash operation on the splicing information to obtain the block intranet address of the block content.
To spread the different block contents of the same file more evenly across more cache parties and achieve a higher degree of load balancing, as another optional implementation, the block intranet address of a block content may also be generated as follows: splice the url of the target file with the block number of the block content to obtain splicing information; compute a digest of the splicing information using a predetermined digest algorithm; and perform a consistent hash operation on the resulting information digest to obtain the block intranet address of the block content.
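The second variant — digest the splice, then consistent-hash the digest onto the cache nodes — can be sketched as below. MD5 and the virtual-node ring layout are illustrative choices: the text only specifies a "predetermined digest algorithm" and a "consistent hash operation", and the node addresses are hypothetical.

```python
import hashlib
from bisect import bisect

NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical cache intranet addresses

# A simple consistent-hash ring with 100 virtual points per node.
ring = sorted(
    (int(hashlib.md5(f"{node}#{v}".encode()).hexdigest(), 16), node)
    for node in NODES
    for v in range(100)
)
points = [p for p, _ in ring]

def block_intranet_address(url: str, block_number: int) -> str:
    splice = f"{url}{block_number}"                    # splicing information
    digest = hashlib.md5(splice.encode()).hexdigest()  # information digest
    idx = bisect(points, int(digest, 16)) % len(ring)  # walk the ring clockwise
    return ring[idx][1]
```

Because the digest spreads adjacent block numbers across the hash space, blocks of the same file tend to land on different nodes, which is the stated goal of this variant.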
The process of generating the block location information of a block content may include: assembling the url of the target file with at least one of the following two items — the block number of the block content, and the content position of the block content within the target file — to obtain the block location information of the block content.
That is, the block location information of a block content of the target file must include at least the url of the target file, and must also include at least one of the block number of the block content and the content position of the block content within the target file (e.g., the content range corresponding to number x in the file body: 128k–256k).
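Assembling block location information from the url, the block number, and that number's content range can be sketched as follows; the field order and "#" separator are assumptions, as is numbering blocks from 1:

```python
CHUNK = 128 * 1024  # predetermined block length, as in the running example

def block_location(url: str, number: int) -> str:
    # url + block number + the content range that number maps to in the file body.
    start = (number - 1) * CHUNK
    end = number * CHUNK - 1
    return f"{url}#{number}#{start}-{end}"
```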
Step 205: Distribute the at least one block request to the cache party indicated by the block intranet address corresponding to the respective block request.
After receiving a block request, the cache party obtains the block content indicated by the block location information in the request and returns it to the proxy — for example, by using the block location information as the "key", retrieving the corresponding "value", and returning it.
Step 206: Obtain at least one block content returned by at least one cache party that respectively received a corresponding one of the at least one block request.
The absolute difference between the data lengths of different block contents is less than a set threshold, and when the same file has multiple block contents, the first quantity of cache parties storing those block contents is also multiple.
Step 207: Return the at least one block content to the requesting party.
In one implementation, after obtaining from at least one cache party the at least one block content corresponding to the at least one block request, the proxy may assemble the at least one block content into the target file required by the requesting party, and return the target file to the requesting party.
In this implementation, the proxy may assemble the block contents in order based on their corresponding numbers (e.g., 1, 2, 3, …, n, where n is a natural number greater than 1) to obtain the complete target file requested by the requesting party, and return the assembled complete target file — for example, the reverse proxy nginx in the CDN node that received the content request for the target file returns the assembled complete target file to the requesting client.
Alternatively, in one implementation, the proxy may send the at least one block content directly to the requesting party, so that the requesting party obtains the requested target file by assembling the at least one block content. When there are multiple block contents, to help the requesting party assemble them in order, the proxy may also attach the corresponding block number to each block content when sending them.
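The ordered reassembly step can be sketched in a few lines — sort the returned (block number, bytes) pairs by number and concatenate:

```python
def assemble(blocks):
    # blocks: iterable of (block_number, data) pairs, possibly out of order.
    # Sorting by block number restores the original byte order of the file body.
    return b"".join(data for _, data in sorted(blocks))
```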
In this embodiment, the file is stored in blocks so that the file content is spread relatively evenly across different cache parties, and a content request for the file is mapped to at least one block request. This balances the traffic for the requested file across different cache parties, makes intranet bandwidth usage as uniform as possible, and improves the external network service capability of networks such as CDNs. In addition, this embodiment computes a digest of the splicing information formed from the url and the file header/block number, and performs consistent hashing on the resulting digest, achieving a more balanced distribution of the different block contents of the same file across more cache parties, which further reduces the extent to which intranet bandwidth usage limits external network bandwidth utilization.
Next, the load balancing method applied to the cache party according to an embodiment of the present disclosure is described. Referring to FIG. 3, which shows a schematic flowchart of the load balancing method when applied to the cache party, the method includes the following steps:
Step 301: Obtain a block request sent by the proxy that includes target block location information.
The target block location information is one of at least one piece of block location information determined by the proxy according to the data resource address; the data resource address is carried in a content request for the target file received by the proxy, and the block request is used to request the block content of the target file corresponding to the target block location information.
In this step, specifically, the block request sent by the proxy based on the corresponding block intranet address may be obtained.
The corresponding block intranet address is one of a second quantity of block intranet addresses determined by the proxy according to the data resource address. The determination of the second quantity of block intranet addresses includes: the proxy determines the block number of each of the second quantity of block contents, and generates the block intranet address of each block content according to the uniform resource locator and the block number of that block content, thereby obtaining the second quantity of block intranet addresses. For a more specific description of this determination process, refer to the relevant description above, which is not repeated here.
Combined with the foregoing description, it can be seen that the target block location information in the block request may include the url of the complete target file to which the block content belongs, and also at least one of the number of the block content and the content position corresponding to that number within the target file (e.g., the content range).
Step 302: Obtain the block content according to the target block location information.
As shown in FIG. 4 (where "Y" denotes "yes" and "N" denotes "no"), the process of obtaining the block content according to the target block location information may be implemented as:
Step 401: Determine whether the block content indicated by the target block location information exists locally on the cache party.
Based on the operation and maintenance characteristics of a CDN network, when a file's data content is requested for the first time, or has been cleared based on a clearing policy, the data content is not cached in the CDN network. This causes a hit failure for the file's data content in the CDN network, and the CDN network accordingly needs to pull (i.e., obtain) the data from the file's data origin site and cache it. (Given the protocols and operational characteristics of CDN networks, in the embodiments of the present disclosure, storing data content on a cache party is equivalent to caching data content on that cache party.)
In view of this characteristic, in this embodiment, after obtaining a block request, the cache party — e.g., the cache storage on a physical device within a CDN node — first uses the target block location information carried in the request as an index to determine whether the block content indicated by that information exists locally.
Step 402: If it is determined to exist, obtain the block content locally.
If the determination result indicates that it exists, the cache party may use the target block location information carried in the request as an index to look up and read the corresponding block content from its stored content set.
Step 403: If it is determined not to exist, perform predetermined back-to-origin processing, and obtain the block content corresponding to the block request through the back-to-origin processing.
If the determination result indicates that it does not exist, back-to-origin processing is performed to obtain the requested block content.
The back-to-origin processing includes:
1) Determining the back-to-origin cache party corresponding to the url in the target block location information.
Specifically, the cache party that received the block request and failed the local hit extracts the url — i.e., the url of the target file to which the block belongs — from the block location information of the block request, performs a consistent hash operation on the extracted url to obtain the intranet address of the target file, and takes the cache party indicated by that intranet address as the back-to-origin cache party.
The back-to-origin cache party is responsible for pulling and storing the data content of the target file from its origin site when the target file is not stored on any cache party (e.g., when the target file is requested for the first time, or has been cleared from all cache parties based on a clearing policy). Subsequently, when the stored target file meets the clearing condition, the back-to-origin cache party clears the target file's data from its own content set.
2) Obtaining the block content corresponding to the block request from the back-to-origin cache party, and caching the obtained block content at the corresponding location on the cache party indicated by the target block location information.
In implementation, the block location information in the block request (e.g., url + block number + the content range corresponding to that number) may likewise be used as the "key" to obtain the corresponding block content from the back-to-origin cache party, and the obtained block content is cached at the corresponding location on the cache party indicated by the block intranet address/block location information — that is, the read block content is cached on the cache party that failed the local hit. Afterwards, when the same block content of the target file is requested again, the cache party pointed to by the block intranet address already stores a copy of that block content, so the local hit can succeed.
Similarly, the cache parties receiving the other block requests for the target file will also fail their local hits and converge to the back-to-origin cache party, read the required block contents from it, and each store a copy. In this way, the content of one file is stored twice within the node: one complete copy resides on the back-to-origin cache party used for converged back-to-origin, and the other copy is scattered in block form across the various other (non-back-to-origin) cache parties. While the two copies coexist, the proxy requests the individual block contents of the target file in a distributed, block-wise manner based on the routing policy, thereby achieving load balancing; the complete data content stored on the back-to-origin cache party is cleared once the clearing condition is met. The clearing condition may be, but is not limited to, the content going unrequested for a set duration.
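The hit/miss convergence described above can be sketched as follows. `origin_cache_for` (the consistent hash of the url alone) and `fetch_from` (reading from another node, which itself pulls from the origin site if needed) are hypothetical callables, not names from the patent:

```python
def serve_block(local: dict, key: str, url: str, origin_cache_for, fetch_from):
    if key in local:                  # local hit succeeds
        return local[key]
    origin = origin_cache_for(url)    # converge on the back-to-origin cache party
    data = fetch_from(origin, key)    # that node pulls from the source if it must
    local[key] = data                 # keep a copy so the next request hits locally
    return data
```

Note that only the first request for a block reaches the back-to-origin cache party; subsequent requests for the same key are served locally.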
Step 303: Send the block content to the proxy.
In one implementation, after obtaining the block content, the cache party may send the obtained block content to the proxy that issued the block request.
It should be noted that for a target file not stored on any cache party (i.e., not stored in the CDN node), the local hit for its file header will likewise fail, and the cache party that received the file header request and failed the local hit converges to the back-to-origin cache party based on the same principle as above. In this case, the target file does not exist on the back-to-origin cache party either, so the data content of the target file must be pulled from the origin site. After the back-to-origin cache party has pulled and stored the complete data content, the cache party that received the file header request obtains the required file header from the back-to-origin cache party and stores a copy itself. Subsequently, when block requests continue to be distributed to different cache parties to obtain the block contents of the target file, each cache party receiving a block request does not yet store the required block content and thus converges to the back-to-origin cache party, reads its required block content from the complete target file stored there, and the blocks are stored in a scattered manner.
As shown in FIG. 5, the processing logic of an application example of obtaining the target file from a CDN node is provided, in which the reverse proxy nginx, after obtaining the url request, performs the following processing to obtain the target content:
Step 1: nginx sends a file header request carrying the url and the file header range to cache1, indicated by the file header intranet address.
Step 2: After cache1 receives the file header request, if back-to-origin is needed, it consistent-hashes the url to cache2 (if not, it responds to nginx directly from its own storage).
Step 3: cache2 pulls and caches the file content from the origin site, and returns the file header to cache1.
Step 4: cache1 responds to nginx with the file header, informing nginx of the file body data length; at the same time, cache1 caches the file header.
Step 5: During block request distribution, nginx sends one of the block requests — generated according to the file body data length and carrying the url and the block range — to cache3, corresponding to that block's intranet address.
Step 6: If back-to-origin is needed, cache3 consistent-hashes the url to cache2 (if not, it responds to nginx directly from its own storage).
Step 7: cache2 returns the corresponding block content to cache3.
Step 8: cache3 responds to nginx with the block content, and at the same time caches it.
Steps 9-12: These are similar to steps 5-8, and may be executed in parallel with steps 5-8; they are not repeated here.
Step 13: nginx assembles the block contents returned by the caches to obtain the complete target file, and returns it to the client.
In this embodiment, during converged back-to-origin, the back-to-origin cache party pulls the complete data content of the file from the origin site, and each cache party receiving a block request reads and stores its required block content from the back-to-origin cache party. This enables block-wise requesting and routing of the target file's data content when the target file is requested again, achieving intranet load balancing as quickly as possible. In addition, using a single back-to-origin cache party for converged back-to-origin does not significantly amplify the back-to-origin volume and does not impose a high additional burden on the CDN node.
In practical applications, based on the load balancing policy, one cache party — e.g., the cache storage on a physical device within a CDN node — may cache one or more block contents of the same file, or may cache both block contents and the file header of the same file. Accordingly, referring to FIG. 6, the load balancing method applied to the cache party may further include the following processing before step 301:
Step 301': If the file header location information of the target file sent by the proxy is obtained, obtain the file body data length of the target file according to the file header location information, and send the file body data length to the proxy.
The file header location information may be carried in a file header request; specifically, the cache party may obtain the file header request sent by the proxy and extract the file header location information from it.
The file body data length, in turn, may be carried in the file header.
Specifically, the cache party may use the file header location information (e.g., url + file header number) as the index (key) to obtain the corresponding file header (value) from its stored content set, and return it to the proxy that issued the file header request, so that the proxy can extract the file body data length from the obtained file header for use — for example, the proxy determines the second quantity of block contents of the target file according to the file body data length and the set data length, and then splits the content request for the target file into that second quantity of block requests. For details, refer to the foregoing description, which is not repeated here.
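The proxy-side sequence this paragraph describes — read the body length out of the cached header, then derive the block requests — can be sketched as below. The header layout and the "#header" key suffix are assumptions for illustration:

```python
import math

CHUNK = 128 * 1024  # the set data length

def split_request(header_store: dict, url: str):
    # Header location information (url + header number) acts as the key.
    header = header_store[f"{url}#header"]
    n = math.ceil(header["body_length"] / CHUNK)  # second quantity of blocks
    # One block request key per block number, 1..n.
    return [f"{url}#{i}" for i in range(1, n + 1)]
```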
Corresponding to the above load balancing method, an embodiment of the present disclosure further discloses a load balancing apparatus. As shown in FIG. 7, when applied to the proxy, the load balancing apparatus includes:
a first obtaining unit 701, configured to obtain a content request carrying the data resource address of a target file;
a splitting unit 702, configured to split the content request into at least one block request including corresponding block location information, where the block location information is determined by the proxy according to the data resource address;
a distributing unit 703, configured to distribute the at least one block request to at least one cache party respectively corresponding to the block location information in the at least one block request;
a second obtaining unit 704, configured to obtain at least one block content returned by at least one cache party that respectively received a corresponding one of the at least one block request, where the absolute difference between the data lengths of different block contents is less than a set threshold, and when the same file has multiple block contents, the first quantity of cache parties storing those block contents is also multiple; and
a feedback unit 705, configured to return the at least one block content to the requesting party.
In an optional implementation of the embodiment of the present disclosure, the splitting unit 702 is specifically configured to:
determine the file body data length of the target file;
determine, according to a predetermined data length and the file body data length, a second quantity of block contents included in the file body of the target file; and
determine a second quantity of block intranet addresses and pieces of block location information according to the data resource address, and generate a second quantity of block requests each including different block location information.
In an optional implementation of the embodiment of the present disclosure, the splitting unit 702 determining the file body data length of the target file includes:
determining the file header intranet address and file header location information of the target file according to the data resource address;
sending the file header location information to the cache party indicated by the file header intranet address; and
obtaining the file body data length of the target file determined and returned by the cache party according to the file header location information.
In an optional implementation of the embodiment of the present disclosure, the splitting unit 702 is further configured to determine a second quantity of block intranet addresses according to the data resource address;
the data resource address includes the uniform resource locator of the target file; and
the splitting unit 702 determining a second quantity of block intranet addresses or pieces of block location information according to the data resource address includes:
determining the block number of each of the second quantity of block contents; and
generating the block intranet address or block location information of each block content according to the uniform resource locator and the block number of that block content, thereby obtaining a second quantity of block intranet addresses or pieces of block location information.
In an optional implementation of the embodiment of the present disclosure, the splitting unit 702 generating the block intranet address or block location information of a block content according to the uniform resource locator and the block number of the block content includes:
splicing the uniform resource locator and the block number of the block content to obtain splicing information;
computing an information digest corresponding to the splicing information using a predetermined digest algorithm;
performing a consistent hash operation on the information digest to obtain the block intranet address of the block content; and
assembling the uniform resource locator with at least one of the block number of the block content and the content position of the block content within the target file, to obtain the block location information of the block content.
In an optional implementation of the embodiment of the present disclosure, the distributing unit 703 is specifically configured to:
distribute the at least one block request to the cache party indicated by the block intranet address corresponding to the respective block request, where after receiving a block request, the cache party obtains the block content indicated by the block location information in the request and returns it to the proxy.
In an optional implementation of the embodiment of the present disclosure, the feedback unit 705 is specifically configured to:
assemble the at least one block content to obtain the target file, and send the target file to the requesting party; or
send the at least one block content to the requesting party, so that the requesting party obtains the target file by assembling the at least one block content.
For the load balancing apparatus applied to the proxy disclosed in the embodiments of the present disclosure, since it corresponds to the load balancing method applied to the proxy disclosed in the corresponding embodiments above, its description is relatively brief; for related similarities, refer to the description of the load balancing method applied to the proxy in the corresponding embodiments above, which is not detailed here.
如图8所示,在应用于缓存方的情况下,所述负载均衡装置包括:As shown in Figure 8, in the case of being applied to the caching side, the load balancing device includes:
第三获取单元801,设置为获取代理方发送的包括目标分块位置信息的分块请求,所述目标分块位置信息为代理方根据数据资源地址确定的至少一个分块位置信息中的之一;所述数据资源地址携带在代理方接收的设置为请求目标文件的内容请求中,所述分块请求设置为请求所述目标文件中对应于所述目标分块位置信息的分块内容;The third obtaining unit 801 is configured to obtain a block request including target block location information sent by the agent, where the target block location information is one of at least one block location information determined by the agent according to the data resource address The data resource address is carried in a content request received by the agent and set to request a target file, and the block request is set to request the block content corresponding to the target block location information in the target file;
第四获取单元802,设置为根据所述目标分块位置信息获取分块内容;The fourth obtaining unit 802 is configured to obtain block content according to the target block location information;
发送单元803,设置为向所述代理方发送所述分块内容。The sending unit 803 is configured to send the divided content to the agent.
在本公开实施例的一可选实施方式中,所述第四获取单元802,具体设置为:In an optional implementation manner of the embodiment of the present disclosure, the fourth acquiring unit 802 is specifically configured as follows:
确定所述缓存方本地是否存在所述目标分块位置信息指示的分块内容;Determining whether the block content indicated by the target block location information exists locally on the caching party;
在确定存在的情况下,从所述本地获取所述分块内容;In the case where it is determined to exist, obtain the segmented content from the local;
在确定不存在的情况下,执行预定的回源处理,通过所述回源处理获得对应于所述分块请求的分块内容。In a case where it is determined that it does not exist, a predetermined back-to-source process is performed, and the block content corresponding to the block request is obtained through the back-to-source process.
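The local-hit / back-to-source logic of the fourth obtaining unit described above can be sketched as follows; the store and fetch interfaces are hypothetical stand-ins for the caching party's actual storage and its predetermined back-to-source processing.

```python
def get_block(local_store, fetch_back_to_source, block_key):
    """Return the block content for 'block_key'.

    On a local hit the cached content is returned directly; on a miss the
    predetermined back-to-source step fetches the block and the result is
    cached at the position this caching party is responsible for.
    """
    content = local_store.get(block_key)
    if content is not None:
        return content                          # local hit
    content = fetch_back_to_source(block_key)   # back-to-source processing
    local_store[block_key] = content            # cache for later requests
    return content

store = {}
calls = []
def fetch(key):
    calls.append(key)
    return b"block-data"

first = get_block(store, fetch, "url#0")   # miss: triggers back-to-source
second = get_block(store, fetch, "url#0")  # hit: served locally
```

Only the first request for a given block reaches the back-to-source path; repeats are absorbed locally.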
在本公开实施例的一可选实施方式中,所述第四获取单元802执行预定的回源处理,包括:In an optional implementation manner of the embodiment of the present disclosure, the fourth acquiring unit 802 performs predetermined back-to-origin processing, including:
确定对应于所述目标分块位置信息中的数据资源地址的回源缓存方;Determining the back-to-origin cache party corresponding to the data resource address in the target block location information;
从所述回源缓存方获取对应于所述分块请求的分块内容,并将所述分块内容缓存于所述目标分块位置信息指示的缓存方的相应位置。Obtain the block content corresponding to the block request from the back-to-source caching party, and cache the block content in a corresponding location of the cache party indicated by the target block location information.
在本公开实施例的一可选实施方式中,所述数据资源地址包括所述目标文件的统一资源定位符;In an optional implementation manner of the embodiment of the present disclosure, the data resource address includes a uniform resource locator of the target file;
所述第四获取单元802,确定对应于所述目标分块位置信息中的数据资源地址的回源缓存方,包括:The fourth obtaining unit 802 determining the back-to-source cache party corresponding to the data resource address in the target block location information includes:
对所述目标文件的统一资源定位符进行一致性哈希运算,得到所述目标文件的内网地址;Performing a consistent hash operation on the uniform resource locator of the target file to obtain the intranet address of the target file;
确定所述目标文件的内网地址指示的缓存方作为所述回源缓存方。Determining the caching party indicated by the intranet address of the target file as the back-to-source caching party.
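The consistent-hash step above can be sketched with a minimal hash ring that maps the target file's uniform resource locator to one caching party's intranet address. MD5, the virtual-node count, and the node addresses are illustrative assumptions, not the disclosure's "predetermined" choices.

```python
import bisect
import hashlib

def _h(s):
    """Hash a string to an integer position on the ring (MD5 as a stand-in)."""
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class HashRing:
    """Map a key (e.g. the target file's URL) to a cache node's intranet
    address; each node is placed at several virtual points so that keys
    spread relatively evenly and only 1/N of keys move when a node changes."""

    def __init__(self, nodes, replicas=64):
        self._ring = sorted((_h(f"{n}#{i}"), n)
                            for n in nodes for i in range(replicas))
        self._points = [p for p, _ in self._ring]

    def node_for(self, key):
        # First virtual point clockwise from the key's hash owns the key.
        idx = bisect.bisect(self._points, _h(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
origin_cache = ring.node_for("http://cdn.example/video.mp4")
```

Because the mapping depends only on the URL, every caching party that misses on the same file resolves to the same back-to-source caching party.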
在本公开实施例的一可选实施方式中，所述回源缓存方在所述目标文件未存储于各缓存方的情况下，从所述目标文件的源站获取并缓存所述目标文件的数据内容；在缓存的所述目标文件满足清除条件时，从所述回源缓存方清除所述目标文件。In an optional implementation of the embodiments of the present disclosure, when the target file is not stored on any caching party, the back-to-source caching party obtains the data content of the target file from the origin site of the target file and caches it; when the cached target file meets a clearing condition, the target file is cleared from the back-to-source caching party.
在本公开实施例的一可选实施方式中,参阅图9,所述装置还可以包括:In an optional implementation manner of the embodiment of the present disclosure, referring to FIG. 9, the device may further include:
第五获取单元804，设置为在获取代理方发送的包括目标分块位置信息的分块请求之前，获得代理方发送的包括目标文件的文件头位置信息的文件头请求的情况下，根据文件头位置信息获取所述目标文件的文件体数据长度，并向所述代理方发送所述文件体数据长度。The fifth obtaining unit 804 is configured to, before the block request including the target block location information sent by the agent is obtained, and in the case that a file header request including the file header location information of the target file sent by the agent is obtained, acquire the file body data length of the target file according to the file header location information and send the file body data length to the agent.
在本公开实施例的一可选实施方式中，所述第三获取单元801，具体设置为：In an optional implementation manner of the embodiment of the present disclosure, the third acquiring unit 801 is specifically configured as follows:
获取代理方基于相应分块内网地址发送的所述分块请求;Obtain the block request sent by the agent based on the corresponding block intranet address;
其中，所述相应分块内网地址为代理方根据所述数据资源地址确定的第二数量的分块内网地址中的之一；所述第二数量的分块内网地址的确定过程包括：由代理方确定第二数量的各个分块内容的分块编号；根据所述统一资源定位符和分块内容的分块编号生成分块内容的分块内网地址，得到个数为第二数量的分块内网地址。Wherein, the corresponding block intranet address is one of a second number of block intranet addresses determined by the agent according to the data resource address; the process of determining the second number of block intranet addresses includes: the agent determines the block numbers of the second number of block contents, and generates the block intranet address of each block content according to the uniform resource locator and the block number of that block content, thereby obtaining the second number of block intranet addresses.
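The determination process just described (splice the uniform resource locator with the block number, digest the result, then map the digest to a caching party) can be sketched as below; MD5 and the simple modulo placement stand in for the "predetermined digest algorithm" and the consistent-hash step, and all node addresses are made up.

```python
import hashlib

def block_digest(url, block_no):
    """Splice the uniform resource locator with the block number and digest
    the splice (MD5 here as a stand-in for the predetermined digest algorithm)."""
    return hashlib.md5(f"{url}#{block_no}".encode()).hexdigest()

def block_intranet_address(url, block_no, nodes):
    """Map the digest onto one caching party's intranet address; a real
    deployment would use consistent hashing rather than this modulo placement."""
    return nodes[int(block_digest(url, block_no), 16) % len(nodes)]

nodes = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
# The second number of block intranet addresses for a six-block file.
addresses = [block_intranet_address("http://cdn.example/f.bin", i, nodes)
             for i in range(6)]
```

Because each block number perturbs the digest, consecutive blocks of one file land on different caching parties, which is what spreads the file's traffic across the intranet.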
对于本公开实施例公开的应用于缓存方的负载均衡装置而言，由于其与上文相应各实施例公开的应用于缓存方的负载均衡方法相对应，所以描述的比较简单，相关相似之处请参见上文相应各实施例中应用于缓存方的负载均衡方法部分的说明即可，此处不再详述。For the load balancing device applied to the caching party disclosed in the embodiments of the present disclosure, since it corresponds to the load balancing method applied to the caching party disclosed in the corresponding embodiments above, its description is relatively brief; for related details, refer to the description of the load balancing method applied to the caching party in the corresponding embodiments above, which will not be repeated here.
本公开实施例还公开了一种代理设备，该代理设备可以是但不限于CDN网络节点内任一物理设备（如服务器）上的反向代理nginx，如图10示出的代理设备的结构示意图，该代理设备至少包括：The embodiments of the present disclosure further disclose a proxy device, which may be, but is not limited to, a reverse proxy (nginx) on any physical device (such as a server) in a CDN network node. As shown in the schematic structural diagram of the proxy device in FIG. 10, the proxy device includes at least:
第一存储器1001,设置为存放第一计算机指令集;The first memory 1001 is configured to store the first computer instruction set;
所述的第一计算机指令集可以采用计算机程序的形式实现。The first set of computer instructions can be implemented in the form of a computer program.
第一存储器1001可以包括高速随机存取存储器，还可以包括非易失性存储器，例如至少一个磁盘存储器件、闪存器件、或其他非易失性固态存储器件。The first memory 1001 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
第一处理器1002,设置为通过执行所述第一存储器上存放的指令集,实现如上任一应用于代理方的负载均衡方法。The first processor 1002 is configured to implement any load balancing method applied to the agent as described above by executing the instruction set stored in the first memory.
第一处理器1002可以为中央处理器（Central Processing Unit，CPU）、专用集成电路（Application-Specific Integrated Circuit，ASIC）、数字信号处理器（DSP）、现场可编程门阵列（FPGA）或者其他可编程逻辑器件等。The first processor 1002 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or another programmable logic device, etc.
除此之外,代理设备还可以包括通信接口、通信总线等组成部分。第一存储器、第一处理器和通信接口通过通信总线完成相互间的通信。In addition, the proxy device can also include components such as a communication interface and a communication bus. The first memory, the first processor and the communication interface communicate with each other through the communication bus.
通信接口设置为代理设备与其他设备(如CDN节点中其他物理设备)之间的通信。通信总线可以是外设部件互连标准(Peripheral Component Interconnect,PCI)总线或扩展工业标准结构(Extended Industry Standard Architecture,EISA)总线等,该通信总线可以分为地址总线、数据总线、控制总线等。The communication interface is set to communicate between the proxy device and other devices (such as other physical devices in the CDN node). The communication bus can be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus can be divided into an address bus, a data bus, and a control bus.
本实施例中，代理设备中的第一处理器通过执行第一存储器中存放的第一计算机指令集，实现对文件的分块存储并具体将文件内容相对均匀地打散至不同的缓存方，当请求方请求文件时，将文件的内容请求映射为至少一个分块请求，基于至少一个分块请求从至少一个缓存方获得可用于组成完整目标文件的至少一个分块内容，以实现对请求方的响应。由于每条分块请求所对应分块内容的大小相对均匀，相应实现了将所请求文件的流量均衡分摊至不同的缓存方，所以不同缓存方的内网流量就会相对均衡，不会为外网带宽的使用带来瓶颈，提升了CDN等网络的外网服务能力。In this embodiment, by executing the first computer instruction set stored in the first memory, the first processor in the proxy device implements block storage of a file, dispersing the file content relatively evenly across different caching parties. When a requester requests a file, the content request for the file is mapped into at least one block request, and at least one block content that can form the complete target file is obtained from at least one caching party based on the at least one block request, so as to respond to the requester. Since the block content corresponding to each block request is relatively uniform in size, the traffic for the requested file is balanced across different caching parties; the intranet traffic of the different caching parties is therefore relatively balanced and does not become a bottleneck for the use of extranet bandwidth, which improves the extranet service capability of networks such as CDNs.
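The mapping of one content request into block requests of relatively uniform size can be sketched as follows; the block length and file size are arbitrary illustrative values, and the tuple layout is an assumption for the sketch.

```python
def split_into_blocks(file_body_len, block_len):
    """Return (block_number, start, end) byte ranges covering the file body.

    Every block except possibly the last has the predetermined length, so the
    per-block traffic (and hence per-caching-party load) is relatively uniform.
    """
    return [
        (no, start, min(start + block_len, file_body_len) - 1)
        for no, start in enumerate(range(0, file_body_len, block_len))
    ]

# A 10-byte file body split with a predetermined block length of 4 bytes.
ranges = split_into_blocks(10, 4)  # [(0, 0, 3), (1, 4, 7), (2, 8, 9)]
```

Each tuple would back one block request; distributing those requests to the caching parties indicated by their block intranet addresses is what equalizes intranet traffic.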
本公开实施例还公开了一种缓存设备，该缓存设备可以是但不限于CDN网络节点内任一物理设备（如服务器）上的cache存储，如图11示出的缓存设备的结构示意图，该缓存设备至少包括：The embodiments of the present disclosure further disclose a cache device, which may be, but is not limited to, cache storage on any physical device (such as a server) in a CDN network node. As shown in the schematic structural diagram of the cache device in FIG. 11, the cache device includes at least:
磁盘1101,设置为存储文件的数据内容; Disk 1101, set to store the data content of the file;
第二存储器1102,设置为存放第二计算机指令集;The second memory 1102 is configured to store a second computer instruction set;
所述的第二计算机指令集可以采用计算机程序的形式实现。The second set of computer instructions can be implemented in the form of a computer program.
第二处理器1103,设置为通过执行所述第二存储器上存放的指令集,实现如上任一应用于缓存方的负载均衡方法。The second processor 1103 is configured to implement any load balancing method applied to the cache side as described above by executing the instruction set stored in the second memory.
本实施例的缓存设备中，第二处理器1103通过执行第二存储器1102中的第二计算机指令集，实现对文件的分块存储并具体将文件内容相对均匀地打散至不同的缓存方，同时实现了对文件的分块请求与路由，可使得内网带宽的使用均衡分布，不会为外网带宽的使用带来瓶颈，提升了外网服务能力。In the cache device of this embodiment, by executing the second computer instruction set in the second memory 1102, the second processor 1103 implements block storage of files, dispersing file content relatively evenly across different caching parties, and also implements block requests for files and their routing, so that intranet bandwidth usage is evenly distributed and does not become a bottleneck for extranet bandwidth usage, improving extranet service capability.
本公开实施例还公开了一种计算机可读存储介质，该计算机可读存储介质内存储有第一计算机指令集，所述第一计算机指令集被处理器执行时实现如上任一应用于代理方的负载均衡方法。The embodiments of the present disclosure further disclose a computer-readable storage medium storing a first computer instruction set which, when executed by a processor, implements any of the above load balancing methods applied to the agent.
相对应地，本公开实施例还公开了另一种计算机可读存储介质，该另一种计算机可读存储介质内存储有第二计算机指令集，所述第二计算机指令集被处理器执行时实现如上任一应用于缓存方的负载均衡方法。Correspondingly, the embodiments of the present disclosure further disclose another computer-readable storage medium storing a second computer instruction set which, when executed by a processor, implements any of the above load balancing methods applied to the caching party.
上述的两种计算机可读存储介质中存储的指令在运行时，可实现对文件的分块存储并具体将文件内容相对均匀地打散至不同的缓存方，并同时可实现对文件的分块请求与路由，可使得内网带宽的使用均衡分布，不会为外网带宽的使用带来瓶颈，提升了外网服务能力。When executed, the instructions stored in the above two computer-readable storage media implement block storage of files, dispersing file content relatively evenly across different caching parties, and at the same time implement block requests for files and their routing, so that intranet bandwidth usage is evenly distributed and does not become a bottleneck for extranet bandwidth usage, improving extranet service capability.
本公开实施例还公开了一种服务节点,包括多个物理设备,所述物理设备包括代理方和缓存方;The embodiment of the present disclosure also discloses a service node, which includes a plurality of physical devices, and the physical devices include a proxy party and a cache party;
服务节点内的代理方通过执行如上任一应用于代理方的负载均衡方法与服务节点内的至少一个缓存方交互;The agent in the service node interacts with at least one cache party in the service node by executing any load balancing method applied to the agent as described above;
服务节点内的缓存方通过执行如上任一应用于缓存方的负载均衡方法与服务节点内的代理方交互。The cache party in the service node interacts with the agent in the service node by executing any load balancing method applied to the cache party as described above.
该服务节点可以是CDN节点，相应地，上述的代理方可以是CDN节点内物理设备（如服务器）上的反向代理nginx，缓存方则可以是CDN节点内物理设备（如服务器）上的cache存储。The service node may be a CDN node; correspondingly, the aforementioned agent may be a reverse proxy (nginx) on a physical device (such as a server) in the CDN node, and the caching party may be cache storage on a physical device (such as a server) in the CDN node.
关于代理方与缓存方之间的基于负载均衡的通信交互过程,具体可参阅前文相关实施例的介绍,这里不再详述。Regarding the load balancing-based communication interaction process between the proxy party and the cache party, please refer to the introduction of the previous related embodiments for details, which will not be described in detail here.
需要说明的是，本说明书中的各个实施例均采用递进的方式描述，每个实施例重点说明的都是与其他实施例的不同之处，各个实施例之间相同相似的部分互相参见即可。It should be noted that the various embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
为了描述的方便,描述以上系统或装置时以功能分为各种模块或单元分别描述。当然,在实施本公开时可以把各单元的功能在同一个或多个软件和/或硬件中实现。For the convenience of description, when describing the above system or device, the functions are divided into various modules or units to be described separately. Of course, when implementing the present disclosure, the functions of each unit may be implemented in the same one or more software and/or hardware.
通过以上的实施方式的描述可知，本领域的技术人员可以清楚地了解到本公开可借助软件加必需的通用硬件平台的方式来实现。基于这样的理解，本公开的技术方案本质上或者说对相关技术做出贡献的部分可以以软件产品的形式体现出来，该计算机软件产品可以存储在存储介质中，如ROM/RAM、磁碟、光盘等，包括若干指令用以使得一台计算机设备（可以是个人计算机，服务器，或者网络设备等）执行本公开各个实施例或者实施例的某些部分所述的方法。From the description of the above embodiments, those skilled in the art can clearly understand that the present disclosure can be implemented by means of software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present disclosure, in essence or in the part contributing to the related art, can be embodied in the form of a software product. The computer software product may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the various embodiments of the present disclosure or in certain parts thereof.
最后，还需要说明的是，在本文中，诸如第一、第二、第三和第四等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语"包括"、"包含"或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下，由语句"包括一个……"限定的要素，并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。Finally, it should also be noted that, in this document, relational terms such as first, second, third, and fourth are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
以上所述仅是本公开的优选实施方式，应当指出，对于本技术领域的普通技术人员来说，在不脱离本公开原理的前提下，还可以做出若干改进和润饰，这些改进和润饰也应视为本公开的保护范围。The above are only preferred embodiments of the present disclosure. It should be pointed out that those of ordinary skill in the art can make several improvements and refinements without departing from the principles of the present disclosure, and these improvements and refinements shall also fall within the protection scope of the present disclosure.
工业实用性Industrial applicability
相关技术中，CDN网络的一台物理设备消耗的内网带宽和外网带宽是一致的，CDN节点内网带宽分布的不均衡，会限制外网带宽的利用率，为外网带宽的使用带来了瓶颈，导致外网消耗的最大带宽无法达到最饱和（不同物理设备配置有相同的最大可用外网带宽），相应降低了CDN等网络的外网服务能力。In the related art, the intranet bandwidth and extranet bandwidth consumed by a physical device of a CDN network are consistent; an unbalanced distribution of intranet bandwidth within a CDN node limits the utilization of extranet bandwidth and creates a bottleneck for its use, so that the maximum bandwidth consumed on the extranet cannot reach saturation (different physical devices are configured with the same maximum available extranet bandwidth), correspondingly reducing the extranet service capability of networks such as CDNs.
针对相关技术存在的问题，本公开实施例，对文件进行分块存储并具体将文件内容以分块形式相对均匀地打散至不同的缓存方，当请求方请求文件时，将文件的内容请求映射为至少一个分块请求，基于至少一个分块请求从至少一个缓存方获得可用于组成完整目标文件的至少一个分块内容，以实现对请求方的响应。由于每条分块请求所对应分块内容的大小相对均匀，相应实现了将所请求文件的流量均衡分摊至不同的缓存方，所以不同缓存方的内网流量就会相对均衡，不会为外网带宽的使用带来瓶颈，提升了CDN等网络的外网服务能力。In view of the problems in the related art, the embodiments of the present disclosure store files in blocks, dispersing file content relatively evenly across different caching parties in block form. When a requester requests a file, the content request for the file is mapped into at least one block request, and at least one block content that can form the complete target file is obtained from at least one caching party based on the at least one block request, so as to respond to the requester. Since the block content corresponding to each block request is relatively uniform in size, the traffic for the requested file is balanced across different caching parties; the intranet traffic of the different caching parties is therefore relatively balanced and does not become a bottleneck for the use of extranet bandwidth, which improves the extranet service capability of networks such as CDNs.

Claims (21)

  1. 一种负载均衡方法,应用于代理方,所述方法包括:A load balancing method applied to an agent, and the method includes:
    获取携带目标文件的数据资源地址的内容请求;Get the content request carrying the data resource address of the target file;
    将所述内容请求拆分为包括相应分块位置信息的至少一个分块请求;其中,所述分块位置信息由所述代理方根据所述数据资源地址确定;Splitting the content request into at least one segment request including corresponding segment location information; wherein the segment location information is determined by the agent according to the data resource address;
    将所述至少一个分块请求分发至与所述至少一个分块请求中的分块位置信息分别对应的至少一个缓存方;Distributing the at least one block request to at least one cache party respectively corresponding to the block location information in the at least one block request;
获取分别接收到所述至少一个分块请求中相应分块请求的至少一个缓存方反馈的至少一个分块内容；不同分块内容的数据长度绝对差值小于预定阈值，在同一文件的分块内容为多个的情况下，存储多个分块内容的缓存方的第一数量为多个；Obtaining at least one block content fed back by at least one caching party that has respectively received the corresponding block request among the at least one block request; the absolute difference between the data lengths of different block contents is less than a predetermined threshold, and in a case where there are multiple block contents of the same file, the first number of caching parties storing the multiple block contents is multiple;
    将所述至少一个分块内容反馈给请求方。The at least one piece of content is fed back to the requesting party.
  2. 根据权利要求1所述的方法,其中,所述将所述内容请求拆分为包括相应分块位置信息的至少一个分块请求,包括:The method according to claim 1, wherein the splitting the content request into at least one segment request including corresponding segment location information comprises:
    确定所述目标文件的文件体数据长度;Determining the file body data length of the target file;
    根据预定的数据长度和所述文件体数据长度,确定所述目标文件的文件体包括的分块内容的第二数量;Determine the second quantity of the block content included in the file body of the target file according to the predetermined data length and the data length of the file body;
    根据所述数据资源地址确定个数为第二数量的分块位置信息,并生成个数为第二数量的分别包括不同分块位置信息的分块请求。According to the data resource address, a second number of block location information is determined, and a second number of block requests each including different block location information are generated.
  3. 根据权利要求2所述的方法,其中,所述确定所述目标文件的文件体数据长度,包括:The method according to claim 2, wherein said determining the file body data length of the target file comprises:
    根据所述数据资源地址确定所述目标文件的文件头内网地址及文件头位置信息;Determining the file header intranet address and file header location information of the target file according to the data resource address;
    发送所述文件头位置信息至所述文件头内网地址指示的缓存方;Sending the file header location information to the caching party indicated by the file header's intranet address;
    获取所述缓存方根据所述文件头位置信息确定并反馈的所述目标文件的文件体数据长度。Obtain the file body data length of the target file determined and fed back by the caching party according to the file header location information.
  4. 根据权利要求2或3所述的方法,其中,还包括:根据所述数据资源地址确定个数为第二数量的分块内网地址;The method according to claim 2 or 3, further comprising: determining a second number of block intranet addresses according to the data resource address;
    所述数据资源地址包括所述目标文件的统一资源定位符;The data resource address includes the uniform resource locator of the target file;
    根据所述数据资源地址确定个数为第二数量的分块内网地址或分块位置信息,包括:Determining the second number of block intranet addresses or block location information according to the data resource address includes:
    确定第二数量的各个分块内容的分块编号;Determine the block number of each block content of the second quantity;
    根据所述统一资源定位符和分块内容的分块编号生成分块内容的分块内网地址或分块位置信息,得到个数为第二数量的分块内网地址或分块位置信息。The block intranet address or block location information of the block content is generated according to the uniform resource locator and the block number of the block content, and the block intranet address or block location information of the second number is obtained.
  5. 根据权利要求4所述的方法,其中,所述根据所述统一资源定位符和分块内容的分块编号生成分块内容的分块内网地址或分块位置信息,包括:The method according to claim 4, wherein the generating the block intranet address or block location information of the block content according to the uniform resource locator and the block number of the block content comprises:
    拼接所述统一资源定位符和分块内容的分块编号,得到拼接信息;Splicing the uniform resource locator and the block number of the block content to obtain splicing information;
    利用预定摘要算法计算所述拼接信息对应的信息摘要;Calculating an information abstract corresponding to the spliced information by using a predetermined abstract algorithm;
    对所述信息摘要进行一致性哈希运算,得到所述分块内容的分块内网地址;Perform a consistent hash operation on the information digest to obtain the block intranet address of the block content;
将所述统一资源定位符，与分块内容的分块编号、分块内容在目标文件中对应的内容位置这两种信息中的至少一种进行组装，得到所述分块内容的分块位置信息。Assembling the uniform resource locator with at least one of the following two kinds of information: the block number of the block content, and the content position in the target file corresponding to the block content, to obtain the block location information of the block content.
  6. 根据权利要求4或5所述的方法,其中,所述将所述至少一个分块请求分发至与所述至少一个分块请求中的分块位置信息分别对应的至少一个缓存方,包括:The method according to claim 4 or 5, wherein the distributing the at least one block request to at least one cache party respectively corresponding to the block location information in the at least one block request comprises:
    将所述至少一个分块请求分发至对应于相应分块请求的分块内网地址所指示的缓存方;Distributing the at least one block request to the cache party indicated by the block intranet address corresponding to the corresponding block request;
    其中,缓存方在接收到分块请求后,获取分块请求中的分块位置信息指示的分块内容并反馈至所述代理方。Wherein, after receiving the block request, the cache party obtains the block content indicated by the block location information in the block request and feeds it back to the agent.
  7. 根据权利要求1-6任一项所述的方法,其中,所述将所述至少一个分块内容反馈给请求方,包括:The method according to any one of claims 1 to 6, wherein the feeding back the at least one piece of content to the requesting party comprises:
    组装所述至少一个分块内容,得到所述目标文件,并向请求方发送所述目标文件;或者,Assemble the at least one piece of content to obtain the target file, and send the target file to the requesting party; or,
    发送所述至少一个分块内容至请求方,以便请求方通过组装所述至少一个分块内容得到所述目标文件。Send the at least one piece of content to the requester, so that the requester obtains the target file by assembling the at least one piece of content.
  8. 一种负载均衡方法,应用于缓存方,所述方法包括:A load balancing method applied to a caching party, the method comprising:
获取代理方发送的包括目标分块位置信息的分块请求，所述目标分块位置信息为代理方根据数据资源地址确定的至少一个分块位置信息中的一个分块位置信息；所述数据资源地址携带在代理方接收的设置为请求目标文件的内容请求中，所述分块请求设置为请求所述目标文件中对应于所述目标分块位置信息的分块内容；Obtaining a block request, sent by the agent, that includes target block location information, where the target block location information is one piece of the at least one piece of block location information determined by the agent according to a data resource address; the data resource address is carried in a content request, received by the agent, that is configured to request a target file, and the block request is configured to request the block content in the target file corresponding to the target block location information;
    根据所述目标分块位置信息获取分块内容;Acquiring block content according to the target block location information;
    向所述代理方发送所述分块内容。Sending the segmented content to the agent.
  9. 根据权利要求8所述的方法,其中,所述根据所述目标分块位置信息获取分块内容,包括:The method according to claim 8, wherein said obtaining the block content according to the location information of the target block comprises:
    确定所述缓存方本地是否存在所述目标分块位置信息指示的分块内容;Determining whether the block content indicated by the target block location information exists locally on the caching party;
    在确定存在的情况下,从所述本地获取所述分块内容;In the case where it is determined to exist, obtain the segmented content from the local;
    在确定不存在的情况下,执行预定的回源处理,通过所述回源处理获得对应于所述分块请求的分块内容。In a case where it is determined that it does not exist, a predetermined back-to-source process is executed, and the block content corresponding to the block request is obtained through the back-to-source process.
  10. 根据权利要求9所述的方法,其中,所述执行预定的回源处理,包括:The method according to claim 9, wherein said executing predetermined back-to-origin processing comprises:
    确定对应于所述目标分块位置信息中的数据资源地址的回源缓存方;Determining the back-to-origin cache party corresponding to the data resource address in the target block location information;
    从所述回源缓存方获取对应于所述分块请求的分块内容,并将所述分块内容缓存于所述目标分块位置信息指示的缓存方的相应位置。Obtain the block content corresponding to the block request from the back-to-source caching party, and cache the block content in a corresponding location of the cache party indicated by the target block location information.
  11. 根据权利要求10所述的方法,其中,所述数据资源地址包括所述目标文件的统一资源定位符;The method according to claim 10, wherein the data resource address includes a uniform resource locator of the target file;
    所述确定对应于所述目标分块位置信息中的数据资源地址的回源缓存方,包括:The determining the back-to-source cache party corresponding to the data resource address in the target block location information includes:
    对所述目标文件的统一资源定位符进行一致性哈希运算,得到所述目标文件的内网地址;Performing a consistent hash operation on the uniform resource locator of the target file to obtain the intranet address of the target file;
    确定所述目标文件的内网地址指示的缓存方作为所述回源缓存方。Determine the cache party indicated by the intranet address of the target file as the back-to-source cache party.
  12. 根据权利要求10或11所述的方法，其中，所述回源缓存方在所述目标文件未存储于各缓存方的情况下，从所述目标文件的源站获取并缓存所述目标文件的数据内容；在缓存的所述目标文件满足清除条件时，从所述回源缓存方清除所述目标文件。The method according to claim 10 or 11, wherein, when the target file is not stored on any caching party, the back-to-source caching party obtains the data content of the target file from the origin site of the target file and caches it; when the cached target file meets a clearing condition, the target file is cleared from the back-to-source caching party.
  13. 根据权利要求8-12任一项所述的方法,其中,在获取代理方发送的包括目标分块位置信息的分块请求之前,所述方法还包括:The method according to any one of claims 8-12, wherein, before acquiring the block request including the location information of the target block sent by the agent, the method further comprises:
    若获得代理方发送的目标文件的文件头位置信息,根据所述文件头位置信息获取所述目标文件的文件体数据长度,并向所述代理方发送所述文件体数据长度。If the file header position information of the target file sent by the agent is obtained, the file body data length of the target file is obtained according to the file header position information, and the file body data length is sent to the agent.
  14. 根据权利要求8-13任一项所述的方法,其中,所述获取代理方发送的包括目标分块位置信息的分块请求,包括:The method according to any one of claims 8-13, wherein the block request including the location information of the target block sent by the obtaining agent comprises:
    获取代理方基于相应分块内网地址发送的所述分块请求;Obtain the block request sent by the agent based on the corresponding block intranet address;
    其中，所述相应分块内网地址为代理方根据所述数据资源地址确定的第二数量的分块内网地址中的一个分块内网地址；所述数据资源地址包括所述目标文件的统一资源定位符；所述第二数量的分块内网地址的确定过程包括：由代理方确定第二数量的各个分块内容的分块编号；根据所述统一资源定位符和分块内容的分块编号生成分块内容的分块内网地址，得到个数为第二数量的分块内网地址。Wherein, the corresponding block intranet address is one of a second number of block intranet addresses determined by the agent according to the data resource address; the data resource address includes the uniform resource locator of the target file; the process of determining the second number of block intranet addresses includes: the agent determines the block numbers of the second number of block contents, and generates the block intranet address of each block content according to the uniform resource locator and the block number of that block content, thereby obtaining the second number of block intranet addresses.
  15. 一种负载均衡装置,应用于代理方,所述装置包括:A load balancing device applied to an agent, the device comprising:
    第一获取单元，设置为获取携带目标文件的数据资源地址的内容请求；The first obtaining unit is configured to obtain a content request carrying a data resource address of a target file;
    拆分单元,设置为将所述内容请求拆分为包括相应分块位置信息的至少一个分块请求;其中,所述分块位置信息由所述代理方根据所述数据资源地址确定;A splitting unit, configured to split the content request into at least one segment request including corresponding segment location information; wherein the segment location information is determined by the agent according to the data resource address;
    分发单元,设置为将所述至少一个分块请求分发至与所述至少一个分块请求中的分块位置信息分别对应的至少一个缓存方;A distributing unit, configured to distribute the at least one block request to at least one cache party corresponding to the block location information in the at least one block request;
    第二获取单元，设置为获取分别接收到所述至少一个分块请求中相应分块请求的至少一个缓存方反馈的至少一个分块内容；不同分块内容的数据长度绝对差值小于预定阈值，在同一文件的分块内容为多个的情况下，存储多个分块内容的缓存方的第一数量为多个；The second obtaining unit is configured to obtain at least one block content fed back by at least one caching party that has respectively received the corresponding block request among the at least one block request; the absolute difference between the data lengths of different block contents is less than a predetermined threshold, and in a case where there are multiple block contents of the same file, the first number of caching parties storing the multiple block contents is multiple;
    反馈单元,设置为将所述至少一个分块内容反馈给请求方。The feedback unit is configured to feed back the at least one piece of content to the requester.
  16. 一种负载均衡装置,应用于缓存方,所述装置包括:A load balancing device applied to a caching party, the device comprising:
    第三获取单元，设置为获取代理方发送的包括目标分块位置信息的分块请求，所述目标分块位置信息为代理方根据数据资源地址确定的至少一个分块位置信息中的一个分块位置信息；所述数据资源地址携带在代理方接收的设置为请求目标文件的内容请求中，所述分块请求设置为请求所述目标文件中对应于所述目标分块位置信息的分块内容；The third obtaining unit is configured to obtain a block request, sent by the agent, that includes target block location information, where the target block location information is one piece of the at least one piece of block location information determined by the agent according to a data resource address; the data resource address is carried in a content request, received by the agent, that is configured to request a target file, and the block request is configured to request the block content in the target file corresponding to the target block location information;
    第四获取单元,设置为根据所述目标分块位置信息获取分块内容;The fourth obtaining unit is configured to obtain the block content according to the target block location information;
    发送单元,设置为向所述代理方发送所述分块内容。The sending unit is configured to send the segmented content to the agent.
  17. A proxy device, comprising:
    a first memory, configured to store a first computer instruction set;
    a first processor, configured to implement the load balancing method according to any one of claims 1-7 by executing the instruction set stored in the first memory.
  18. A cache device, comprising:
    a disk, configured to store the data content of a file;
    a second memory, configured to store a second computer instruction set;
    a second processor, configured to implement the load balancing method according to any one of claims 8-14 by executing the instruction set stored in the second memory.
  19. A computer-readable storage medium storing a first computer instruction set, wherein the first computer instruction set, when executed by a processor, implements the load balancing method according to any one of claims 1-7.
  20. A computer-readable storage medium storing a second computer instruction set, wherein the second computer instruction set, when executed by a processor, implements the load balancing method according to any one of claims 8-14.
  21. A serving node, comprising multiple physical devices, the physical devices comprising a proxy and a cache;
    wherein the proxy in the serving node interacts with at least one cache in the serving node by executing the method according to any one of claims 1-7;
    and the cache in the serving node interacts with the proxy in the serving node by executing the method according to any one of claims 8-14.
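The proxy-side flow in claims 15-16 — split a content request into block requests of near-equal length, map each block to a cache by its block location information, and gather the returned block contents — can be sketched as below. This is a minimal illustration, not the patented implementation: the fixed block size, the MD5-based cache selection, and all names (`split_into_blocks`, `pick_cache_node`, the node labels) are assumptions made for the example.

```python
import hashlib

def split_into_blocks(file_size, block_size):
    """Split a file of file_size bytes into (offset, length) block ranges.
    All blocks share the same length except possibly the last, so the
    absolute difference in data length between blocks stays below a
    fixed threshold (here, block_size), echoing the claim language."""
    blocks = []
    offset = 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        blocks.append((offset, length))
        offset += length
    return blocks

def pick_cache_node(resource_url, offset, nodes):
    """Deterministically map a block (identified by its resource address
    and offset) to one cache node, so blocks of the same file spread
    across multiple caches rather than landing on a single one."""
    key = f"{resource_url}:{offset}".encode()
    idx = int(hashlib.md5(key).hexdigest(), 16) % len(nodes)
    return nodes[idx]

# Build a distribution plan: one (offset, length, cache) entry per block.
nodes = ["cache-a", "cache-b", "cache-c"]
blocks = split_into_blocks(file_size=10_000_000, block_size=4_000_000)
plan = [(off, ln, pick_cache_node("/videos/movie.mp4", off, nodes))
        for off, ln in blocks]
```

Because the node choice depends only on the resource address and block offset, a repeat request for the same block deterministically reaches the cache that already holds it, which is what lets the blocks of a hot file stay spread across several caches instead of concentrating load on one device.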
PCT/CN2021/080746 2020-06-17 2021-03-15 Load balancing method and apparatus, proxy device, cache device and serving node WO2021253889A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010555608.6A CN111464661B (en) 2020-06-17 2020-06-17 Load balancing method and device, proxy equipment, cache equipment and service node
CN202010555608.6 2020-06-17

Publications (1)

Publication Number Publication Date
WO2021253889A1 2021-12-23

Family

ID=71680397

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/080746 WO2021253889A1 (en) 2020-06-17 2021-03-15 Load balancing method and apparatus, proxy device, cache device and serving node

Country Status (2)

Country Link
CN (1) CN111464661B (en)
WO (1) WO2021253889A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111464661B (en) * 2020-06-17 2020-09-22 北京金迅瑞博网络技术有限公司 Load balancing method and device, proxy equipment, cache equipment and service node
CN111935017B (en) * 2020-10-14 2021-01-15 腾讯科技(深圳)有限公司 Cross-network application calling method and device and routing equipment
CN116880928B (en) * 2023-09-06 2023-11-21 菲特(天津)检测技术有限公司 Model deployment method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102801784A (en) * 2012-07-03 2012-11-28 华为技术有限公司 Distributed type data storing method and equipment
CN102833294A (en) * 2011-06-17 2012-12-19 阿里巴巴集团控股有限公司 File processing method and system based on cloud storage, and server cluster system
US20130060815A1 (en) * 2011-09-02 2013-03-07 Fujitsu Limited Recording medium, distribution controlling method, and information processing device
CN104618444A (en) * 2014-12-30 2015-05-13 北京奇虎科技有限公司 Reverse agent server processing request based method and device
US20170324796A1 (en) * 2009-12-28 2017-11-09 Akamai Technologies, Inc. Stream handling using an intermediate format
US20190199818A1 (en) * 2015-12-31 2019-06-27 Hughes Network Systems, Llc Accurate caching in adaptive video streaming based on collision resistant hash applied to segment contents and ephemeral request and url data
CN111464661A (en) * 2020-06-17 2020-07-28 北京金迅瑞博网络技术有限公司 Load balancing method and device, proxy equipment, cache equipment and service node

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114553965A (en) * 2022-01-04 2022-05-27 网宿科技股份有限公司 Scheduling method of intranet equipment, network equipment and storage medium
CN114900562A (en) * 2022-05-09 2022-08-12 北京百度网讯科技有限公司 Resource acquisition method and device, electronic equipment and storage medium
CN116527691A (en) * 2023-06-27 2023-08-01 天津中远海运散运数字科技有限公司 Method, device, equipment and medium for synchronizing ship-shore data
CN116527691B (en) * 2023-06-27 2023-11-03 天津中远海运散运数字科技有限公司 Method, device, equipment and medium for synchronizing ship-shore data

Also Published As

Publication number Publication date
CN111464661A (en) 2020-07-28
CN111464661B (en) 2020-09-22

Similar Documents

Publication Publication Date Title
WO2021253889A1 (en) Load balancing method and apparatus, proxy device, cache device and serving node
US10778801B2 (en) Content delivery network architecture with edge proxy
US11194719B2 (en) Cache optimization
KR101383905B1 (en) method and apparatus for processing server load balancing with the result of hash function
WO2017084393A1 (en) Content distribution method, virtual server management method, cloud platform and system
US20120102226A1 (en) Application specific web request routing
KR20130088774A (en) System and method for delivering segmented content
US9407687B2 (en) Method, apparatus, and network system for acquiring content
CN110430274A (en) A kind of document down loading method and system based on cloud storage
US20130275618A1 (en) Method and apparatus for reducing content redundancy in content-centric networking
US20140143339A1 (en) Method, apparatus, and system for resource sharing
CN105978936A (en) CDN server and data caching method thereof
RU2483457C2 (en) Message routing platform
WO2020249128A1 (en) Service routing method and apparatus
Jin et al. Content routing and lookup schemes using global bloom filter for content-delivery-as-a-service
Tiwari et al. Load balancing in distributed web caching: a novel clustering approach
Gupta et al. 2-Tiered cloud based content delivery network architecture: An efficient load balancing approach for video streaming
EP2721781A2 (en) Application specific web request routing
CN109495525B (en) Network component, method of resolving content identification, and computer-readable storage medium
CN115567591A (en) Content resource distribution method, content distribution network, cluster and medium
CN114338720A (en) Distributed file storage and transmission method, system and storage medium
LATENCY INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING & TECHNOLOGY (IJCET)
TW201828653A (en) Data acquisition method and device wherein a domain name is returned to multiple storage centers by the routing device, thereby saving the resources and improving the user's experience
JP2011141608A (en) Content control device for p2p communication
Tan et al. Caching Mechanism in Publish/Subscribe Network

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21825827; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21825827; Country of ref document: EP; Kind code of ref document: A1