WO2020164612A1 - Intelligent hotspot scattering method, apparatus, storage medium, and computer device - Google Patents

Intelligent hotspot scattering method, apparatus, storage medium, and computer device Download PDF

Info

Publication number
WO2020164612A1
WO2020164612A1 (PCT/CN2020/075343, CN2020075343W)
Authority
WO
WIPO (PCT)
Prior art keywords
url
request
hotspot
volume
breaking
Prior art date
Application number
PCT/CN2020/075343
Other languages
English (en)
French (fr)
Inventor
郑友声
王康
Original Assignee
贵州白山云科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 贵州白山云科技股份有限公司
Priority to US17/430,399 (US11562042B2)
Priority to SG11202108623VA
Publication of WO2020164612A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/951 Indexing; Web crawling techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F16/9566 URL specific, e.g. using aliases, detecting broken or misspelled links
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/95 Retrieval from the web
    • G06F16/957 Browsing optimisation, e.g. caching or content distillation
    • G06F16/9574 Browsing optimisation, e.g. caching or content distillation of access to content, e.g. by caching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004 Server selection for load balancing
    • H04L67/1019 Random or heuristic server selection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Definitions

  • The present invention relates to the field of computer network load balancing, and in particular to an intelligent hotspot scattering method, apparatus, storage medium, and computer device.
  • To reduce server-side latency and improve user experience, URLs (Uniform Resource Locators), and hot URLs in particular, usually need to be scattered/balanced.
  • The prior art usually achieves load balancing for hot URLs by distributing concentrated URLs randomly or evenly across back-end machines (for example, a cluster of origin node servers or a cluster of edge node servers).
  • The intelligent hotspot scattering method according to the present invention includes:
  • determining a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and performing the second hotspot scattering operation on the URL.
  • The step of performing the first hotspot scattering operation on the URL includes:
  • The step of performing the second hotspot scattering operation on the URL includes:
  • redistributing old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process.
  • The step of performing the second hotspot scattering operation on the URL further includes:
  • distributing new requests of the first URL randomly or evenly to multiple cache servers and/or origin servers.
  • The step of learning the request-volume curve of a URL based on the artificial intelligence learning model and predicting the URL's request volume includes:
  • performing cluster analysis on URLs with a clustering algorithm, automatically plotting the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predicting the request volume of URLs of that category from the category's request-volume curve.
  • The intelligent hotspot scattering apparatus includes:
  • a first hotspot scattering module, configured to determine a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and perform the first hotspot scattering operation on the URL;
  • a second hotspot scattering module, configured to determine a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and perform the second hotspot scattering operation on the URL.
  • The first hotspot scattering module is further configured to:
  • The second hotspot scattering module is further configured to:
  • perform cluster analysis on URLs with a clustering algorithm, automatically plot the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predict the request volume of URLs of that category from the category's request-volume curve.
  • The second hotspot scattering module is further configured to:
  • distribute new requests of the first URL randomly or evenly to multiple cache servers and/or origin servers.
  • A computer program is stored on the storage medium; when executed by a processor, the program implements the steps of the method described above.
  • The computer device includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; the processor implements the steps of the method described above when executing the program.
  • Non-bursty hot URL requests can be predicted automatically, so the first scattering operation can be performed in advance, and the second scattering operation can be performed on unpredictable burst hot URL requests, speeding up the handling of hot services.
  • Fig. 1 exemplarily shows a schematic flowchart of the intelligent hotspot scattering method according to the present invention.
  • Fig. 2 exemplarily shows a schematic block diagram of the intelligent hotspot scattering apparatus according to the present invention.
  • Fig. 3 exemplarily shows a schematic diagram of an embodiment of the intelligent hotspot scattering method according to the present invention.
  • Fig. 4 exemplarily shows an example of a model graph fitted by an artificial intelligence learning model usable in the intelligent hotspot scattering method of the present invention.
  • The present invention is based on the general idea of predicting a (predictable) hotspot in advance, before it bursts (that is, when the predicted request volume is greater than or equal to the predetermined request volume for the specified URL), and performing a first processing operation, and of performing a second (for example, fast) processing operation when an (unpredictable) hotspot bursts (that is, when the request volume cannot be predicted and the actual request volume is greater than or equal to the predetermined request volume for the specified URL); on this basis, the following technical solutions are proposed.
  • Fig. 1 exemplarily shows a schematic flowchart of the intelligent hotspot scattering method according to the present invention.
  • The intelligent hotspot scattering method according to the present invention includes:
  • Step S102: learning the request-volume curve of a URL based on an artificial intelligence learning model and predicting the URL's request volume;
  • Step S104: determining a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and performing the first hotspot scattering operation on the URL;
  • Step S106: determining a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and performing the second hotspot scattering operation on the URL.
  • In step S104, the step of performing the first hotspot scattering operation on the URL includes:
  • In step S106, the step of performing the second hotspot scattering operation on the URL includes:
  • redistributing old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process.
  • In step S106, the step of performing the second hotspot scattering operation on the URL further includes:
  • distributing new requests of the first URL randomly or evenly to multiple cache servers and/or origin servers.
  • Step S102 includes:
  • performing cluster analysis on URLs with a clustering algorithm, automatically plotting the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predicting the request volume of URLs of that category from the category's request-volume curve.
  • The request-volume curve adopts a three-stage model curve.
  • Fig. 2 exemplarily shows a schematic block diagram of the intelligent hotspot scattering apparatus according to the present invention.
  • The intelligent hotspot scattering apparatus 200 includes:
  • an artificial intelligence learning model 201, configured to learn the request-volume curve of a URL and predict the URL's request volume;
  • a first hotspot scattering module 203, configured to determine a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and perform the first hotspot scattering operation on the URL;
  • a second hotspot scattering module 205, configured to determine a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and perform the second hotspot scattering operation on the URL.
  • The first hotspot scattering module 203 is further configured to:
  • The second hotspot scattering module 205 is further configured to:
  • redistribute old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process.
  • The second hotspot scattering module 205 is further configured to:
  • distribute new requests of the first URL randomly or evenly to multiple cache servers and/or origin servers.
  • The artificial intelligence learning model 201 is further configured to:
  • perform cluster analysis on URLs with a clustering algorithm, automatically plot the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predict the request volume of URLs of that category from the category's request-volume curve.
  • A storage medium is also provided, on which a computer program is stored; when executed by a processor, the program implements the steps of the method described above.
  • A computer device is also provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; the processor implements the steps of the method described above when executing the program.
  • Fig. 3 exemplarily shows a schematic diagram of an embodiment of the intelligent hotspot scattering method according to the present invention.
  • This embodiment ("hot file analysis, prefetching, and scattering", corresponding to the intelligent hotspot scattering method described above with reference to Fig. 1) includes the following processing steps:
  • The artificial intelligence learning model predicts possible hot URLs (corresponding to step S102 above).
  • The core of hot URL prediction can be a model graph fitted by artificial intelligence (corresponding to the request-volume curve above), with parameters computed per service (for example, per URL), such as the first predetermined request-volume threshold corresponding to the URL, based on the service's request volume.
  • Fig. 4 exemplarily shows an example of a model graph fitted by an artificial intelligence learning model usable in the intelligent hotspot scattering method of the present invention.
  • A storage medium implementing the above intelligent hotspot scattering method using the model graph can store software installation packages in forms such as apk to facilitate product release.
  • In phase A, the service has not yet burst, and the request volume is small as traffic is just ramping up.
  • In phase B, the number of requests rises and reaches a higher service level.
  • In phase C, the service bursts, and the request volume reaches a peak in an instant.
  • The model predicts during the A->B transition that there may be a trend toward a C burst.
  • The first predetermined request-volume threshold corresponding to the URL may be set to the ordinate value at the boundary point T between phase B and phase C.
  • The candidate URL so determined (that is, a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL) can be submitted to the prefetch center within the group (for example, a cluster of multiple cache servers), that is, the cache server designated in the server cluster as the unified port that performs prefetch operations.
  • The prefetch center in the group can perform the following operations:
  • From the obtained URL, compute the URL's upper-layer nodes: the parent, or other machines in the same node's province; then probe both the parent and the origin, determine which side's download speed is faster, and select the faster one as the chosen upper-layer node.
  • The URL can be scattered on the dispatch server according to scattering rules (for example, randomly or evenly), resolving the hotspot quickly before it has burst.
  • The hotspot detection device on the server (which may correspond, for example, to the second hotspot scattering module 205) recognizes a burst hotspot and scatters it.
  • The request can also be directed to the group port given by the original hash, thereby realizing an intra-group shared cache.
  • The hot URL continues to run normally in the busy process.
  • Process isolation can be used.
  • Other services are first migrated out of the process executing the burst service to ensure their normal operation.
  • This step comprises the same operations as step 1.2.
  • The state of the masked process b (which can again be used to allocate and execute other services) is restored, thereby restoring normal service.
  • This can be combined with the operation of redirecting all back-to-origin traffic to a unified port, reducing the duplicated traffic that fetches resources from the upper layer (that is, the upper-layer origin node), saving cost and improving the carrying capacity of edge nodes and upper-layer nodes.
  • Such software may be distributed on computer-readable media.
  • The computer-readable media may include computer storage media (or non-transitory media) and communication media (or transitory media).
  • The term computer storage media includes volatile and non-volatile media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data).
  • Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassette, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer.
  • Communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
  • The embodiments of the present invention provide an intelligent hotspot scattering method and apparatus: the request-volume curve of a URL can be learned and the URL's request volume predicted based on an artificial intelligence learning model; a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL is determined as a first URL, and the first hotspot scattering operation is performed on it; a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL is determined as a second URL, and the second hotspot scattering operation is performed on it. Non-bursty hot URL requests can thus be predicted automatically, so the first scattering operation can be performed in advance, and the second scattering operation can be performed on unpredictable burst hot URL requests, speeding up the handling of hot services.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present invention discloses an intelligent hotspot scattering method, apparatus, storage medium, and computer device. The disclosed method includes: learning the request-volume curve of a URL based on an artificial intelligence learning model and predicting the URL's request volume; determining a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and performing a first hotspot scattering operation on the URL; determining a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and performing a second hotspot scattering operation on the URL. The disclosed technical solution can automatically predict non-bursty hot URL requests and thus perform the first scattering operation in advance, and can perform the second scattering operation on unpredictable burst hot URL requests, speeding up the handling of hot services.

Description

Intelligent hotspot scattering method, apparatus, storage medium, and computer device
This application claims priority to the Chinese patent application filed with the China Patent Office on February 15, 2019, with application number 201910117615.5 and invention title "Intelligent hotspot scattering method, apparatus, storage medium, and computer device", the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to the field of computer network load balancing, and in particular to an intelligent hotspot scattering method, apparatus, storage medium, and computer device.
Background
To reduce server-side processing latency and improve user experience, URLs (Uniform Resource Locators), and hot URLs in particular, usually need to be scattered/balanced.
The prior art typically achieves load balancing for hot URLs by distributing concentrated URLs randomly or evenly across back-end machines (for example, clusters of origin node servers or edge node servers).
However, prior-art solutions usually perform the scattering operation only after a hot URL has already appeared, distributing requests for the hot URL across multiple machines, and therefore have the following drawbacks:
1. No judgment or pre-processing is made in advance regarding a URL's potential to become hot. Scattering only after the hotspot is detected means that machines without a cache go straight back to the upper-layer node or the origin, wasting large amounts of traffic.
2. Even with an intra-group shared-cache scheme that redirects all back-to-origin traffic to a unified port, when the hot file is large (for example, over 1 GB), the hotspot pile-up cannot be resolved until every machine has finished fetching the file, and the pressure on the unified port is not reduced.
To solve the above problems, a new technical solution is needed.
Summary of the Invention
The intelligent hotspot scattering method according to the present invention includes:
learning the request-volume curve of a URL based on an artificial intelligence learning model and predicting the URL's request volume;
determining a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and performing a first hotspot scattering operation on the URL;
determining a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and performing a second hotspot scattering operation on the URL.
In the intelligent hotspot scattering method according to the present invention, the step of performing the first hotspot scattering operation on the URL includes:
distributing requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers.
In the intelligent hotspot scattering method according to the present invention, the step of performing the second hotspot scattering operation on the URL includes:
finding the landing process of the second URL;
distributing new requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process; and/or
redistributing old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process.
In the intelligent hotspot scattering method according to the present invention, the step of performing the second hotspot scattering operation on the URL further includes:
determining that the size of the file requested by the second URL is greater than a specified file size; and/or
after the landing process has finished, for the request directed at the second URL, fetching and sending the requested file or sending a response to the requested file, distributing new requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or redistributing old requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or
the step of learning the request-volume curve of a URL based on the artificial intelligence learning model and predicting the URL's request volume includes:
performing cluster analysis on URLs with a clustering algorithm, automatically plotting the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predicting the request volume of URLs of that category from the category's request-volume curve.
The intelligent hotspot scattering apparatus according to the present invention includes:
an artificial intelligence learning model, configured to learn the request-volume curve of a URL and predict the URL's request volume;
a first hotspot scattering module, configured to determine a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and perform a first hotspot scattering operation on the URL;
a second hotspot scattering module, configured to determine a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and perform a second hotspot scattering operation on the URL.
In the intelligent hotspot scattering apparatus according to the present invention, the first hotspot scattering module is further configured to:
distribute requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers.
In the intelligent hotspot scattering apparatus according to the present invention, the second hotspot scattering module is further configured to:
find the landing process of the second URL;
distribute new requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process; and/or
redistribute old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process, and/or
the artificial intelligence learning model is further configured to:
perform cluster analysis on URLs with a clustering algorithm, automatically plot the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predict the request volume of URLs of that category from the category's request-volume curve.
In the intelligent hotspot scattering apparatus according to the present invention, the second hotspot scattering module is further configured to:
determine that the size of the file requested by the second URL is greater than a specified file size; and/or
after the landing process has finished, for the request directed at the second URL, fetching and sending the requested file or sending a response to the requested file, distribute new requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or redistribute old requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers.
In the storage medium according to the present invention, a computer program is stored on the storage medium; when executed by a processor, the program implements the steps of the method described above.
The computer device according to the present invention includes a memory, a processor, and a computer program stored in the memory and runnable on the processor; the processor implements the steps of the method described above when executing the program.
The above technical solution of the present invention can automatically predict non-bursty hot URL requests and thus perform the first scattering operation in advance, and can perform the second scattering operation on unpredictable burst hot URL requests, speeding up the handling of hot services.
Brief Description of the Drawings
The drawings described here are provided for further understanding of the embodiments of the present invention and constitute part of this application; the illustrative embodiments of the present invention and their descriptions are used to explain the embodiments and do not unduly limit them. In the drawings:
Fig. 1 exemplarily shows a schematic flowchart of the intelligent hotspot scattering method according to the present invention.
Fig. 2 exemplarily shows a schematic block diagram of the intelligent hotspot scattering apparatus according to the present invention.
Fig. 3 exemplarily shows a schematic diagram of an embodiment that can implement the intelligent hotspot scattering method according to the present invention.
Fig. 4 exemplarily shows an example of a model graph fitted by the artificial intelligence learning model usable in the intelligent hotspot scattering method of the present invention.
Detailed Description
The embodiments of the present invention are further described below with reference to the drawings and specific implementations.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention. It should be noted that, where no conflict arises, the embodiments in this application and the features in the embodiments may be combined with one another arbitrarily.
To solve the technical problems described in the Background section, the present invention is based on the general idea of predicting a (predictable) hotspot in advance, before it bursts (that is, when the predicted request volume is greater than or equal to the predetermined request volume for the specified URL), and performing a first processing operation, and of performing a second (for example, fast) processing operation when an (unpredictable) hotspot bursts (that is, when the request volume cannot be predicted and the actual request volume is greater than or equal to the predetermined request volume for the specified URL), and proposes the following technical solutions.
Fig. 1 exemplarily shows a schematic flowchart of the intelligent hotspot scattering method according to the present invention.
As shown in Fig. 1, the intelligent hotspot scattering method according to the present invention includes:
Step S102: learning the request-volume curve of a URL based on an artificial intelligence learning model and predicting the URL's request volume;
Step S104: determining a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and performing a first hotspot scattering operation on the URL;
Step S106: determining a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and performing a second hotspot scattering operation on the URL.
Optionally, in step S104, the step of performing the first hotspot scattering operation on the URL includes:
distributing requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers.
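As a concrete illustration, this random-or-even distribution of first-URL requests across cache/origin server processes can be sketched as follows. The slot list, the round-robin policy, and all names here are illustrative assumptions, not the patent's actual implementation:

```python
import itertools
import random

def make_scatterer(slots, mode="even"):
    """Return a function mapping each incoming request to a slot.

    slots: list of (server, pid) tuples standing in for cache/origin processes.
    mode: "even" cycles through slots round-robin; "random" picks uniformly.
    """
    if mode == "even":
        cycle = itertools.cycle(slots)
        return lambda request: next(cycle)
    return lambda request: random.choice(slots)

slots = [("cache-1", 101), ("cache-1", 102), ("cache-2", 201), ("origin-1", 301)]
scatter = make_scatterer(slots, mode="even")
assignments = [scatter(f"req-{i}") for i in range(8)]
# Even mode visits every slot once before repeating any of them.
assert assignments[:4] == slots
```

Round-robin gives a strictly even spread, while random selection needs no shared counter state across dispatchers; the text permits either policy.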
Optionally, in step S106, the step of performing the second hotspot scattering operation on the URL includes:
finding the landing process of the second URL;
distributing new requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process; and/or
redistributing old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process.
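A minimal sketch of how the landing process might be located and then excluded when redistributing requests for a burst-hot URL; the MD5-based landing computation and the slot layout are assumptions made for illustration only:

```python
import hashlib

def landing_slot(url, slots):
    """Landing process: the slot the URL hashes to under the original scheme."""
    h = int(hashlib.md5(url.encode()).hexdigest(), 16)
    return slots[h % len(slots)]

def scatter_excluding_landing(url, request_id, slots):
    """Assign a request for a burst-hot URL to any slot except its landing slot."""
    busy = landing_slot(url, slots)
    others = [s for s in slots if s != busy]
    return others[hash((url, request_id)) % len(others)]

slots = [("cache-1", 101), ("cache-1", 102), ("cache-2", 201), ("cache-2", 202)]
hot = "http://example.com/big-file"
busy = landing_slot(hot, slots)
# Every redistributed request avoids the busy landing process.
for i in range(20):
    assert scatter_excluding_landing(hot, i, slots) != busy
```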
Optionally, in step S106, the step of performing the second hotspot scattering operation on the URL further includes:
determining that the size of the file requested by the second URL is greater than a specified file size; and/or
after the landing process has finished, for the request directed at the second URL, fetching and sending the requested file or sending a response to the requested file, distributing new requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or redistributing old requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or
step S102 includes:
performing cluster analysis on URLs with a clustering algorithm, automatically plotting the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predicting the request volume of URLs of that category from the category's request-volume curve.
Optionally, the request-volume curve adopts a three-stage model curve.
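The clustering in step S102 can be sketched with a naive nearest-centroid assignment over historical request-volume series, averaging each category's members into its request-volume curve. The seed centroids, sample URLs, and averaging rule are illustrative assumptions, not the patent's algorithm:

```python
def cluster_urls(series_by_url, centroids):
    """Assign each URL's historical request-volume series to the nearest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    groups = {i: [] for i in range(len(centroids))}
    for url, series in series_by_url.items():
        best = min(range(len(centroids)), key=lambda i: dist(series, centroids[i]))
        groups[best].append(url)
    return groups

def category_curve(series_by_url, urls):
    """Average the member series to get the category's request-volume curve."""
    length = len(next(iter(series_by_url.values())))
    return [sum(series_by_url[u][t] for u in urls) / len(urls) for t in range(length)]

history = {
    "/a.mp4": [10, 20, 80, 300],   # bursty shape
    "/b.mp4": [12, 25, 90, 320],
    "/c.css": [50, 52, 51, 50],    # flat shape
}
centroids = [[10, 20, 85, 310], [50, 50, 50, 50]]
groups = cluster_urls(history, centroids)
assert sorted(groups[0]) == ["/a.mp4", "/b.mp4"]
assert groups[1] == ["/c.css"]
curve = category_curve(history, groups[0])
assert curve == [11.0, 22.5, 85.0, 310.0]
```

The per-category curve then serves as the prediction basis for every URL in that category.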
Fig. 2 exemplarily shows a schematic block diagram of the intelligent hotspot scattering apparatus according to the present invention.
As shown in Fig. 2, the intelligent hotspot scattering apparatus 200 according to the present invention includes:
an artificial intelligence learning model 201, configured to learn the request-volume curve of a URL and predict the URL's request volume;
a first hotspot scattering module 203, configured to determine a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and perform a first hotspot scattering operation on the URL;
a second hotspot scattering module 205, configured to determine a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and perform a second hotspot scattering operation on the URL.
Optionally, the first hotspot scattering module 203 is further configured to:
distribute requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers.
Optionally, the second hotspot scattering module 205 is further configured to:
find the landing process of the second URL;
distribute new requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process; and/or
redistribute old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process.
Optionally, the second hotspot scattering module 205 is further configured to:
determine that the size of the file requested by the second URL is greater than a specified file size; and/or
after the landing process has finished, for the request directed at the second URL, fetching and sending the requested file or sending a response to the requested file, distribute new requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or redistribute old requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or
the artificial intelligence learning model 201 is further configured to:
perform cluster analysis on URLs with a clustering algorithm, automatically plot the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predict the request volume of URLs of that category from the category's request-volume curve.
In conjunction with the above method and apparatus according to the present invention, a storage medium is also proposed, on which a computer program is stored; when executed by a processor, the program implements the steps of the method described above.
In conjunction with the above method and apparatus according to the present invention, a computer device is also proposed, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; the processor implements the steps of the method described above when executing the program.
To enable those skilled in the art to understand the above technical solutions more clearly, they are described below with reference to a specific embodiment.
Fig. 3 exemplarily shows a schematic diagram of an embodiment that can implement the intelligent hotspot scattering method according to the present invention.
As shown in Fig. 3, this embodiment ("hot file analysis, prefetching, and scattering", corresponding to the intelligent hotspot scattering method described above with reference to Fig. 1) includes the following processing steps:
1. When a (predictable) hotspot "has not yet burst" (corresponding to step S104 above):
1.1. The artificial intelligence learning model predicts possible hot URLs (corresponding to step S102 above).
For example, the core of hot URL prediction can be a model graph fitted by artificial intelligence (corresponding to the request-volume curve above), with parameters computed per service (for example, per URL), such as the first predetermined request-volume threshold corresponding to the URL above.
Fig. 4 exemplarily shows an example of a model graph fitted by the artificial intelligence learning model usable in the intelligent hotspot scattering method of the present invention.
For example, a storage medium implementing the above intelligent hotspot scattering method using this model graph can store software installation packages in forms such as apk to facilitate product release.
As shown in the example model graph of Fig. 4, it contains three phases A, B, and C (that is, the three-stage model curve mentioned above). In phase A, the service has not yet burst, and the request volume is small as traffic is just ramping up. In phase B, the number of requests rises and reaches a higher service level. In phase C, the service bursts, and the request volume reaches a peak in an instant.
For example, the model predicts during the A->B transition that there may be a trend toward a C burst. For example, the first predetermined request-volume threshold corresponding to the URL above may be set to the ordinate value at the boundary point T between phase B and phase C.
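The threshold test at point T can be sketched as follows; the one-step linear extrapolation standing in for the learned model, and the sample numbers, are illustrative assumptions:

```python
def predict_next(curve):
    """Naive stand-in for the learned model: extrapolate the last growth step."""
    return curve[-1] + (curve[-1] - curve[-2])

def classify(url, observed, threshold_t):
    """Flag the URL as a 'first URL' when its predicted request volume reaches
    the threshold T (the ordinate at the assumed B/C boundary)."""
    predicted = predict_next(observed)
    return ("first URL", predicted) if predicted >= threshold_t else ("normal", predicted)

# A steep A->B ramp; T chosen at the assumed B/C boundary.
label, predicted = classify("/a.mp4", [10, 40, 160, 640], threshold_t=1000)
assert predicted == 1120
assert label == "first URL"
```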
Optionally, a candidate URL so determined (that is, a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL) can be submitted to the prefetch center within the group (for example, a cluster of multiple cache servers), that is, the cache server designated in the server cluster as the unified port that performs prefetch operations.
1.2. Submit the URL to the group's prefetch center and prefetch the URL into a designated process within the group.
For example, the group's prefetch center can perform the following operations:
1.2.1. From the obtained URL, compute the URL's upper-layer nodes: the parent, or other machines in the same node's province. Then probe both the parent and the origin, determine which side's download speed is faster, and select the faster one as the chosen upper-layer node.
1.2.2. Select an idle process from the whole group of machines as the target process a for the prefetch. Have process a issue the HTTP request to fetch the file as the base cache file afile (a local file); once the file is fully fetched, proceed to the next step.
1.2.3. Compute the process IDs (pids) on which the URL will land after scattering, and have the prefetch program tell these processes to fetch file afile from process a, thereby guaranteeing only a single request back to the upper-layer node (that is, the parent node) or the origin node.
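Steps 1.2.1 to 1.2.3 can be sketched as building a single prefetch plan: probe the upper layer, fetch once through one idle process, and point every landing process at that local copy. The probe-result format, process IDs, and field names are illustrative assumptions:

```python
def pick_upper_node(probe_results):
    """Choose parent vs. origin by probed download speed (bytes/s);
    probe_results maps node name -> measured speed."""
    return max(probe_results, key=probe_results.get)

def plan_prefetch(url, slots, probe_results, idle_pid):
    """Build a one-fetch plan: the idle process fetches once from the fastest
    upper node; every landing process then copies the file from it locally."""
    upper = pick_upper_node(probe_results)
    landing = [pid for _, pid in slots]  # pids the scattered URL will land on
    return {"url": url, "upper_node": upper, "fetcher": idle_pid,
            "local_sources": {pid: idle_pid for pid in landing if pid != idle_pid}}

plan = plan_prefetch("/a.mp4", [("c1", 101), ("c1", 102), ("c2", 201)],
                     {"parent": 42.0e6, "origin": 10.5e6}, idle_pid=102)
assert plan["upper_node"] == "parent"       # faster probe wins
assert set(plan["local_sources"]) == {101, 201}
assert all(src == 102 for src in plan["local_sources"].values())
```

Only the fetcher ever goes upstream, which is exactly the "single back-to-upper-layer request" guarantee of step 1.2.3.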
1.3. Scatter the URL.
For example, the URL can be scattered on the dispatch server according to scattering rules (for example, randomly or evenly), resolving the hotspot quickly before it has burst.
However, some hotspots show no sign in advance (that is, their request volume is unpredictable); in that case, the method described below is needed.
2. When an (unpredictable) "hotspot bursts" (corresponding to step S106 above):
2.1. Hotspot scattering and intra-group shared cache.
For example, when a hotspot bursts, the hotspot detection device on the server (which may correspond, for example, to the second hotspot scattering module 205 above) recognizes the burst hotspot and scatters it. For example, on servers without a cache, the request can also be directed to the group port given by the original hash, thereby realizing an intra-group shared cache.
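The intra-group shared cache relies on every group member computing the same "original hash" target for a URL, so cache misses converge on one member instead of each going upstream. The SHA-1 hash and port list below are illustrative assumptions:

```python
import hashlib

def shared_cache_port(url, group_ports):
    """All servers in the group compute the same 'original hash' port for a URL,
    so cache-miss requests converge on one member rather than going upstream."""
    h = int(hashlib.sha1(url.encode()).hexdigest(), 16)
    return group_ports[h % len(group_ports)]

group_ports = [8001, 8002, 8003, 8004]
url = "http://example.com/video.mp4"
# Every member of the group agrees on the same target port for this URL.
ports = {shared_cache_port(url, group_ports) for _ in range(5)}
assert len(ports) == 1
assert ports.pop() in group_ports
```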
However, when the file exceeds 1 GB (corresponding to the specified file size above) and the request volume is high, the other servers in the group cannot fetch the URL quickly; requests pile up, the scattering effect is not achieved, and the problem cannot be resolved quickly. In that case, the following operations can be performed to reduce processing latency and improve user experience.
2.2. Find the hot URL.
2.3. Compute the URL's landing process and mask that process for other URLs.
For example, the landing process b of the hot URL can be computed, and requests of other services landing on that process can be rehashed and directed to other idle processes, protecting those services from impact while the hot URL continues to run normally in the busy process.
Optionally, for download-type services or other services supporting resumable transfer, connections remaining in the masked process b can be forcibly killed, so that when they reconnect to the server they benefit from the prefetch, and after being scattered again the services automatically run smoothly.
That is, through process isolation, when a single service bursts and cannot recover instantly, other services are first migrated out of the process executing the burst service, ensuring their normal operation.
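This process-isolation step can be sketched as rehashing every non-hot service away from the busy process; the service names and process IDs are illustrative assumptions:

```python
def isolate_busy_process(assignments, hot_service, busy_pid, idle_pids):
    """Rehash every other service currently landing on the busy process to an
    idle process; the hot service keeps running on busy_pid."""
    moved = {}
    for service, pid in assignments.items():
        if pid == busy_pid and service != hot_service:
            moved[service] = idle_pids[hash(service) % len(idle_pids)]
        else:
            moved[service] = pid
    return moved

assignments = {"hot-url": 7, "svc-a": 7, "svc-b": 3, "svc-c": 7}
after = isolate_busy_process(assignments, "hot-url", busy_pid=7, idle_pids=[11, 12])
assert after["hot-url"] == 7                 # hot URL stays on the busy process
assert after["svc-b"] == 3                   # services elsewhere are untouched
assert after["svc-a"] in (11, 12) and after["svc-c"] in (11, 12)
```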
2.4. Submit the URL to the group's prefetch center and prefetch the URL into a designated process within the group.
This step comprises the same operations as step 1.2.
2.5. Restore the state of the masked process.
After the prefetch operation on process b completes, the state of the masked process b (which can again be used to allocate and execute other services) is restored, thereby restoring normal service.
The above technical solution of the present invention has the following advantages:
1. Non-bursty hot URL requests can be predicted automatically, so the first scattering operation (including prefetching into cache) can be performed in advance, solving the problem before the hotspot causes substantial service impact (for example, large latency); the second scattering operation can be performed on unpredictable burst hot URL requests, speeding up the handling of hot services.
2. For example, this can be combined with the operation of redirecting all back-to-origin traffic to a unified port, reducing the duplicated traffic that fetches resources from the upper layer (that is, the upper-layer origin node), saving cost and improving the carrying capacity of edge nodes and upper-layer nodes. It can also be combined with selecting a relatively idle link and a relatively idle process to perform a single fast back-to-origin prefetch and then quickly copying the file to the other machines connected via the internal network to that unified port, saving duplicated back-to-origin traffic to upper-layer nodes.
3. After a hot request has been handled, other requests can be assigned to the process originally used for handling it, repairing the impact of the hotspot as soon as possible.
4. Through process isolation, when a single service bursts and cannot recover instantly, other services are first migrated out of the process executing the burst service, ensuring their normal operation.
The content described above may be implemented individually or in various combinations, and such variants all fall within the protection scope of the present invention.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and apparatuses, may be implemented as software, firmware, hardware, or suitable combinations thereof. In a hardware implementation, the division between the functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed cooperatively by several physical components. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storing information (such as computer-readable instructions, data structures, program modules, or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical storage, magnetic cassette, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Moreover, as is well known to those of ordinary skill in the art, communication media typically embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.
Finally, it should be noted that the above embodiments are intended only to illustrate, not to limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments or replace some of the technical features with equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.
Industrial Applicability
The embodiments of the present invention provide an intelligent hotspot scattering method and apparatus. The request-volume curve of a URL can be learned and the URL's request volume predicted based on an artificial intelligence learning model; a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL is determined as a first URL, and a first hotspot scattering operation is performed on it; a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL is determined as a second URL, and a second hotspot scattering operation is performed on it. Non-bursty hot URL requests can thus be predicted automatically, so the first scattering operation can be performed in advance, and the second scattering operation can be performed on unpredictable burst hot URL requests, speeding up the handling of hot services.

Claims (10)

  1. An intelligent hotspot scattering method, comprising:
    learning the request-volume curve of a URL based on an artificial intelligence learning model and predicting the URL's request volume;
    determining a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and performing a first hotspot scattering operation on the URL;
    determining a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and performing a second hotspot scattering operation on the URL.
  2. The intelligent hotspot scattering method of claim 1, wherein the step of performing the first hotspot scattering operation on the URL comprises:
    distributing requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers.
  3. The intelligent hotspot scattering method of claim 1, wherein the step of performing the second hotspot scattering operation on the URL comprises:
    finding the landing process of the second URL;
    distributing new requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process; and/or
    redistributing old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process.
  4. The intelligent hotspot scattering method of claim 3, wherein the step of performing the second hotspot scattering operation on the URL further comprises:
    determining that the size of the file requested by the second URL is greater than a specified file size; and/or
    after the landing process has finished, for the request directed at the second URL, fetching and sending the requested file or sending a response to the requested file, distributing new requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or redistributing old requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or
    the step of learning the request-volume curve of a URL based on the artificial intelligence learning model and predicting the URL's request volume comprises:
    performing cluster analysis on URLs with a clustering algorithm, automatically plotting the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predicting the request volume of URLs of that category from the category's request-volume curve.
  5. An intelligent hotspot scattering apparatus, comprising:
    an artificial intelligence learning model, configured to learn the request-volume curve of a URL and predict the URL's request volume;
    a first hotspot scattering module, configured to determine a URL whose predicted request volume is greater than or equal to the first predetermined request-volume threshold corresponding to that URL as a first URL, and perform a first hotspot scattering operation on the URL;
    a second hotspot scattering module, configured to determine a URL whose request volume cannot be predicted and whose actual request volume is greater than or equal to the second predetermined request-volume threshold corresponding to that URL as a second URL, and perform a second hotspot scattering operation on the URL.
  6. The intelligent hotspot scattering apparatus of claim 5, wherein the first hotspot scattering module is further configured to:
    distribute requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers.
  7. The intelligent hotspot scattering apparatus of claim 5, wherein the second hotspot scattering module is further configured to:
    find the landing process of the second URL;
    distribute new requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process; and/or
    redistribute old requests of the first URL randomly or evenly to processes of the multiple cache servers and/or origin servers other than the landing process, and/or
    the artificial intelligence learning model is further configured to:
    perform cluster analysis on URLs with a clustering algorithm, automatically plot the request volume or real-time request volume of URLs of the same category over a preset historical period as the request-volume curve corresponding to that category of URLs, and predict the request volume of URLs of that category from the category's request-volume curve.
  8. The intelligent hotspot scattering apparatus of claim 7, wherein the second hotspot scattering module is further configured to:
    determine that the size of the file requested by the second URL is greater than a specified file size; and/or
    after the landing process has finished, for the request directed at the second URL, fetching and sending the requested file or sending a response to the requested file, distribute new requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers, and/or redistribute old requests of the first URL randomly or evenly to processes of multiple cache servers and/or origin servers.
  9. A storage medium on which a computer program is stored, wherein the steps of the method of any one of claims 1 to 4 are implemented when the program is executed by a processor.
  10. A computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the steps of the method of any one of claims 1 to 4 when executing the program.
PCT/CN2020/075343 2019-02-15 2020-02-14 Intelligent hotspot scattering method, apparatus, storage medium, and computer device WO2020164612A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US17/430,399 US11562042B2 (en) 2019-02-15 2020-02-14 Intelligent hotspot scattering method, apparatus, storage medium, and computer device
SG11202108623VA SG11202108623VA (en) 2019-02-15 2020-02-14 Intelligent hotspot scattering method, apparatus, storage medium, and computer device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910117615.5A CN111585908B (zh) 2019-02-15 2019-02-15 Intelligent hotspot scattering method, apparatus, storage medium, and computer device
CN201910117615.5 2019-02-15

Publications (1)

Publication Number Publication Date
WO2020164612A1 true WO2020164612A1 (zh) 2020-08-20

Family

ID=72044002

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/075343 WO2020164612A1 (zh) 2019-02-15 2020-02-14 Intelligent hotspot scattering method, apparatus, storage medium, and computer device

Country Status (4)

Country Link
US (1) US11562042B2 (zh)
CN (2) CN114884885B (zh)
SG (1) SG11202108623VA (zh)
WO (1) WO2020164612A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153713B (zh) * 2020-10-23 2021-08-20 珠海格力电器股份有限公司 Obstacle determination method and apparatus, storage medium, and electronic apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2805067A1 (en) * 2012-02-02 2013-08-02 Comcast Cable Communications, Llc Content distribution network supporting popularity-based caching
CN104618493A (zh) * 2015-02-12 2015-05-13 小米科技有限责任公司 Data request processing method and apparatus
CN106161485A (zh) * 2015-03-23 2016-11-23 腾讯科技(深圳)有限公司 Resource scheduling method, apparatus and system for a basic service cluster
CN107124630A (zh) * 2017-03-30 2017-09-01 华为技术有限公司 Method and apparatus for node data management
CN109327550A (zh) * 2018-11-30 2019-02-12 网宿科技股份有限公司 Access request distribution method, apparatus, storage medium and computer device

Family Cites Families (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6728770B1 (en) * 1999-12-03 2004-04-27 Storage Technology Corporation Method and apparatus for workload balancing along multiple communication paths to a plurality of devices
KR20050021752A (ko) 2003-08-26 2005-03-07 정길도 Hotspot prediction algorithm for improving the performance of distributed web caching
US7680858B2 (en) * 2006-07-05 2010-03-16 Yahoo! Inc. Techniques for clustering structurally similar web pages
JP4950596B2 (ja) * 2006-08-18 2012-06-13 クラリオン株式会社 Predicted traffic information generation method, predicted traffic information generation device, and traffic information display terminal
US7584294B2 (en) * 2007-03-12 2009-09-01 Citrix Systems, Inc. Systems and methods for prefetching objects for caching using QOS
US20100306249A1 (en) * 2009-05-27 2010-12-02 James Hill Social network systems and methods
US8725866B2 (en) * 2010-08-16 2014-05-13 Symantec Corporation Method and system for link count update and synchronization in a partitioned directory
US8964546B1 (en) * 2012-01-31 2015-02-24 Google Inc. Indirect measurement of user traffic on links
US10261938B1 (en) * 2012-08-31 2019-04-16 Amazon Technologies, Inc. Content preloading using predictive models
CN103281367B (zh) * 2013-05-22 2016-03-02 北京蓝汛通信技术有限责任公司 Load balancing method and apparatus
CN103414761B (zh) * 2013-07-23 2017-02-08 北京工业大学 Hadoop-based cloud resource scheduling method for mobile terminals
US9652538B2 (en) * 2013-12-11 2017-05-16 Ebay Inc. Web crawler optimization system
US9961106B2 (en) * 2014-09-24 2018-05-01 Arbor Networks, Inc. Filtering legitimate traffic elements from a DoS alert
JP2017028431A (ja) * 2015-07-21 2017-02-02 富士通株式会社 Transmission device and flow-rate measurement method
US10439990B2 (en) 2015-11-25 2019-10-08 Barracuda Networks, Inc. System and method to configure a firewall for access to a captive network
US9990341B2 (en) * 2015-12-04 2018-06-05 International Business Machines Corporation Predictive approach to URL determination
CN105610716B (zh) * 2016-03-09 2019-01-08 北京邮电大学 SDN-based multimedia traffic optimization scheduling method, apparatus and system
US10362098B2 (en) * 2016-06-21 2019-07-23 Facebook, Inc. Load balancing back-end application services utilizing derivative-based cluster metrics
CN107707597A (zh) * 2017-04-26 2018-02-16 贵州白山云科技有限公司 Method and apparatus for balanced handling of bursty hotspot access
US11195106B2 (en) * 2017-06-28 2021-12-07 Facebook, Inc. Systems and methods for scraping URLs based on viewport views
CN107729139B (zh) * 2017-09-18 2021-02-26 北京京东尚科信息技术有限公司 Method and apparatus for concurrently acquiring resources
CN109218441B (zh) * 2018-10-18 2021-05-11 哈尔滨工业大学 Dynamic load balancing method for P2P networks based on prediction and region partitioning

Also Published As

Publication number Publication date
SG11202108623VA (en) 2021-09-29
US11562042B2 (en) 2023-01-24
US20220107986A1 (en) 2022-04-07
CN111585908A (zh) 2020-08-25
CN111585908B (zh) 2022-03-04
CN114884885B (zh) 2024-03-22
CN114884885A (zh) 2022-08-09

Similar Documents

Publication Publication Date Title
US20210144423A1 (en) Dynamic binding for use in content distribution
CN109375872B (zh) Data access request processing method, apparatus, device and storage medium
EP3334123B1 (en) Content distribution method and system
US11323514B2 (en) Data tiering for edge computers, hubs and central systems
US9503518B2 (en) Method and apparatus for buffering and obtaining resources, resource buffering system
CN110134495B (zh) Online cross-host container migration method, storage medium and terminal device
US8510742B2 (en) Job allocation program for allocating jobs to each computer without intensively managing load state of each computer
CN104679594B (zh) Middleware distributed computing method
CN108900626B (zh) Data storage method, apparatus and system in a cloud environment
JP2013506908A (ja) Dynamic load balancing and scaling of allocated cloud resources in an enterprise network
US20160269479A1 (en) Cloud virtual server scheduling method and apparatus
CN109618174A (zh) Live-streaming data transmission method, apparatus, system and storage medium
CN105068755A (zh) Data replica storage method for cloud-computing content delivery networks
CN105516267B (zh) Efficient operation method for a cloud platform
CN109960579B (zh) Method and apparatus for adjusting service containers
US9363199B1 (en) Bandwidth management for data services operating on a local network
WO2020164612A1 (zh) Intelligent hotspot scattering method, apparatus, storage medium, and computer device
JP6374841B2 (ja) Virtual machine placement apparatus and virtual machine placement method
CN110233892A (zh) CDN hotspot resource processing method and system, and global hot-fragment scheduling system
KR102400158B1 (ko) Method and apparatus for dynamic resource allocation for service chaining in a hierarchical 5G network structure
CN115827745A (zh) Implementation method and apparatus for an in-memory database cluster, and in-memory database cluster
US10649816B2 (en) Elasticity engine for availability management framework (AMF)
CN112015515B (zh) Method and apparatus for instantiating a virtual network function
CN108989370A (zh) Data storage method, device and system in a CDN system
US20190052534A1 (en) Managing heterogeneous cluster environment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20755255

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20755255

Country of ref document: EP

Kind code of ref document: A1