WO2021051576A1 - Method, apparatus, device, and storage medium for elastically executing a hot container (弹性执行热容器方法、装置、设备和存储介质) - Google Patents

Method, apparatus, device, and storage medium for elastically executing a hot container

Info

Publication number
WO2021051576A1
WO2021051576A1 · PCT/CN2019/117876 · CN2019117876W
Authority
WO
WIPO (PCT)
Prior art keywords
container
thermal container
queue
target
target thermal
Prior art date
Application number
PCT/CN2019/117876
Other languages
English (en)
French (fr)
Inventor
宋杰 (Song Jie)
Original Assignee
平安科技(深圳)有限公司 (Ping An Technology (Shenzhen) Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology (Shenzhen) Co., Ltd. (平安科技(深圳)有限公司)
Publication of WO2021051576A1 publication Critical patent/WO2021051576A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5038Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the execution order of a plurality of tasks, e.g. taking priority or time dependency constraints into consideration
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • This application relates to the field of operations and maintenance, and in particular to a method, apparatus, device, and storage medium for elastically executing a hot container.
  • A rule engine evolved from the inference engine. It is embedded in an application to separate business decisions from the application code, and business decisions are written with predefined semantic modules. The engine accepts data input, interprets business rules, and makes business decisions according to those rules.
  • Rule engines are often used in scenarios such as credit-card limit approval, insurance underwriting, and risk control.
  • Existing rule-engine platforms allow users to write business rules online, but the inventor realized that when building a hot container through a warm pool, users still need to prepare a rule operating environment, which is relatively complicated to set up; moreover, the frequency of rule use has peaks and troughs, which leads to unreasonable resource allocation.
  • This application provides a method for elastically executing a hot container through configuration, which can solve the problem of unreasonable resource allocation in the prior art.
  • A first aspect of this application provides a method for elastically executing a hot container, including: receiving a user's access request; creating a pre-warmed container and a pre-warm queue according to the user's access request, where the pre-warmed container is used to deploy an operating environment and the pre-warm queue is a queue for storing pre-warmed containers; creating an initial hot container from the pre-warmed container, obtaining a target hot container by attaching a preset identification (ID) to the initial hot container, and placing the target hot container into a hot queue, where the hot queue is a queue for storing target hot containers and the initial hot container refers to the operating environment and operating rules; executing the executable script stored in the target hot container, computing the script's running result, and, based on the running result, extracting the target hot container from the hot queue and enabling it, where the executable script is used to compute the resources available in the target hot container, the resources including at least one of graphics-processor resources, central-processing-unit resources, memory resources, cache resources, and storage resources; after the running result is returned, setting the state of the target hot container to "preparing", and setting a timeout duration and a container threshold duration according to the available resources or parameters input by the user, where the timeout duration is less than the container threshold duration, the timeout duration is the maximum response duration, and the container threshold duration is the maximum duration the container is allowed to exist; if the target hot container does not receive a first request within the timeout duration, setting its state to "paused" and storing it in the hot queue, where the first request is a user's request to access the target hot container; if the target hot container receives the first request within the timeout duration, setting its state to "running" and processing the first request; computing the running time of the target hot container to obtain the container's existence duration; and, if the existence duration is greater than or equal to the container threshold duration, indexing to the target hot container through the preset ID and releasing it.
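The lifecycle in the first aspect (preparing → running/paused → released) can be sketched as a small state machine. This is a toy model under stated assumptions: the class and method names are illustrative, not from the source.

```python
from enum import Enum

class State(Enum):
    PREPARING = "preparing"
    RUNNING = "running"
    PAUSED = "paused"
    RELEASED = "released"

class HotContainer:
    """Toy model of the hot-container lifecycle; names are illustrative."""

    def __init__(self, container_id, timeout_s, threshold_s):
        # The timeout duration must be less than the container threshold duration.
        if timeout_s >= threshold_s:
            raise ValueError("timeout must be less than the container threshold")
        self.container_id = container_id
        self.timeout_s = timeout_s        # maximum response duration
        self.threshold_s = threshold_s    # maximum duration the container may exist
        self.state = State.PREPARING
        self.existence_s = 0.0            # accumulated running time

    def on_wait_for_first_request(self, received, waited_s):
        # A first request within the timeout -> running; otherwise pause and
        # return the container to the hot queue.
        if received and waited_s <= self.timeout_s:
            self.state = State.RUNNING
        else:
            self.state = State.PAUSED

    def accumulate_runtime(self, ran_s):
        # Existence duration = historical running time + current running time;
        # at or beyond the threshold, release (destroy) the container.
        self.existence_s += ran_s
        if self.existence_s >= self.threshold_s:
            self.state = State.RELEASED
```

A container that sees a request within its timeout transitions to running; once its accumulated existence duration reaches the threshold it is released regardless of state.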
  • A second aspect of this application provides an apparatus for elastically executing a hot container, including: an input/output module for receiving a user's access request; and a processing module for creating a pre-warmed container and a pre-warm queue according to the user's access request, where the pre-warmed container is used to deploy an operating environment and the pre-warm queue is a queue for storing pre-warmed containers; creating an initial hot container from the pre-warmed container, obtaining a target hot container by attaching a preset identification (ID) to the initial hot container, and placing the target hot container into a hot queue, where the hot queue is a queue for storing target hot containers and the initial hot container refers to the operating environment and operating rules; executing the executable script stored in the target hot container, computing the script's running result, and, based on the running result, extracting the target hot container from the hot queue and enabling it, where the executable script is used to compute the resources available in the target hot container, the resources including at least one of graphics-processor resources, central-processing-unit resources, memory resources, cache resources, and storage resources; after the running result is returned, setting the state of the target hot container to "preparing", and setting a timeout duration and a container threshold duration according to the available resources or parameters input by the user, where the timeout duration is less than the container threshold duration, the timeout duration is the maximum response duration, and the container threshold duration is the maximum duration the container is allowed to exist; if the target hot container does not receive a first request within the timeout duration, setting its state to "paused" and storing it in the hot queue, where the first request is a user's request to access the target hot container; if the target hot container receives the first request within the timeout duration, setting its state to "running" and processing the first request; computing the running time of the target hot container to obtain the container's existence duration; and, if the existence duration is greater than or equal to the container threshold duration, indexing to the target hot container through the preset ID and releasing it.
  • A third aspect of this application provides a device for elastically executing a hot container, including a memory and at least one processor, where the memory stores instructions and the memory and the at least one processor are interconnected by wires; the at least one processor invokes the instructions in the memory so that the device executes the method described in the first aspect.
  • A fourth aspect of this application provides a computer-readable storage medium storing computer instructions; when the computer instructions run on a computer, the computer is caused to execute the method described in the first aspect.
  • In the technical solution provided by this application, a user's access request is received; a pre-warmed container and a pre-warm queue are created according to the user's access request, where the pre-warmed container is used to deploy the operating environment and the pre-warm queue is a queue for storing pre-warmed containers; an initial hot container is created from the pre-warmed container, a target hot container is obtained by attaching a preset identification (ID) to the initial hot container, and the target hot container is placed into a hot queue, where the hot queue is a queue for storing target hot containers and the initial hot container refers to the operating environment and operating rules; the executable script stored in the target hot container is executed, its running result is computed, and, based on the running result, the target hot container is extracted from the hot queue and enabled, where the executable script computes the resources available in the target hot container, including at least one of graphics-processor, central-processing-unit, memory, cache, and storage resources; after the running result is returned, the container's state is set to "preparing" and the timeout and threshold durations are set, and the container is then paused, run, or released as described above.
  • By setting the container threshold duration and the pre-warm queue, users are freed from resource management, saving labor costs efficiently.
  • Through highly reliable elastic rule execution, system resources are scheduled flexibly, multiple tenants can reuse hot containers with the same configuration, idle-resource waste during trough periods is solved through the container threshold duration, and peak traffic is handled calmly.
  • FIG. 1 is a schematic flowchart of a method for elastically executing a hot container in an embodiment of this application.
  • FIG. 2 is a schematic structural diagram of an apparatus for elastically executing a hot container in an embodiment of this application.
  • FIG. 3 is a schematic structural diagram of a device for elastically executing a hot container in an embodiment of this application.
  • The embodiments of this application provide a method, apparatus, device, and storage medium for elastically executing a hot container.
  • Users are freed from resource management, which saves labor costs efficiently.
  • System resources can be scheduled flexibly, allowing multiple tenants to reuse resources; this avoids wasting idle resources during trough periods, supports elastic scaling, and copes calmly with peak traffic.
  • This application provides a method for elastically executing a hot container.
  • Referring to FIG. 1, an example of the method for elastically executing a hot container provided by this application includes the following steps.
  • The user's access request includes at least a Uniform Resource Locator, the client's Internet Protocol address, data stored on the user's local terminal (such as cookies), a user identifier, and the Referer (source-link) field in the Hypertext Transfer Protocol header.
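Collecting those fields from an incoming request can be sketched as below. The `X-User-Id` header name is an assumption for illustration; the source does not specify how the user identifier is carried.

```python
def parse_access_request(url, client_ip, headers):
    """Gather the access-request fields listed above from an HTTP request.
    'X-User-Id' is an assumed header name, not taken from the source."""
    return {
        "url": url,                               # Uniform Resource Locator
        "client_ip": client_ip,                   # client IP address
        "cookie": headers.get("Cookie", ""),      # data stored on the user's terminal
        "user_id": headers.get("X-User-Id", ""),  # user identifier (assumed header)
        "referer": headers.get("Referer", ""),    # source-link field in the HTTP header
    }
```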
  • The pre-warmed container is used to deploy the operating environment, and the pre-warm queue is the queue used to store pre-warmed containers.
  • The server reduces container startup time by using pre-warmed containers.
  • The hot queue is the queue used to store target hot containers, and the initial hot container refers to the operating environment and operating rules.
  • The operating environment refers to the system support required for the hot container to run, for example a Windows operating environment or a Linux operating environment.
  • The executable script is used to compute the resources available in the target hot container; the resources include at least one of graphics-processor resources, central-processing-unit resources, memory resources, cache resources, and storage resources.
  • The server accesses the target hot container through the container's IP address, initializes the interface, and places the executable script into the container for execution.
  • CPU resources are used to interpret computer instructions and process data in computer software.
  • Memory resources hold data while the container is running, and all data in memory is automatically cleared after it stops running. Storage resources are used to persist data. Graphics-processor resources are used to drive display conversion.
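What the executable script reports can be modeled as per-resource headroom. This is a minimal sketch, assuming the script receives total and used figures as dictionaries; the resource names and units are illustrative.

```python
def available_resources(total, used):
    """Compute per-resource headroom: what the target hot container can use.
    `total` and `used` are dicts keyed by resource name (illustrative units)."""
    return {name: total[name] - used.get(name, 0) for name in total}
```

For example, with 8 CPU cores of which 3 are in use, the script would report 5 available.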
  • The timeout duration is the maximum response duration, and the container threshold duration is the maximum duration the container is allowed to exist.
  • For example, if the user specifies that the maximum response time must not exceed 1 minute and that a container lasting 2 days is required, the corresponding container is generated according to the user's input.
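Choosing the two durations from user input, with the constraint that the timeout must be shorter than the threshold, can be sketched as follows. The default values stand in for figures derived from available resources; they are assumptions, not from the source.

```python
def set_durations(user_timeout_s=None, user_threshold_s=None,
                  default_timeout_s=60, default_threshold_s=2 * 24 * 3600):
    """Pick the timeout and container threshold durations from user input,
    falling back to defaults (stand-ins for resource-derived values)."""
    timeout_s = default_timeout_s if user_timeout_s is None else user_timeout_s
    threshold_s = default_threshold_s if user_threshold_s is None else user_threshold_s
    # The timeout duration must be less than the container threshold duration.
    if timeout_s >= threshold_s:
        raise ValueError("timeout must be shorter than the container threshold")
    return timeout_s, threshold_s
```

With the example above (1-minute response, 2-day container), this yields 60 seconds and 172 800 seconds.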
  • If the target hot container does not receive the first request within the timeout duration, its state is set to "paused" and it is stored in the hot queue.
  • The first request is a user's request to access the target hot container.
  • If the target hot container receives the first request within the timeout duration, its state is set to "running" and the first request is processed.
  • By setting the container state to "running", the server prevents the container's operation from being interrupted by other access requests while a request is being executed.
  • The server obtains the target hot container's historical running time and its current running time through a timer, and adds the two values to obtain the container's existence duration.
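That accumulation (historical running time plus the current run) can be sketched with a monotonic clock; the class name is illustrative.

```python
import time

class ExistenceTimer:
    """Adds historical running time to the current run, as described above."""

    def __init__(self, historical_s=0.0):
        self.historical_s = historical_s  # running time from previous runs
        self._started = None

    def start(self):
        self._started = time.monotonic()

    def stop(self):
        # Fold the finished run into the historical total.
        self.historical_s += time.monotonic() - self._started
        self._started = None

    def existence_s(self):
        # Existence duration = historical running time + current running time.
        current = 0.0 if self._started is None else time.monotonic() - self._started
        return self.historical_s + current
```

`time.monotonic()` is used rather than wall-clock time so the duration is unaffected by system clock changes.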
  • If the container's existence duration is greater than or equal to the container threshold duration, the server indexes to the target hot container through the preset ID and releases it.
  • Releasing the target hot container means destroying the container.
  • By setting the container threshold duration and the pre-warm queue, users are freed from resource management, saving labor costs efficiently.
  • System resources are scheduled flexibly, multiple tenants can reuse hot containers with the same configuration, idle-resource waste during trough periods is solved through the container threshold duration, and peak traffic is handled calmly.
  • Executing the executable script stored in the target hot container, computing its running result, and extracting and enabling the target hot container from the hot queue based on the running result includes:
  • computing the ID of the target hot container from the namespace, the executable script, and the version number; querying the hot queue for a target hot container with that ID; and extracting and enabling the existing target hot container.
  • A Base64 operation is performed on the CPU and memory values through the executable script, and the result serves as the key of the pre-warm queue. The key is used to query whether a corresponding pre-warmed container exists in the pre-warm queue; if it does, the target hot container's ID is obtained through that pre-warmed container and the target hot container is started. If no corresponding container exists, a target hot container with the corresponding host name and memory is created through the key.
  • Pre-warming containers speeds up hot-container creation, and encoding the container information with Base64 increases the security of the container.
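The Base64 key derivation described above can be sketched as below. The exact serialization of the CPU and memory values (`cpu=...;memory=...`) is an assumption; note that Base64 is an encoding, not an encryption scheme.

```python
import base64

def prewarm_key(cpu_cores, memory):
    """Base64-encode the CPU and memory values to form the pre-warm queue key.
    The serialization format is an illustrative assumption."""
    raw = f"cpu={cpu_cores};memory={memory}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

# Query the pre-warm queue (a dict here) by key: a hit means a matching
# pre-warmed container exists; a miss means one must be created.
prewarm_queue = {prewarm_key(2, "4G"): "prewarmed-container-001"}
```

The same CPU/memory spec always yields the same key, which is what lets tenants with identical configurations share pre-warmed containers.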
  • Computing the ID of the target hot container from the namespace, the executable script, and the version number, querying the hot queue based on that ID, and extracting and enabling the existing target hot container includes the following.
  • Take a user request to create a container with 4 GB of memory as an example: the server determines, based on the resource's namespace, whether a hot container with Memory: 4G exists. If it exists, the hot container is enabled directly; if it does not exist, one is created and then called.
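The ID lookup can be sketched as below. The source says the ID is computed from the namespace, the executable script, and the version number but does not name the function; a SHA-256 digest is one plausible, deterministic choice and is an assumption here.

```python
import hashlib

def hot_container_id(namespace, script_source, version):
    """Derive a deterministic container ID (SHA-256 here is an assumption)."""
    material = f"{namespace}|{script_source}|{version}".encode("utf-8")
    return hashlib.sha256(material).hexdigest()[:16]

def find_or_create(hot_queue, namespace, script_source, version):
    # Query the hot queue by ID; enable the container if present, else create it.
    cid = hot_container_id(namespace, script_source, version)
    if cid in hot_queue:
        return hot_queue[cid], "enabled"
    hot_queue[cid] = f"container-{cid}"
    return hot_queue[cid], "created"
```

Because the ID is a pure function of its inputs, repeated requests with the same namespace, script, and version land on the same container.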
  • Creating a hot container with the same attributes as the target hot container and storing it in the hot queue includes:
  • placing the target hot container into the hot queue, which includes a parallel processing queue and a serial processing queue.
  • The head-node linked list links together the head nodes of the message linked lists corresponding to each key in the hot queue.
  • The head-node linked list includes a parallel head-node linked list in the parallel processing queue and a serial head-node linked list in the serial processing queue.
  • The server selects serial or parallel processing according to the user's request, which can reduce enterprise expenditure and shorten user waiting time.
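The queue layout above can be sketched with per-key message lists plus a head-node view. This is a minimal model: the dict-of-deques representation stands in for the linked lists described in the source.

```python
from collections import deque

class HotQueue:
    """Sketch of a hot queue with parallel and serial sub-queues: each key
    maps to a message list, and the head-node list collects each list's head."""

    def __init__(self):
        self.parallel = {}  # key -> deque (per-key message linked list)
        self.serial = {}

    def put(self, key, item, parallel=True):
        bucket = self.parallel if parallel else self.serial
        bucket.setdefault(key, deque()).append(item)

    def head_nodes(self, parallel=True):
        # One head node per key: the front of that key's message list.
        bucket = self.parallel if parallel else self.serial
        return [q[0] for q in bucket.values() if q]
```

Processing then walks the parallel or serial head-node list, taking one head per key.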
  • Extracting the target hot container from the hot queue based on the running result and enabling it includes:
  • enabling the target hot container using a thread pool of a first threshold size; or
  • enabling the target hot container using a thread pool sized according to the number of user access requests.
  • When multiple users want to call the hot container at the same time, the server uses multithreading to increase the speed of container generation.
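The thread-pool activation can be sketched as below. The pool size of 4 and the capping rule are assumptions; the source only says the pool is sized by a first threshold or by the number of access requests.

```python
from concurrent.futures import ThreadPoolExecutor

FIRST_THRESHOLD = 4  # assumed size of the "first threshold number" pool

def activate(container_id):
    # Stand-in for starting a hot container.
    return f"{container_id}:running"

def activate_containers(container_ids, n_requests):
    # Size the pool by the number of user access requests, capped at the
    # first-threshold pool size (the cap is an assumption).
    workers = max(1, min(n_requests, FIRST_THRESHOLD))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(activate, container_ids))
```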
  • the method further includes:
  • If a bad access record appears in the user's access information, the user is determined to be an abnormal user and is prohibited from accessing.
  • When the server finds that a user is not accessing normally, it denies that user access, ensuring that the target hot container starts and runs normally and maintaining the normal operation of the server.
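The abnormal-user check can be sketched as a scan of the access records. The record format and the "bad" marker are illustrative assumptions; the source does not define how a bad access record is represented.

```python
BAD_MARKERS = ("bad",)  # illustrative; the record format is an assumption

def is_abnormal(access_records):
    """A user is abnormal if any access record carries a bad-access marker."""
    return any(marker in record
               for record in access_records
               for marker in BAD_MARKERS)

def authorize(user, access_records):
    # Deny abnormal users so the target hot container keeps running normally.
    if is_abnormal(access_records):
        return f"user {user}: access denied"
    return f"user {user}: access allowed"
```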
  • Before analyzing the user's access information and determining whether the user is abnormal, the method further includes:
  • writing the carried data into the back-end storage, and feeding back a request-completion instruction to the sender of the user's access request.
  • Through the above method, the server can save the data input by the user in the database.
  • FIG. 2 is a schematic structural diagram of an apparatus 20 for elastically executing a hot container, which can be applied to elastically executing hot containers.
  • The apparatus for elastically executing a hot container in this embodiment can implement the steps corresponding to the method executed in the embodiment corresponding to FIG. 1 above.
  • The functions implemented by the apparatus 20 can be realized by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules corresponding to the above functions, and the modules may be software and/or hardware.
  • The apparatus for elastically executing a hot container may include an input/output module 201 and a processing module 202.
  • The input/output module 201 can be used to control input, output, and acquisition operations.
  • The input/output module 201 can be used to receive a user's access request.
  • The processing module 202 can be used to create a pre-warmed container and a pre-warm queue according to the user's access request, where the pre-warmed container is used to deploy an operating environment and the pre-warm queue is a queue for storing pre-warmed containers; create an initial hot container from the pre-warmed container, obtain a target hot container by attaching a preset identification (ID) to the initial hot container, and place the target hot container into a hot queue, where the hot queue is a queue for storing target hot containers and the initial hot container refers to the operating environment and operating rules; execute the executable script stored in the target hot container, compute the script's running result, and, based on the running result, extract the target hot container from the hot queue and enable it, where the executable script is used to compute the resources available in the target hot container, the resources including at least one of graphics-processor, central-processing-unit, memory, cache, and storage resources; after the running result is returned, set the state of the target hot container to "preparing" and set the timeout duration and container threshold duration according to the available resources or user-input parameters, where the timeout duration is less than the container threshold duration, the timeout duration is the maximum response duration, and the container threshold duration is the maximum duration the container is allowed to exist; if the target hot container does not receive the first request within the timeout duration, set its state to "paused" and store it in the hot queue; if it receives the first request within the timeout duration, set its state to "running" and process the first request, where the first request is a user's request to access the target hot container; compute the running time of the target hot container to obtain the container's existence duration; and, if the existence duration is greater than or equal to the container threshold duration, index to the target hot container through the preset ID and release it.
  • the processing module 202 is further configured to:
  • perform a Base64 operation on the CPU and memory values through the executable script to serve as the key of the pre-warm queue, and query, based on the key, whether a pre-warmed container exists in the pre-warm queue;
  • if the pre-warmed container exists, acquire the ID of the target hot container through the pre-warmed container, and extract and start the target hot container based on that ID;
  • otherwise, compute the ID of the target hot container from the namespace, the executable script, and the version number, query the hot queue for a target hot container with that ID, and extract and enable the existing target hot container.
  • the processing module 202 is further configured to:
  • extract and enable the target hot container based on the available resources.
  • the processing module 202 is further configured to:
  • place the target hot container into the hot queue, which includes a parallel processing queue and a serial processing queue;
  • generate a head-node linked list, which links together the head nodes of the message linked lists corresponding to each key in the hot queue and which includes a parallel head-node linked list in the parallel processing queue and a serial head-node linked list in the serial processing queue;
  • process the serial head-node linked list or the parallel head-node linked list based on each key.
  • the processing module 202 is further configured to:
  • enable the target hot container using a thread pool of a first threshold size; or
  • enable the target hot container using a thread pool sized according to the number of the user's access requests.
  • the processing module 202 is further configured to:
  • if a bad access record appears in the user's access information, determine that the user is an abnormal user and prohibit the user from accessing.
  • the processing module 202 is further configured to:
  • write the carried data into the back-end storage, and feed back a request-completion instruction to the sender of the user's access request.
  • FIG. 3 shows a device that includes a processor, a memory, an input/output unit (which may also be a transceiver, not labeled in FIG. 3), and a computer program stored in the memory and runnable on the processor.
  • The computer program may be a program corresponding to the method for elastically executing a hot container in the embodiment corresponding to FIG. 1.
  • When the processor executes the computer program, the steps in the method executed by the apparatus 20 for elastically executing a hot container in the embodiment corresponding to FIG. 2 are implemented, and the functions of each module in the apparatus 20 are realized.
  • The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
  • The processor is the control center of the computer device, and it uses various interfaces and lines to connect the parts of the entire computer device.
  • The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the computer device by running or executing the computer program and/or modules stored in the memory and by calling the data stored in the memory.
  • The memory may mainly include a program storage area and a data storage area.
  • The program storage area may store an operating system, an application required by at least one function (such as a sound-playback function or an image-playback function), and the like; the data storage area may store data created according to the use of the device (such as audio data and video data).
  • The memory may include high-speed random-access memory, and may also include non-volatile memory such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic-disk storage device, a flash-memory device, or another volatile solid-state storage device.
  • The input/output unit may be replaced by a receiver and a transmitter, which may be the same physical entity or different physical entities; when they are the same physical entity, they may be collectively referred to as an input/output unit.
  • The input/output unit may be a transceiver.
  • the memory may be integrated in the processor, or may be provided separately from the processor.
  • the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium may be a non-volatile computer-readable storage medium or a volatile computer-readable storage medium.
  • The computer-readable storage medium stores computer instructions; when the computer instructions run on a computer, the computer is caused to perform the following steps:
  • receiving a user's access request, and creating a pre-warmed container and a pre-warm queue according to it, where the pre-warmed container is used to deploy an operating environment and the pre-warm queue is a queue for storing pre-warmed containers;
  • creating an initial hot container from the pre-warmed container, obtaining a target hot container by attaching a preset identification (ID) to the initial hot container, and placing the target hot container into a hot queue;
  • executing the executable script stored in the target hot container, computing the script's running result, and, based on the running result, extracting the target hot container from the hot queue and enabling it, where the executable script is used to compute the resources available in the target hot container, the resources including at least one of graphics-processor, central-processing-unit, memory, cache, and storage resources;
  • after the running result is returned, setting the state of the target hot container to "preparing" and setting the timeout duration and container threshold duration according to the available resources or user-input parameters, where the timeout duration is less than the container threshold duration, the timeout duration is the maximum response duration, and the container threshold duration is the maximum duration the container is allowed to exist;
  • if the target hot container does not receive the first request within the timeout duration, setting its state to "paused" and storing it in the hot queue, where the first request is a user's request to access the target hot container;
  • if the target hot container receives the first request within the timeout duration, setting its state to "running" and processing the first request;
  • computing the running time of the target hot container to obtain the container's existence duration; and, if the existence duration is greater than or equal to the container threshold duration, indexing to the target hot container through the preset ID and releasing it.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

A method, apparatus, device, and storage medium for elastically executing a hot container. The method includes: creating a pre-warmed container and a pre-warm queue (102); creating an initial hot container from the pre-warmed container, obtaining a target hot container by attaching a preset identification (ID) to the initial hot container, and placing the target hot container into a hot queue (103); executing the executable script stored in the target hot container, computing the script's running result, and, based on the running result, extracting the target hot container from the hot queue and enabling it (104); if the target hot container receives a first request within the timeout duration, setting its state to "running" and processing the first request (107); computing the running time of the target hot container to obtain the container's existence duration (108); and, if the existence duration is greater than or equal to the container threshold duration, indexing to the target hot container through the preset ID and releasing it (109). This solves the idle waste of resources during trough periods.

Description

Method, apparatus, device, and storage medium for elastically executing a hot container
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on September 19, 2019, with application number 201910886055.X and invention title "Method, apparatus, device, and storage medium for elastically executing a hot container", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of operations and maintenance, and in particular to a method, apparatus, device, and storage medium for elastically executing a hot container.
背景技术
规则引擎由推理引擎发展而来,嵌入到应用程序中,将业务决策从应用程序代码中分离出来,并使用预定义的语义模块编写业务决策。接受数据输入,解释业务规则,并根据业务规则做出业务决。规则引擎经常使用在信用卡额度审批、保险承保、风险控制等场景。已有的规则引擎平台,支持用户在线编写业务规则,但是发明人意识到用户通过热队列(warmpool)搭建热容器时还需要准备规则运行环境,搭建相对复杂,同时规则的使用频率存在波峰波谷,从而导致资源分配的不合理。
Summary
This application provides a method of elastically executing hot containers through configuration, which can solve the problem of unreasonable resource allocation in the prior art.
To solve the above problem, a first aspect of this application provides a method of elastically executing hot containers, including: receiving a user's access request; creating a pre-warm container and a pre-warm queue according to the user's access request, where the pre-warm container is used to deploy a runtime environment and the pre-warm queue is a queue for storing the pre-warm container; creating an initial hot container from the pre-warm container, obtaining a target hot container by attaching a preset identifier (ID) to the initial hot container, and placing the target hot container into a hot queue, where the hot queue is a queue for storing the target hot container and the initial hot container refers to a runtime environment together with running rules; executing an executable script stored in the target hot container, computing the script's result, and, based on the result, retrieving the target hot container from the hot queue and enabling it, where the executable script is used to calculate the resources available in the target hot container, the resources including at least one of graphics processing unit (GPU) resources, central processing unit (CPU) resources, memory resources, cache resources, and storage resources; after the result is returned, setting the status of the target hot container to preparing, and setting a timeout duration and a container threshold duration according to the available resources or user-input parameters, where the timeout duration is less than the container threshold duration, the timeout duration is the maximum response time, and the container threshold duration is the maximum time the container is allowed to exist; if the target hot container does not receive a first request within the timeout duration, setting its status to paused and storing it in the hot queue, where the first request is a user's request to access the target hot container; if the target hot container receives the first request within the timeout duration, setting its status to running and processing the first request; computing the target hot container's runtime to obtain the container existence duration; and, if the container existence duration is greater than or equal to the container threshold duration, locating the target hot container via the preset ID and releasing it.
A second aspect of this application provides an apparatus for elastically executing hot containers, including: an input/output module configured to receive a user's access request; and a processing module configured to create a pre-warm container and a pre-warm queue according to the user's access request, where the pre-warm container is used to deploy a runtime environment and the pre-warm queue is a queue for storing the pre-warm container; create an initial hot container from the pre-warm container, obtain a target hot container by attaching a preset identifier (ID) to the initial hot container, and place the target hot container into a hot queue, where the hot queue is a queue for storing the target hot container and the initial hot container refers to a runtime environment together with running rules; execute an executable script stored in the target hot container, compute the script's result, and, based on the result, retrieve the target hot container from the hot queue and enable it, where the executable script is used to calculate the resources available in the target hot container, the resources including at least one of GPU resources, CPU resources, memory resources, cache resources, and storage resources; after the result is returned, set the status of the target hot container to preparing, and set a timeout duration and a container threshold duration according to the available resources or user-input parameters, where the timeout duration is less than the container threshold duration, the timeout duration is the maximum response time, and the container threshold duration is the maximum time the container is allowed to exist; if the target hot container does not receive a first request within the timeout duration, set its status to paused and store it in the hot queue, where the first request is a user's request to access the target hot container; if the target hot container receives the first request within the timeout duration, set its status to running and process the first request; compute the target hot container's runtime to obtain the container existence duration; and, if the container existence duration is greater than or equal to the container threshold duration, locate the target hot container via the preset ID and release it.
A third aspect of this application provides a device for elastically executing hot containers, including a memory and at least one processor, the memory storing instructions and the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory so that the device performs the method of the first aspect.
A fourth aspect of this application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method of the first aspect.
In the technical solution provided by this application, a user's access request is received; a pre-warm container and a pre-warm queue are created according to it, the pre-warm container being used to deploy a runtime environment and the pre-warm queue being a queue for storing the pre-warm container; an initial hot container is created from the pre-warm container and, with a preset identifier (ID) attached, becomes the target hot container placed into the hot queue, the hot queue being a queue for storing the target hot container and the initial hot container referring to a runtime environment together with running rules; the executable script stored in the target hot container is executed and its result computed, based on which the target hot container is retrieved from the hot queue and enabled, the script calculating the resources available in the container, which include at least one of GPU, CPU, memory, cache, and storage resources; after the result returns, the container's status is set to preparing and the timeout duration and container threshold duration are set from the available resources or user-input parameters, the timeout duration being less than the container threshold duration, the timeout duration being the maximum response time and the container threshold duration the maximum time the container may exist; if no first request arrives within the timeout duration, the container is paused and returned to the hot queue, the first request being a user's request to access the target hot container; if the first request arrives within the timeout duration, the container is set to running and the request is processed; the container's runtime is computed to obtain the container existence duration; and when that duration is greater than or equal to the container threshold duration, the container is located via the preset ID and released. By setting a container threshold duration and a pre-warm queue, this application frees users from resource management and efficiently saves labor cost. Through a highly reliable elastic rule-execution method, system resources are scheduled flexibly, multiple tenants reuse hot containers of the same configuration, the container threshold duration resolves resource idling during off-peak periods, and peak traffic is handled with ease.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of the method of elastically executing hot containers in an embodiment of this application;
FIG. 2 is a schematic structural diagram of the apparatus for elastically executing hot containers in an embodiment of this application;
FIG. 3 is a schematic structural diagram of the device for elastically executing hot containers in an embodiment of this application.
Detailed Description
Embodiments of this application provide a method, apparatus, device, and storage medium for elastically executing hot containers. By setting a validity duration and a pre-warm queue, users are freed from resource management, efficiently saving labor cost. A highly reliable elastic rule-execution method schedules system resources flexibly and lets multiple tenants reuse resources, resolving resource idling during off-peak periods while supporting elastic scale-out to handle peak traffic with ease.
To help those skilled in the art better understand the solution of this application, the embodiments are described below with reference to the accompanying drawings.
The terms "first", "second", "third", "fourth", etc. (if any) in the specification, claims, and drawings are used to distinguish similar objects and need not describe a particular order or sequence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments described here can be practiced in orders other than those illustrated or described. Moreover, the terms "comprising" and "having" and any variants thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device comprising a series of steps or units is not necessarily limited to the steps or units expressly listed, and may include other steps or units not expressly listed or inherent to such process, method, product, or device.
For ease of understanding, referring to FIG. 1, the following illustrates the method of elastically executing hot containers provided by this application, which includes:
101. Receive a user's access request.
The user's access request includes at least a uniform resource locator, the client's Internet Protocol address, data stored on the user's local terminal, a user identifier, and the referer field of the hypertext transfer protocol header.
102. Create a pre-warm container and a pre-warm queue according to the user's access request.
The pre-warm container is used to deploy the runtime environment; the pre-warm queue is a queue for storing pre-warm containers.
By using pre-warm containers, the server reduces container startup time.
103. Create an initial hot container from the pre-warm container, obtain a target hot container by attaching a preset identifier (ID) to the initial hot container, and place the target hot container into the hot queue.
The hot queue is a queue for storing target hot containers; the initial hot container refers to a runtime environment together with running rules.
The runtime environment is the environment support the hot container needs in order to run, for example a Windows runtime environment or a Linux runtime environment.
104. Execute the executable script stored in the target hot container, compute its result, and, based on the result, retrieve the target hot container from the hot queue and enable it.
The executable script is used to calculate the resources available in the target hot container; the resources include at least one of graphics processing unit (GPU) resources, central processing unit (CPU) resources, memory resources, cache resources, and storage resources.
The server accesses the target hot container via the container IP, initializes the interface, and places the executable script into the container for execution. CPU resources interpret computer instructions and process data in computer software. Memory resources hold data while the container is running and are automatically cleared of all data once it stops. Storage resources persist data. GPU resources drive the conversion of information for display.
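As an illustration of such a script, the minimal sketch below reports a few resource figures a scheduler could use. The function `probe_resources` and its field names are assumptions for illustration, not part of the original disclosure; a production script would more likely read the container's cgroup limits than host-wide values.

```python
import os
import shutil

def probe_resources(path="/"):
    """Report resources visible to this process; a stand-in for the
    executable script placed into the target hot container."""
    total, used, free = shutil.disk_usage(path)  # storage resource on the given mount
    return {
        "cpu_count": os.cpu_count() or 1,        # central processing unit resource
        "storage_free_bytes": free,              # free storage, in bytes
    }

result = probe_resources()
```

The returned mapping corresponds to the "result of the executable script" on which the container is then retrieved and enabled.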
105. After the result is returned, set the status of the target hot container to preparing, and set the timeout duration and the container threshold duration according to the available resources or parameters entered by the user, the timeout duration being less than the container threshold duration.
The timeout duration is the maximum response time; the container threshold duration is the maximum time the container is allowed to exist.
For example, if the user enters a host spec of 8C, a maximum response time of no more than 1 minute, and a required lifetime of 2 days, a corresponding container is generated from this input.
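The two durations and their invariant (the timeout must be strictly less than the container threshold duration) can be sketched as follows; the function name and the seconds-based units are assumptions for illustration.

```python
def set_durations(timeout_s: int, threshold_s: int) -> dict:
    """Validate and record the timeout duration and container threshold duration.
    The text requires the timeout to be less than the threshold."""
    if timeout_s >= threshold_s:
        raise ValueError("timeout duration must be less than the container threshold duration")
    return {"timeout_s": timeout_s, "threshold_s": threshold_s}

# the example above: a 1-minute maximum response time inside a 2-day container lifetime
cfg = set_durations(60, 2 * 24 * 3600)
```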
106. If the target hot container does not receive a first request within the timeout duration, set its status to paused and store it in the hot queue.
The first request is a user's request to access the target hot container.
107. If the target hot container receives the first request within the timeout duration, set its status to running and process the first request.
By setting the container's status to running, the server prevents other requests from accessing the container and interrupting it while a request is being executed.
108. Compute the target hot container's runtime to obtain the container existence duration.
The server obtains the target hot container's historical runtime, measures the current run with a timer, and adds the two figures to obtain the container existence duration.
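The accumulation of historical and current runtime into the container existence duration can be sketched as follows; the class and attribute names are hypothetical.

```python
import time

class RuntimeTracker:
    """Accumulate a hot container's runtime across runs to obtain
    the container existence duration."""
    def __init__(self, history_s: float = 0.0):
        self.history_s = history_s   # runtime accumulated in earlier runs
        self._started = None         # monotonic timer for the current run, if any

    def start(self):
        self._started = time.monotonic()

    def stop(self):
        self.history_s += time.monotonic() - self._started
        self._started = None

    def existence_duration(self) -> float:
        # historical runtime plus the (possibly still running) current run
        current = 0.0 if self._started is None else time.monotonic() - self._started
        return self.history_s + current
```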
109. If the container existence duration is greater than or equal to the container threshold duration, locate the target hot container via the preset ID and release it.
Releasing the target hot container means destroying the container.
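Locating a container by its preset ID and releasing it can be sketched with a plain mapping; `HotQueueIndex` is a hypothetical name, and a real release would also tear down the container runtime rather than merely drop the reference.

```python
class HotQueueIndex:
    """Map preset IDs to hot containers so an expired container can be
    located by ID and released (destroyed)."""
    def __init__(self):
        self._by_id = {}

    def put(self, preset_id, container):
        self._by_id[preset_id] = container

    def release(self, preset_id):
        # removing the entry stands in for destroying the container;
        # returns None when no container has that ID
        return self._by_id.pop(preset_id, None)
```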
By setting a container threshold duration and a pre-warm queue, this application frees users from resource management and efficiently saves labor cost. Through a highly reliable elastic rule-execution method, system resources are scheduled flexibly, multiple tenants reuse hot containers of the same configuration, the container threshold duration resolves resource idling during off-peak periods, and peak traffic is handled with ease.
In some implementations, executing the executable script stored in the target hot container, computing its result, and, based on the result, retrieving and enabling the target hot container from the hot queue includes:
applying, via the executable script, a Base64 operation to the CPU and memory values to obtain the key of the pre-warm queue, and querying the pre-warm queue by the key for a pre-warm container;
if a pre-warm container exists, obtaining the target hot container's ID through the pre-warm container, and retrieving and starting the target hot container by that ID;
creating a hot container with the same attributes as the target hot container and storing it in the hot queue;
or, computing the target hot container's ID from the namespace, the executable script, and the version number, querying the hot queue by that ID for an existing target hot container, and retrieving and enabling it.
In the above implementation, take a server with a 4C host spec and 8 GB of memory as an example: the values 4C and 8G are converted to binary and, via the Base64 encoding table, yield NGM4Zw==, which serves as the key in the pre-warm queue. That key is used to query whether a corresponding pre-warm container exists; if it does, the target hot container's ID is fetched through it and the target hot container is started, and if no corresponding hot container exists, a target hot container with the corresponding host spec and memory is created from the key.
Pre-warm containers speed up hot-container creation, and encoding the container information with Base64 obscures it, adding to container security.
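The key derivation is reproducible: Base64 over the byte string "4c8g" indeed yields NGM4Zw==, matching the example above. The sketch below assumes a simple dictionary as the pre-warm pool; that layout and the names are illustrative only.

```python
import base64

def warm_pool_key(cpu: str, mem: str) -> str:
    """Derive the pre-warm queue key by Base64-encoding the CPU and memory spec."""
    return base64.b64encode(f"{cpu}{mem}".encode()).decode()

key = warm_pool_key("4c", "8g")               # "NGM4Zw==", as in the text
warm_pool = {key: "target-hot-container-id"}  # hypothetical pool mapping
container_id = warm_pool.get(key)             # hit: start this container by its ID
```

Note that Base64 is a reversible encoding, not encryption, so the obscuring effect mentioned above is modest.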
In some implementations, computing the target hot container's ID from the namespace, the executable script, and the version number, querying the hot queue by that ID for an existing target hot container, and retrieving and enabling it includes:
obtaining the result of the executable script;
determining the available resources on the preset device from the resource quantities and resource types in the result;
invoking the namespace, executable script, and version number corresponding to the script's result, computing the target hot container's ID, and querying the hot queue for an existing target hot container;
retrieving and enabling the target hot container based on the available resources.
In the above implementation, taking a user request to create a container with 4 GB of memory as an example, the server checks, according to the resource namespace, whether a hot container with Memory:4G exists; if it does, the server enables that hot container directly, and if not, the server invokes the corresponding creation.
In some implementations, creating a hot container with the same attributes as the target hot container and storing it in the hot queue includes:
placing the target hot container into the hot queue, the hot queue comprising a parallel-processing queue and a serial-processing queue;
obtaining the head-node linked list of the hot queue, where the head-node linked list links together the head nodes of the message lists corresponding to each key in the hot queue, and comprises the parallel head-node list of the parallel-processing queue and the serial head-node list of the serial-processing queue;
processing the serial head-node list or the parallel head-node list per key.
In the above implementation, the server chooses serial or parallel processing according to the user request, which can reduce enterprise spending and user waiting time.
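One way to picture per-key message lists with serial versus parallel head-node handling is the sketch below; the class name and the deque-based layout are assumptions, not the original data structure.

```python
from collections import defaultdict, deque

class KeyedHotQueue:
    """Per-key message lists: serial draining processes one key's list in order,
    while parallel handling takes the head node of every key at once."""
    def __init__(self):
        self.lists = defaultdict(deque)

    def push(self, key, message):
        self.lists[key].append(message)

    def drain_serial(self, key):
        """Process one key's message list front to back (serial queue)."""
        out = []
        q = self.lists[key]
        while q:
            out.append(q.popleft())
        return out

    def pop_heads_parallel(self):
        """Take one head node per key, mirroring the head-node linked list."""
        return {k: q.popleft() for k, q in self.lists.items() if q}
```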
In some implementations, retrieving and enabling the target hot container from the hot queue based on the result includes:
when the number of user access requests received within a preset period is greater than a first threshold, enabling the target hot container with a thread pool whose size equals the first threshold;
when the number of user access requests received within the preset period is less than or equal to the first threshold, enabling the target hot container with a thread pool sized to the number of user access requests received.
In the above implementation, when multiple users call hot containers at the same time, the server uses multithreading to speed up container creation.
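The sizing rule above (cap workers at the first threshold under bursts, otherwise match the request count) can be sketched as follows; the helper name and the dummy "enable" task are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def pool_size(n_requests: int, first_threshold: int) -> int:
    """Workers = first threshold when requests exceed it, else the request count."""
    return first_threshold if n_requests > first_threshold else n_requests

# enable containers for 4 pending requests with a matching pool size
with ThreadPoolExecutor(max_workers=pool_size(4, 32)) as pool:
    results = list(pool.map(lambda req: f"enabled-{req}", range(4)))
```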
In some implementations, after receiving the user's access request and before creating the pre-warm container and pre-warm queue according to it, the method further includes:
analyzing the user's access information and determining whether the user is an abnormal user;
if bad access records appear in the user's access information, determining that the user is an abnormal user and barring the user from access.
In the above implementation, if the server finds that the user's access is not normal, it denies access, ensuring the normal operation and startup of the target hot container and maintaining normal server operation.
In some implementations, before analyzing the user's access information and determining whether the user is abnormal, the method further includes:
responding to the user's access request and obtaining the data carried in it;
writing the carried data into back-end storage and returning a request-completed instruction to the sender of the access request.
In this way, the server can save the user-entered data into the database.
FIG. 2 is a schematic structural diagram of an apparatus 20 for elastically executing hot containers, applicable to elastic hot-container execution. The apparatus in this embodiment of this application can implement the steps of the method of elastically executing hot containers performed in the embodiment corresponding to FIG. 1. The functions implemented by the apparatus 20 may be realized by hardware, or by hardware executing corresponding software; the hardware or software comprises one or more modules corresponding to the above functions, and a module may be software and/or hardware. The apparatus may include an input/output module 201 and a processing module 202, whose functional implementation may refer to the operations performed in the embodiment corresponding to FIG. 1 and is not repeated here. The input/output module 201 may be used to control the input, output, and acquisition operations of the input/output module 201.
In some implementations, the input/output module 201 may be used to receive a user's access request.
The processing module 202 may be configured to: create a pre-warm container and a pre-warm queue according to the user's access request, the pre-warm container being used to deploy a runtime environment and the pre-warm queue being a queue for storing the pre-warm container; create an initial hot container from the pre-warm container, obtain a target hot container by attaching a preset identifier (ID) to it, and place the target hot container into a hot queue, the hot queue being a queue for storing the target hot container and the initial hot container referring to a runtime environment together with running rules; execute an executable script stored in the target hot container, compute its result, and, based on the result, retrieve the target hot container from the hot queue and enable it, the executable script being used to calculate the resources available in the target hot container, which include at least one of GPU, CPU, memory, cache, and storage resources; after the result is returned, set the container's status to preparing and set the timeout duration and container threshold duration from the available resources or user-entered parameters, the timeout duration being less than the container threshold duration, the timeout duration being the maximum response time and the container threshold duration the maximum time the container may exist; if the target hot container receives no first request within the timeout duration, set its status to paused and store it in the hot queue, the first request being a user's request to access the target hot container; if it receives the first request within the timeout duration, set its status to running and process the request; compute the container's runtime to obtain the container existence duration; and, if that duration is greater than or equal to the container threshold duration, locate the target hot container via the preset ID and release it.
In some implementations, the processing module 202 is further configured to:
apply, via the executable script, a Base64 operation to the CPU and memory values to obtain the key of the pre-warm queue, and query the pre-warm queue by the key for the pre-warm container;
if the pre-warm container exists, obtain the target hot container's ID through it, and retrieve and start the target hot container by that ID;
create a hot container with the same attributes as the target hot container and store it in the hot queue;
or, compute the target hot container's ID from the namespace, the executable script, and the version number, query the hot queue by that ID for an existing target hot container, and retrieve and enable it.
In some implementations, the processing module 202 is further configured to:
obtain the result of the executable script;
determine the available resources on the preset device from the resource quantities and resource types in the result;
invoke the namespace, executable script, and version number corresponding to the script's result, compute the target hot container's ID, and query the hot queue for an existing target hot container;
retrieve and enable the target hot container based on the available resources.
In some implementations, the processing module 202 is further configured to:
place the target hot container into the hot queue, the hot queue comprising a parallel-processing queue and a serial-processing queue;
obtain the head-node linked list of the hot queue, which links together the head nodes of the message lists corresponding to each key in the hot queue and comprises the parallel head-node list of the parallel-processing queue and the serial head-node list of the serial-processing queue;
process the serial head-node list or the parallel head-node list per key.
In some implementations, the processing module 202 is further configured to:
when the number of user access requests received within a preset period is greater than a first threshold, enable the target hot container with a thread pool whose size equals the first threshold;
when the number of user access requests received within the preset period is less than or equal to the first threshold, enable the target hot container with a thread pool sized to the number of access requests received.
In some implementations, the processing module 202 is further configured to:
analyze the user's access information and determine whether the user is an abnormal user;
if bad access records appear in the user's access information, determine that the user is abnormal and bar the user from access.
In some implementations, the processing module 202 is further configured to:
respond to the user's access request and obtain the data carried in it;
write the carried data into back-end storage and return a request-completed instruction to the sender of the access request.
The creation apparatus in the embodiments of this application has been described above from the perspective of modular functional entities; the following describes a device for elastically executing hot containers from a hardware perspective. As shown in FIG. 3, it comprises a processor, a memory, an input/output unit (which may also be a transceiver, not marked in FIG. 3), and a computer program stored in the memory and runnable on the processor. For example, the computer program may be the program corresponding to the method of elastically executing hot containers in the embodiment corresponding to FIG. 1. When the computer device implements the functions of the apparatus 20 shown in FIG. 2, the processor, by executing the computer program, implements the steps of the method of elastically executing hot containers performed by the apparatus 20 in the embodiment corresponding to FIG. 2, or implements the functions of the modules in the apparatus 20 of that embodiment.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the computer apparatus, connecting the parts of the whole computer apparatus through various interfaces and lines.
The memory may be used to store the computer program and/or modules; the processor implements the various functions of the computer apparatus by running or executing the computer program and/or modules stored in the memory and invoking data stored in the memory. The memory may mainly comprise a program storage area and a data storage area: the program storage area may store the operating system and the applications required by at least one function (such as sound playback or image playback), while the data storage area may store data created according to the use of the phone (such as audio and video data). In addition, the memory may comprise high-speed random-access memory, and may also comprise non-volatile memory, such as a hard disk, memory, plug-in hard disk, smart media card (SMC), secure digital (SD) card, flash card, at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The input/output unit may be replaced by a receiver and a transmitter, which may be the same or different physical entities; when they are the same physical entity, they may collectively be called an input/output unit. The input/output may be a transceiver. The memory may be integrated in the processor or set separately from it.
This application also provides a computer-readable storage medium, which may be non-volatile or volatile. The computer-readable storage medium stores computer instructions that, when run on a computer, cause the computer to perform the following steps:
receiving a user's access request;
creating a pre-warm container and a pre-warm queue according to the user's access request, the pre-warm container being used to deploy a runtime environment and the pre-warm queue being a queue for storing the pre-warm container;
creating an initial hot container from the pre-warm container, obtaining a target hot container by attaching a preset identifier (ID) to it, and placing the target hot container into a hot queue, the hot queue being a queue for storing the target hot container and the initial hot container referring to a runtime environment together with running rules;
executing an executable script stored in the target hot container, computing its result, and, based on the result, retrieving the target hot container from the hot queue and enabling it, the executable script being used to calculate the resources available in the target hot container, which include at least one of GPU, CPU, memory, cache, and storage resources;
after the result is returned, setting the container's status to preparing and setting the timeout duration and container threshold duration from the available resources or user-entered parameters, the timeout duration being less than the container threshold duration, the timeout duration being the maximum response time and the container threshold duration the maximum time the container may exist;
if the target hot container receives no first request within the timeout duration, setting its status to paused and storing it in the hot queue, the first request being a user's request to access the target hot container;
if the target hot container receives the first request within the timeout duration, setting its status to running and processing the first request;
computing the container's runtime to obtain the container existence duration;
if the container existence duration is greater than or equal to the container threshold duration, locating the target hot container via the preset ID and releasing it.
From the above description of the implementations, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software plus the necessary general-purpose hardware platform, or by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part contributing over the prior art, may be embodied as a software product stored in a storage medium (such as ROM/RAM) and comprising several instructions for causing a terminal (which may be a mobile phone, computer, server, network device, etc.) to perform the methods of the embodiments of this application.
The embodiments of this application have been described above with reference to the drawings, but this application is not limited to the specific implementations described, which are illustrative rather than restrictive. Under the inspiration of this application, those of ordinary skill in the art may devise many further forms without departing from the purpose of this application and the scope protected by the claims; equivalent structures or equivalent process transformations made using the contents of the specification and drawings, whether applied directly or indirectly in other related technical fields, all fall within the protection of this application.

Claims (20)

  1. A method of elastically executing hot containers, comprising:
    receiving a user's access request;
    creating a pre-warm container and a pre-warm queue according to the user's access request, wherein the pre-warm container is used to deploy a runtime environment and the pre-warm queue is a queue for storing the pre-warm container;
    creating an initial hot container from the pre-warm container, obtaining a target hot container by attaching a preset identifier (ID) to the initial hot container, and placing the target hot container into a hot queue, wherein the hot queue is a queue for storing the target hot container and the initial hot container refers to a runtime environment together with running rules;
    executing an executable script stored in the target hot container, computing a result of the executable script, and, based on the result, retrieving the target hot container from the hot queue and enabling the target hot container, wherein the executable script is used to calculate resources available in the target hot container, the resources comprising at least one of graphics processing unit resources, central processing unit resources, memory resources, cache resources, and storage resources;
    after the result is returned, setting a status of the target hot container to preparing, and setting a timeout duration and a container threshold duration according to the available resources or user-input parameters, wherein the timeout duration is less than the container threshold duration, the timeout duration is the maximum response time, and the container threshold duration is the maximum time the container is allowed to exist;
    if the target hot container does not receive a first request within the timeout duration, setting the status of the target hot container to paused and storing the target hot container in the hot queue, wherein the first request is a user's request to access the target hot container;
    if the target hot container receives the first request within the timeout duration, setting the status of the target hot container to running and processing the first request;
    computing the runtime of the target hot container to obtain a container existence duration; and
    if the container existence duration is greater than or equal to the container threshold duration, locating the target hot container via the preset ID and releasing the target hot container.
  2. The method of elastically executing hot containers according to claim 1, wherein executing the executable script stored in the target hot container, computing the result of the executable script, and, based on the result, retrieving the target hot container from the hot queue and enabling the target hot container comprises:
    applying, via the executable script, a Base64 operation to the central-processing-unit and memory values to obtain the key of the pre-warm queue, and querying the pre-warm queue by the key for the pre-warm container;
    if the pre-warm container exists, obtaining the ID of the target hot container through the pre-warm container, and retrieving and starting the target hot container based on the ID of the target hot container;
    creating a hot container having the same attributes as the target hot container and storing it in the hot queue;
    or,
    computing the ID of the target hot container from a namespace, the executable script, and a version number, querying the hot queue by the ID of the target hot container for an existing target hot container, and retrieving and enabling the existing target hot container.
  3. The method of elastically executing hot containers according to claim 2, wherein computing the ID of the target hot container from the namespace, the executable script, and the version number, querying the hot queue by the ID of the target hot container for an existing target hot container, and retrieving and enabling the existing target hot container comprises:
    obtaining the result of the executable script;
    determining available resources on a preset device from the resource quantities and resource types in the result;
    invoking the namespace, the executable script, and the version number corresponding to the result of the executable script, computing the ID of the target hot container, and querying the hot queue for the existing target hot container; and
    retrieving and enabling the target hot container based on the available resources.
  4. The method of elastically executing hot containers according to claim 2, wherein creating a hot container having the same attributes as the target hot container and storing it in the hot queue comprises:
    placing the target hot container into the hot queue, the hot queue comprising a parallel-processing queue and a serial-processing queue;
    obtaining the head-node linked list of the hot queue, wherein the head-node linked list links together the head nodes of the message lists corresponding to each key in the hot queue, and comprises the parallel head-node list of the parallel-processing queue and the serial head-node list of the serial-processing queue; and
    processing the serial head-node list or the parallel head-node list per key.
  5. The method of elastically executing hot containers according to any one of claims 1-4, wherein retrieving the target hot container from the hot queue and enabling the target hot container based on the result comprises:
    when the number of the user's access requests received within a preset period is greater than a first threshold, enabling the target hot container with a thread pool whose size equals the first threshold; and
    when the number of the user's access requests received within the preset period is less than or equal to the first threshold, enabling the target hot container with a thread pool sized to the number of the user's access requests received.
  6. The method according to claim 1, wherein after receiving the user's access request and before creating the pre-warm container and the pre-warm queue according to the user's access request, the method of elastically executing hot containers further comprises:
    analyzing the user's access information and determining whether the user is an abnormal user; and
    if bad access records appear in the user's access information, determining that the user is an abnormal user and barring the user from access.
  7. The method according to claim 6, wherein before analyzing the user's access information and determining whether the user is an abnormal user, the method of elastically executing hot containers further comprises:
    responding to the user's access request and obtaining the data carried in the user's access request; and
    writing the carried data into back-end storage and returning a request-completed instruction to the sender of the user's access request.
  8. An apparatus for elastically executing hot containers, comprising:
    an input/output module configured to receive a user's access request; and
    a processing module configured to: create a pre-warm container and a pre-warm queue according to the user's access request, wherein the pre-warm container is used to deploy a runtime environment and the pre-warm queue is a queue for storing the pre-warm container; create an initial hot container from the pre-warm container, obtain a target hot container by attaching a preset identifier (ID) to the initial hot container, and place the target hot container into a hot queue, wherein the hot queue is a queue for storing the target hot container and the initial hot container refers to a runtime environment together with running rules; execute an executable script stored in the target hot container, compute a result of the executable script, and, based on the result, retrieve the target hot container from the hot queue and enable the target hot container, wherein the executable script is used to calculate resources available in the target hot container, the resources comprising at least one of graphics processing unit resources, central processing unit resources, memory resources, cache resources, and storage resources; after the result is returned, set the status of the target hot container to preparing, and set a timeout duration and a container threshold duration according to the available resources or user-input parameters, wherein the timeout duration is less than the container threshold duration, the timeout duration is the maximum response time, and the container threshold duration is the maximum time the container is allowed to exist; if the target hot container does not receive a first request within the timeout duration, set its status to paused and store it in the hot queue, wherein the first request is a user's request to access the target hot container; if the target hot container receives the first request within the timeout duration, set its status to running and process the first request; compute the runtime of the target hot container to obtain a container existence duration; and, if the container existence duration is greater than or equal to the container threshold duration, locate the target hot container via the preset ID and release the target hot container.
  9. The apparatus for elastically executing hot containers according to claim 8, wherein the processing module is specifically configured to:
    apply, via the executable script, a Base64 operation to the central-processing-unit and memory values to obtain the key of the pre-warm queue, and query the pre-warm queue by the key for the pre-warm container;
    if the pre-warm container exists, obtain the ID of the target hot container through the pre-warm container, and retrieve and start the target hot container based on that ID;
    create a hot container having the same attributes as the target hot container and store it in the hot queue;
    or,
    compute the ID of the target hot container from a namespace, the executable script, and a version number, query the hot queue by that ID for an existing target hot container, and retrieve and enable the existing target hot container.
  10. The apparatus for elastically executing hot containers according to claim 9, wherein the processing module is further configured to:
    obtain the result of the executable script;
    determine available resources on a preset device from the resource quantities and resource types in the result;
    invoke the namespace, the executable script, and the version number corresponding to the result of the executable script, compute the ID of the target hot container, and query the hot queue for the existing target hot container; and
    retrieve and enable the target hot container based on the available resources.
  11. The apparatus for elastically executing hot containers according to claim 9, wherein the processing module is further configured to:
    place the target hot container into the hot queue, the hot queue comprising a parallel-processing queue and a serial-processing queue;
    obtain the head-node linked list of the hot queue, wherein the head-node linked list links together the head nodes of the message lists corresponding to each key in the hot queue, and comprises the parallel head-node list of the parallel-processing queue and the serial head-node list of the serial-processing queue; and
    process the serial head-node list or the parallel head-node list per key.
  12. The apparatus for elastically executing hot containers according to any one of claims 8-11, wherein the processing module is further configured to:
    when the number of the user's access requests received within a preset period is greater than a first threshold, enable the target hot container with a thread pool whose size equals the first threshold; and
    when the number of the user's access requests received within the preset period is less than or equal to the first threshold, enable the target hot container with a thread pool sized to the number of the user's access requests received.
  13. The apparatus for elastically executing hot containers according to claim 8, wherein the processing module is further configured to:
    analyze the user's access information and determine whether the user is an abnormal user; and
    if bad access records appear in the user's access information, determine that the user is an abnormal user and bar the user from access.
  14. The apparatus for elastically executing hot containers according to claim 13, wherein the processing module is further configured to:
    respond to the user's access request and obtain the data carried in the user's access request; and
    write the carried data into back-end storage and return a request-completed instruction to the sender of the user's access request.
  15. A device for elastically executing hot containers, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, the processor implementing the following steps when executing the computer program:
    receiving a user's access request;
    creating a pre-warm container and a pre-warm queue according to the user's access request, wherein the pre-warm container is used to deploy a runtime environment and the pre-warm queue is a queue for storing the pre-warm container;
    creating an initial hot container from the pre-warm container, obtaining a target hot container by attaching a preset identifier (ID) to the initial hot container, and placing the target hot container into a hot queue, wherein the hot queue is a queue for storing the target hot container and the initial hot container refers to a runtime environment together with running rules;
    executing an executable script stored in the target hot container, computing a result of the executable script, and, based on the result, retrieving the target hot container from the hot queue and enabling the target hot container, wherein the executable script is used to calculate resources available in the target hot container, the resources comprising at least one of graphics processing unit resources, central processing unit resources, memory resources, cache resources, and storage resources;
    after the result is returned, setting the status of the target hot container to preparing, and setting a timeout duration and a container threshold duration according to the available resources or user-input parameters, wherein the timeout duration is less than the container threshold duration, the timeout duration is the maximum response time, and the container threshold duration is the maximum time the container is allowed to exist;
    if the target hot container does not receive a first request within the timeout duration, setting its status to paused and storing it in the hot queue, wherein the first request is a user's request to access the target hot container;
    if the target hot container receives the first request within the timeout duration, setting its status to running and processing the first request;
    computing the runtime of the target hot container to obtain a container existence duration; and
    if the container existence duration is greater than or equal to the container threshold duration, locating the target hot container via the preset ID and releasing the target hot container.
  16. The device for elastically executing hot containers according to claim 15, wherein when the processor executes the computer program to implement executing the executable script stored in the target hot container, computing the result of the executable script, and, based on the result, retrieving the target hot container from the hot queue and enabling the target hot container, the following steps are included:
    applying, via the executable script, a Base64 operation to the central-processing-unit and memory values to obtain the key of the pre-warm queue, and querying the pre-warm queue by the key for the pre-warm container;
    if the pre-warm container exists, obtaining the ID of the target hot container through the pre-warm container, and retrieving and starting the target hot container based on that ID;
    creating a hot container having the same attributes as the target hot container and storing it in the hot queue;
    or,
    computing the ID of the target hot container from a namespace, the executable script, and a version number, querying the hot queue by that ID for an existing target hot container, and retrieving and enabling the existing target hot container.
  17. The device for elastically executing hot containers according to claim 16, wherein when the processor executes the computer program to implement computing the ID of the target hot container from the namespace, the executable script, and the version number, querying the hot queue by that ID for an existing target hot container, and retrieving and enabling the existing target hot container, the following steps are included:
    obtaining the result of the executable script;
    determining available resources on a preset device from the resource quantities and resource types in the result;
    invoking the namespace, the executable script, and the version number corresponding to the result of the executable script, computing the ID of the target hot container, and querying the hot queue for the existing target hot container; and
    retrieving and enabling the target hot container based on the available resources.
  18. The device for elastically executing hot containers according to claim 16, wherein when the processor executes the computer program to implement creating a hot container having the same attributes as the target hot container and storing it in the hot queue, the following steps are included:
    placing the target hot container into the hot queue, the hot queue comprising a parallel-processing queue and a serial-processing queue;
    obtaining the head-node linked list of the hot queue, wherein the head-node linked list links together the head nodes of the message lists corresponding to each key in the hot queue, and comprises the parallel head-node list of the parallel-processing queue and the serial head-node list of the serial-processing queue; and
    processing the serial head-node list or the parallel head-node list per key.
  19. The device for elastically executing hot containers according to any one of claims 15-18, wherein when the processor executes the computer program to implement retrieving the target hot container from the hot queue and enabling the target hot container based on the result, the following steps are included:
    when the number of the user's access requests received within a preset period is greater than a first threshold, enabling the target hot container with a thread pool whose size equals the first threshold; and
    when the number of the user's access requests received within the preset period is less than or equal to the first threshold, enabling the target hot container with a thread pool sized to the number of the user's access requests received.
  20. A computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform the following steps:
    receiving a user's access request;
    creating a pre-warm container and a pre-warm queue according to the user's access request, wherein the pre-warm container is used to deploy a runtime environment and the pre-warm queue is a queue for storing the pre-warm container;
    creating an initial hot container from the pre-warm container, obtaining a target hot container by attaching a preset identifier (ID) to the initial hot container, and placing the target hot container into a hot queue, wherein the hot queue is a queue for storing the target hot container and the initial hot container refers to a runtime environment together with running rules;
    executing an executable script stored in the target hot container, computing a result of the executable script, and, based on the result, retrieving the target hot container from the hot queue and enabling the target hot container, wherein the executable script is used to calculate resources available in the target hot container, the resources comprising at least one of graphics processing unit resources, central processing unit resources, memory resources, cache resources, and storage resources;
    after the result is returned, setting the status of the target hot container to preparing, and setting a timeout duration and a container threshold duration according to the available resources or user-input parameters, wherein the timeout duration is less than the container threshold duration, the timeout duration is the maximum response time, and the container threshold duration is the maximum time the container is allowed to exist;
    if the target hot container does not receive a first request within the timeout duration, setting its status to paused and storing it in the hot queue, wherein the first request is a user's request to access the target hot container;
    if the target hot container receives the first request within the timeout duration, setting its status to running and processing the first request;
    computing the runtime of the target hot container to obtain a container existence duration; and
    if the container existence duration is greater than or equal to the container threshold duration, locating the target hot container via the preset ID and releasing the target hot container.
PCT/CN2019/117876 2019-09-19 2019-11-13 Method, apparatus, device, and storage medium for elastically executing hot containers WO2021051576A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910886055.X 2019-09-19
CN201910886055.XA CN110764903B (zh) 2019-09-19 2019-09-19 Method, apparatus, device, and storage medium for elastically executing hot containers

Publications (1)

Publication Number Publication Date
WO2021051576A1 true WO2021051576A1 (zh) 2021-03-25

Family

ID=69329970

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117876 WO2021051576A1 (zh) 2019-09-19 2019-11-13 Method, apparatus, device, and storage medium for elastically executing hot containers

Country Status (2)

Country Link
CN (1) CN110764903B (zh)
WO (1) WO2021051576A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880475A (zh) * 2012-10-23 2013-01-16 Shanghai Primeton Information Technology Co., Ltd. Cloud-computing-based real-time event processing system and method in a computer software system
CN107111519A (zh) * 2014-11-11 2017-08-29 Amazon Technologies, Inc. System for managing and scheduling containers
US20180203742A1 (en) * 2015-06-19 2018-07-19 Vmware, Inc. Resource management for containers in a virtualized environment
CN108337314A (zh) * 2018-02-07 2018-07-27 Beijing Baidu Netcom Science and Technology Co., Ltd. Distributed system, and information processing method and apparatus for a master server
CN108475251A (zh) * 2016-01-22 2018-08-31 Equinix, Inc. Virtual network, hot swapping, hot scaling, and disaster recovery for containers

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107707593B (zh) * 2017-02-21 2018-08-17 Guizhou Baishan Cloud Technology Co., Ltd. Dynamic resource access acceleration method and apparatus for improving cache hit rate
CN109710402A (zh) * 2018-12-17 2019-05-03 Ping An Puhui Enterprise Management Co., Ltd. Method, apparatus, computer device, and storage medium for processing resource acquisition requests
CN109684092B (zh) * 2018-12-24 2023-03-10 New H3C Big Data Technologies Co., Ltd. Resource allocation method and apparatus
CN109753356A (zh) * 2018-12-25 2019-05-14 Beijing Youxin Technology Co., Ltd. Container resource scheduling method and apparatus, and computer-readable storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880475A (zh) * 2012-10-23 2013-01-16 Shanghai Primeton Information Technology Co., Ltd. Cloud-computing-based real-time event processing system and method in a computer software system
CN107111519A (zh) * 2014-11-11 2017-08-29 Amazon Technologies, Inc. System for managing and scheduling containers
US20180203742A1 (en) * 2015-06-19 2018-07-19 Vmware, Inc. Resource management for containers in a virtualized environment
CN108475251A (zh) * 2016-01-22 2018-08-31 Equinix, Inc. Virtual network, hot swapping, hot scaling, and disaster recovery for containers
CN108337314A (zh) * 2018-02-07 2018-07-27 Beijing Baidu Netcom Science and Technology Co., Ltd. Distributed system, and information processing method and apparatus for a master server

Also Published As

Publication number Publication date
CN110764903B (zh) 2023-06-16
CN110764903A (zh) 2020-02-07

Similar Documents

Publication Publication Date Title
US10320623B2 (en) Techniques for tracking resource usage statistics per transaction across multiple layers of protocols
JP6348937B2 (ja) Method and apparatus for updating object data in an object storage system
WO2020233059A1 (zh) Login processing method based on data processing, and related device
WO2019192103A1 (zh) Concurrent access control method and apparatus, terminal device, and medium
CN110489417A (zh) Data processing method and related device
WO2020029388A1 (zh) File transfer method and system, computer device, and storage medium
US10129264B2 (en) Method and apparatus for implementing document sharing between user groups
WO2019184164A1 (zh) Method, apparatus, terminal device, and readable storage medium for automatically deploying Kubernetes slave nodes
JP2017509936A (ja) Facilitating third-party execution of batch processing of requests that seek authorization from resource owners for repeated access to resources
JP2001147901A (ja) Method and system for external job scheduling within a distributed processing system having a local job control system
WO2018014868A1 (zh) User management method and apparatus for hybrid cloud
CN105516086B (zh) Service processing method and apparatus
CN110971700B (zh) Method and apparatus for implementing distributed locks
US9577950B2 (en) Method and system for reclaiming unused resources in a networked application environment
WO2022148254A1 (zh) User information analysis result feedback method and apparatus
CN109600385B (zh) Access control method and apparatus
WO2019134402A1 (zh) Device operation method, cluster system, electronic device, and readable storage medium
US10698863B2 (en) Method and apparatus for clearing data in cloud storage system
CN111367693B (zh) Method, system, device, and medium for scheduling plugin tasks based on a message queue
CN111382985A (zh) Integrated to-do message push system and working method
CN106034113A (zh) Data processing method and apparatus
WO2015154416A1 (zh) Internet access behavior management method and apparatus
WO2019062066A1 (zh) Terminal device online task execution method, server, and readable storage medium
WO2020024458A1 (zh) Service interface management method and apparatus, storage medium, and computer device
CN108520401B (zh) User list management method, apparatus, platform, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19945517

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19945517

Country of ref document: EP

Kind code of ref document: A1