CN115391030A - Control method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN115391030A
CN115391030A (application number CN202210945249.4A)
Authority
CN
China
Prior art keywords
parallel
tasks
task
preset
resources
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210945249.4A
Other languages
Chinese (zh)
Inventor
蒋龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202210945249.4A priority Critical patent/CN115391030A/en
Publication of CN115391030A publication Critical patent/CN115391030A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources to service a request
    • G06F 9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F 9/48 Program initiating; program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system


Abstract

The application provides a control method, a control apparatus, a computer device, and a non-volatile computer-readable storage medium. The method comprises: obtaining different index information of the currently parallel tasks; determining a target parallel number according to the index information and a preset scaling policy; and scheduling resources according to the target parallel number to adjust the number of currently parallel tasks. By monitoring different index information of the currently parallel tasks, the method determines, based on the index information and the preset scaling policy, whether capacity expansion (enlarging resource usage to increase the number of tasks that can run in parallel) or capacity reduction (shrinking resource usage to decrease that number) is needed, determines the target parallel number after expansion or reduction, and finally schedules resources in real time according to the target parallel number to adjust the number of currently parallel tasks. The scheduled resources therefore always match the resources needed by the tasks to be executed, which prevents resource waste and maximizes task execution efficiency.

Description

Control method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of resource scheduling technologies, and in particular, to a control method, a control apparatus, a computer device, and a non-volatile computer-readable storage medium.
Background
Currently, in cloud computing, fixed resources are allocated to a plurality of tasks so that the tasks can execute normally. However, the resources a task requires may change constantly: allocating too many resources wastes them, while allocating too few reduces task execution efficiency.
Disclosure of Invention
The embodiment of the application provides a control method, a control device, computer equipment and a non-volatile computer readable storage medium.
The control method of the embodiment of the application comprises: obtaining different index information of the currently parallel tasks; determining a target parallel number according to the index information and a preset scaling policy; and scheduling resources according to the target parallel number to adjust the number of currently parallel tasks.
The control device of the embodiment of the application comprises an obtaining module, a first determining module, and an adjusting module. The obtaining module is used for obtaining different index information of the currently parallel tasks; the first determining module is used for determining the target parallel number according to the index information and a preset scaling policy; and the adjusting module is used for scheduling resources according to the target parallel number to adjust the number of currently parallel tasks.
The computer device of the embodiment of the application comprises a processor used for executing the control method. The control method comprises: obtaining different index information of the currently parallel tasks; determining the target parallel number according to the index information and a preset scaling policy; and scheduling resources according to the target parallel number to adjust the number of currently parallel tasks.
The present embodiments also provide a non-transitory computer-readable storage medium having a computer program stored thereon. When executed by a processor, the computer program implements the control method. The control method comprises: obtaining different index information of the currently parallel tasks; determining the target parallel number according to the index information and a preset scaling policy; and scheduling resources according to the target parallel number to adjust the number of currently parallel tasks.
According to the control method, the control device, the computer device, and the non-volatile computer-readable storage medium, different index information of the currently parallel tasks is monitored, and based on the index information and a preset scaling policy it is determined whether capacity expansion (enlarging resource usage to increase the number of tasks that can run in parallel) or capacity reduction (shrinking resource usage to decrease that number) is needed; the target parallel number after expansion or reduction is determined; and resources are finally scheduled in real time according to the target parallel number to adjust the number of currently parallel tasks. The scheduled resources therefore always match the resources needed by the tasks to be executed, preventing resource waste and maximizing task execution efficiency.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a control method according to certain embodiments of the present application;
FIG. 2 is a block diagram illustrating a control method according to some embodiments of the present disclosure;
FIG. 3 is a schematic flow chart diagram of a control method according to certain embodiments of the present application;
FIG. 4 is a schematic flow chart diagram of a control method according to certain embodiments of the present application;
FIG. 5 is a block diagram of a control method according to some embodiments of the present application;
FIG. 6 is a block schematic diagram of a control device according to certain embodiments of the present application;
FIG. 7 is a schematic plan view of a computer device according to some embodiments of the present application; and
FIG. 8 is a schematic diagram of the interaction of a non-volatile computer readable storage medium and a processor of certain embodiments of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the same or similar elements or elements having the same or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Terms appearing in the present application are first explained below:
Flink: an open-source computing platform oriented to distributed data stream processing and batch data processing, supporting both stream-processing and batch-processing applications. Currently, Flink manages computing resources by applying for and releasing them automatically and internally. However, data traffic often has peaks and troughs, and the computing resources cannot change with the traffic. The initially configured resource amount is therefore easily too large or too small, causing either resource waste or insufficient resources and processing delays.
Cloud native: a cloud technology product system built on technologies such as containers and microservices; a distributed cloud based on distributed deployment and unified operations and management.
Kubernetes (K8s): an open-source container orchestration technology for automated deployment, scaling, and management of containerized applications. K8s makes it simple to deploy and manage microservice-architecture applications. It does this by forming an abstraction layer above the cluster, allowing development teams to deploy applications smoothly.
Referring to fig. 1, a control method according to an embodiment of the present disclosure includes:
Step 011: obtaining different index information of the currently parallel tasks.
Specifically, the cloud monitoring platform can obtain different index information of the currently parallel tasks reported by the Flink cluster. The index information can include the number of currently parallel tasks, the average CPU utilization, the average memory utilization, the current time, and the like.
Step 012: determining the target parallel number according to the index information and a preset scaling policy.
Specifically, the index information as received is undifferentiated; to facilitate subsequent diagnosis, it first needs to be classified (for example, according to its tag information) to form different types of index information. Index information of preset types is then obtained, such as the current number of parallel tasks, the average CPU utilization, the average memory utilization, and the current time; capacity expansion or reduction is decided from this preset-type index information, and the target parallel number after expansion or reduction is determined.
Optionally, the target parallel number may be determined according to the difference between the average CPU (central processing unit) utilization and a preset CPU utilization; and/or according to the difference between the average memory utilization and a preset memory utilization; and/or, when the current time is a preset time, set to the preset parallel number corresponding to that preset time.
For example, the preset CPU utilization generally includes an upper limit and a lower limit. When the target parallel number is determined from the difference between the average CPU utilization and the preset CPU utilization, an average CPU utilization above the upper limit (for example, 80%) indicates that the CPU utilization is too high and abnormalities are likely, which hurts rather than helps task execution efficiency; the average CPU utilization therefore needs to be lowered. Lowering it requires capacity expansion, i.e., enlarging the resource usage to increase the number of currently parallel tasks. The increased number of parallel tasks (the target parallel number) can be determined from the difference between the average CPU utilization and the preset upper limit: the larger the difference, the larger the target parallel number. After the number of parallel tasks increases, the CPU utilization of each single task falls, so the average CPU utilization of all tasks falls below the preset upper limit; this ensures the average CPU utilization is not too high while each task still has enough CPU resources for computation, improving task execution efficiency. Conversely, an average CPU utilization below the lower limit (for example, 30%) indicates that a large part of the resources allocated to the tasks is wasted, and the average CPU utilization needs to be raised.
Raising it requires capacity reduction, i.e., shrinking the resource usage to decrease the number of currently parallel tasks. The reduced number of parallel tasks (the target parallel number) can be determined from the difference between the preset lower limit and the average CPU utilization: the larger the difference, the smaller the target parallel number. After the number of parallel tasks decreases, the CPU utilization of each single task rises, so the average CPU utilization of all tasks rises above the preset lower limit; this ensures the average CPU utilization is not too low and prevents resource waste.
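As a rough illustration of this CPU-based rule, the following sketch maps an average CPU utilization to a new target parallel number. The function name and step size are hypothetical; only the 80%/30% bounds and the "larger difference, larger adjustment" behavior come from the examples above.

```python
def target_parallelism(current, avg_cpu, upper=0.80, lower=0.30):
    """Sketch of the CPU-based scaling rule: scale out above the upper
    bound, scale in below the lower bound, proportionally to the gap."""
    if avg_cpu > upper:
        # larger overshoot above the upper limit -> larger target parallel number
        return current + max(1, round((avg_cpu - upper) * 10))
    if avg_cpu < lower:
        # larger undershoot below the lower limit -> smaller target parallel number
        return max(1, current - max(1, round((lower - avg_cpu) * 10)))
    return current  # within bounds: no change
```

For example, with 10 parallel tasks, a 95% average CPU utilization expands capacity, a 10% utilization contracts it, and a 50% utilization leaves the count unchanged.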
Similarly, the preset memory utilization generally includes an upper limit and a lower limit. When the target parallel number is determined from the difference between the average memory utilization and the preset memory utilization, an average memory utilization above the upper limit (for example, 90%) indicates that the memory utilization is too high and abnormalities are likely, which hurts rather than helps task execution efficiency; the average memory utilization therefore needs to be lowered. Lowering it requires capacity expansion to enlarge the resource usage and increase the number of currently parallel tasks; the increased number (the target parallel number) can be determined from the difference between the average memory utilization and the preset upper limit, with a larger difference yielding a larger target parallel number. After the number of parallel tasks increases, the memory utilization of each single task falls, so the average memory utilization of all tasks falls below the preset upper limit; this ensures the average memory utilization is not too high while each task still has enough memory resources for computation, improving task execution efficiency.
Conversely, an average memory utilization below the lower limit (for example, 20%) indicates that a large part of the resources allocated to the tasks is wasted, and the average memory utilization needs to be raised. Raising it requires capacity reduction to shrink the resource usage and decrease the number of currently parallel tasks; the reduced number (the target parallel number) can be determined from the difference between the preset lower limit and the average memory utilization, with a larger difference yielding a smaller target parallel number. After the number of parallel tasks decreases, the memory utilization of each single task rises, so the average memory utilization of all tasks rises above the preset lower limit; this ensures the average memory utilization is not too low and prevents resource waste.
For another example, in certain task-specific scenarios the resources a task uses change regularly across time periods, so capacity expansion or reduction can be performed periodically. Suppose a task uses few resources during odd hours and many during even hours; scaling can then be performed once every hour. When the current time is a preset time (if the first expansion happens at 12 o'clock, every whole hour thereafter is a preset time), the preset parallel number is determined by that time, for example 16 at even whole hours and 12 at odd whole hours. The target parallel number is then set to the preset parallel number corresponding to the preset time, so the number of currently running tasks changes periodically.
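The hourly example above can be sketched directly; the 16/12 figures are the illustrative ones from the text, and the function name is hypothetical:

```python
def preset_parallelism(hour):
    """Periodic policy from the example: even whole hours get a preset
    parallel number of 16, odd whole hours get 12."""
    return 16 if hour % 2 == 0 else 12
```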
Step 013: scheduling resources according to the target parallel number to adjust the number of currently parallel tasks.
Specifically, after the target parallel number is determined, resources can be scheduled accordingly. If the number of currently parallel tasks is 10 and the target parallel number is 16, resources capable of running 6 more tasks (for example, 6 units of CPU and memory resources of a predetermined specification) need to be scheduled. The 16 tasks are then redeployed to use the 16 resource units respectively; for example, the computing work originally split across 10 tasks is further split so that 16 tasks execute it on the 16 resource units. Elastic scaling of resources is thus achieved by monitoring index information, preventing resource waste and guaranteeing task execution efficiency.
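The arithmetic of the 10-to-16 example reduces to a difference; a minimal sketch (hypothetical function name):

```python
def resource_units_to_schedule(current_tasks, target_parallel):
    """How many additional task-sized resource units (CPU + memory of a
    predetermined specification) must be scheduled; a negative result
    means that many units can be released instead."""
    return target_parallel - current_tasks
```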
The architecture that realizes elastic scaling is shown in FIG. 2. The Flink cluster 21 reports various index information in real time to the cloud monitoring platform 22, which forwards it to the index information diagnosis module 23. The index information diagnosis module 23 classifies the index information, diagnoses whether expansion or reduction is needed based on its preset scaling policy and the index information, determines the target parallel number after expansion or reduction, and forwards it to the control platform 24 at the back end. The control platform 24 then obtains the preset scaling policy and index information again and re-diagnoses, thereby determining the final target parallel number to adjust to, and stores it in a database. In addition, to ensure that processing can resume from the breakpoint when tasks are redeployed after resources are adjusted, the already-computed part of each task must be saved before resources and tasks are redeployed to adjust the number of currently parallel tasks. After the information of the currently parallel tasks is saved, resources and tasks can be redeployed; processing is then restored from the saved task information when the tasks are redeployed, so the already-computed parts are not recomputed and task execution efficiency is improved.
The Flink cluster 21 generally saves the information of the currently parallel tasks automatically at a preset interval. The time elapsed since the last save can therefore be computed and, combined with the preset interval, used to determine how long to wait; after the waiting period (at which point the information of the currently parallel tasks has just finished saving), resources and tasks are redeployed to adjust the number of currently parallel tasks. The cluster's own save mechanism thus prevents recomputation of already-computed task parts and improves task execution efficiency. Alternatively, the control platform 24 may directly notify the Flink cluster 21 to save the information of the currently parallel tasks immediately before redeploying the resources and tasks, so there is no need to wait for the automatic save, improving deployment efficiency while still avoiding recomputation of the already-computed parts.
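The waiting rule can be sketched as follows, assuming the cluster auto-saves every `save_interval` seconds; all names and units are illustrative:

```python
def wait_before_redeploy(now, last_save, save_interval):
    """Seconds to wait until the next automatic save of task information
    completes, so redeployment never discards already-computed work."""
    elapsed = now - last_save
    # time remaining in the current save cycle; 0 when a save just finished
    return (save_interval - elapsed % save_interval) % save_interval
```

For instance, with a 30-second save interval and 20 seconds elapsed since the last save, redeployment waits 10 more seconds.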
In addition, when a task is redeployed, it can be deployed on the server that stores its information, so the server reads the task information locally instead of fetching it from other servers over the network, further improving task deployment efficiency.
In the control method, different index information of the currently parallel tasks is monitored: the various index values are computed from the data stream the tasks actually use, so the resources the tasks consume are determined in real time. Based on the index information and a preset scaling policy, the method determines whether capacity expansion (enlarging resource usage to increase the number of currently parallel tasks) or capacity reduction (shrinking resource usage to decrease it) is needed, determines the target parallel number after expansion or reduction, and schedules resources in real time according to the target parallel number to adjust the number of currently parallel tasks. The scheduled resources therefore always match the resources needed by the tasks to be executed, preventing resource waste and maximizing task execution efficiency.
Referring to FIG. 3, optionally, each task has a maximum resource amount and a minimum resource amount, and the resource amount the task actually uses can fluctuate between them. The control method further includes:
Step 014: determining the resource amount actually used by the task according to the index information;
Step 015: executing the task with the actual resource amount when that amount lies between the maximum resource amount and the minimum resource amount.
Specifically, when resources are allocated to each task, a maximum and a minimum resource amount can be set for it, for example a minimum equal to 40% of the maximum. When index information is obtained, the current index information of each task (such as its current CPU utilization and memory utilization) can be obtained, and from it the resource amount the task actually uses can be determined. If the actually used amount lies between the preconfigured maximum and minimum, the task executes with that actual amount. Compared with always executing with the maximum amount, setting a usable resource range for each task (from the minimum to the maximum) and letting the actually used amount fluctuate within it prevents resource waste.
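Under the 40%-of-maximum example above, the bound check can be sketched as follows. The function name is illustrative, and clamping out-of-range values to the nearest bound is an assumption; the text only specifies the in-range case.

```python
def amount_to_run_with(actual, max_amount, min_ratio=0.4):
    """Run with the actually used resource amount when it lies between the
    configured minimum (here 40% of the maximum, per the example) and the
    maximum; otherwise clamp to the nearest bound (assumed behavior)."""
    min_amount = max_amount * min_ratio
    return max(min_amount, min(actual, max_amount))
```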
Referring to fig. 4, optionally, the control method further includes:
Step 016: based on a preset container orchestration algorithm, establishing a task container for each task and a job container for managing all tasks.
Step 013, scheduling resources according to the target parallel number to adjust the number of currently parallel tasks, comprises:
Step 0131: adjusting the number of task containers according to the target parallel number;
Step 0132: applying to the job container for resources based on the adjusted number of task containers;
Step 0133: monitoring the task parallel number corresponding to the resources; and
Step 0134: redeploying all tasks to adjust the number of currently parallel tasks when the task parallel number matches the adjusted number of task containers.
Specifically, referring to FIG. 2 and FIG. 5, in the Flink cluster 21 each task (for example, TM1 and TM2 in FIG. 2) is actively managed by the job manager JM. The resources of each task are provided by the server on which it is deployed, and that server maintains a communication connection with the job manager JM through a heartbeat mechanism, for example sending a heartbeat signal every minute. The job manager JM therefore only senses task changes, such as changes in the number of tasks or in the resources a task actually uses, on that minute cadence, making it difficult to respond quickly after a task changes.
The present application uses the container mechanism of the K8s architecture: based on a preset container orchestration algorithm, a task container is established for each task of the Flink cluster 21 (for example, task container 1 in FIG. 5 corresponds to TM1 in FIG. 2, and task container 2 to TM2), and a job container 25 is established for the job manager JM that manages all tasks. By controlling the number of task containers, the number of currently parallel tasks can be adjusted. In this way, control logic internal to the Flink cluster 21 is moved to the external K8s architecture for control.
After the number of task containers is adjusted to the target parallel number, the task containers apply to the job container 25 for resources based on the adjusted number. Compared with the job manager JM discovering the task count through the heartbeat mechanism, having the task containers actively apply for resources after their number changes is noticeably faster. After receiving the resource application based on the adjusted number of task containers, the job container 25 schedules resources. Since the scheduled resources are not settled immediately, the currently scheduled resources need to be monitored in real time to determine the task parallel number they can support. When that task parallel number matches the adjusted number of task containers (for example, is greater than or equal to it), resource scheduling is complete, and all tasks can be redeployed on the scheduled resources to adjust the number of currently parallel tasks to at least the target parallel number, achieving elastic scaling of resources.
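Steps 0131 to 0134 can be sketched as a small control loop; all callables are hypothetical stand-ins for the container-count setter, the resource monitor, and the redeployment action:

```python
def scale_and_redeploy(target, set_container_count, monitored_parallelism, redeploy):
    """Adjust the task container count, poll the monitored task parallel
    number until it reaches the adjusted count, then redeploy all tasks."""
    set_container_count(target)                 # step 0131: adjust container count
    while monitored_parallelism() < target:     # steps 0132-0133: containers apply
        pass                                    # for resources; poll the monitor
    redeploy(target)                            # step 0134: redeploy all tasks
```

In practice the polling loop would sleep between checks rather than spin.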
The architecture that moves control of the Flink cluster 21 to the K8s architecture is shown in FIG. 5. After the main controller 26 obtains the target parallel number, it adjusts the number of task containers accordingly; the main controller can also connect to the K8s client 27 and perform resource adjustment on input received from it. The task containers apply to the job container 25 for resources based on the adjusted number. The job container 25 includes a job controller 251, a resource manager 252, and a resource monitor 253: the job controller 251 applies for resources according to the number requested by the task containers, the resource manager 252 manages the applied resources, and the resource monitor 253 monitors changes in the applied resources. The resource monitor 253 computes the corresponding task parallel number from the monitored resources, and when it matches the adjusted number of task containers, the job controller 251 redeploys the tasks on the applied resources, adjusting the number of currently parallel tasks; for example, after resources are scheduled according to the target parallel number, the adjusted number of currently parallel tasks reaches the target parallel number.
Therefore, by having the task containers actively apply for resources, rapid scheduling of resources is achieved; the resource monitor 253 monitors the applied resources in real time, so changes in the resources can be sensed immediately without waiting for a communication timeout with the server that provides resources for the tasks; and task deployment is executed immediately after resource scheduling completes, so elastic scaling of resources is achieved quickly.
Optionally, each task container may include one or more available slots; that is, the number of available slots of a task container indicates the number of task threads the container can run in parallel. In this case, the parallel number of tasks matching the adjusted number of task containers may be: parallel number of tasks = adjusted number of task containers × number of available slots per task container.
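As a worked illustration of the formula above (the figures are invented for the example), three adjusted task containers with two available slots each match a parallel task count of 3 × 2 = 6:

```python
def matching_parallelism(num_task_containers, slots_per_container):
    """Parallel task count that matches the adjusted number of task containers,
    where each container's available slots each run one parallel task thread."""
    return num_task_containers * slots_per_container

# 3 adjusted task containers, 2 available slots each -> 6 parallel tasks
print(matching_parallelism(3, 2))  # → 6
```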
To facilitate resource management by an administrator, the applied resource information and the parallel number of tasks that the current resources can support, as monitored by the resource monitor 253, can be displayed in real time as resource usage information, for example at the Web front end 28.
Currently, Flink's history server is used to view the running history of jobs: statistics of completed jobs can still be queried after the corresponding Flink cluster 21 is shut down. The history server is mainly used for batch tasks, whose running state and corresponding run logs need to be queried. Because the volume of batch tasks is large, an efficient history server is needed for such queries.
In the present application, the information generated after each task runs is stored in other high-performance storage, such as a preset database, for example a MySQL database, so that an administrator can query it efficiently. The scheduling information of the resources can also generate an operation log, likewise stored in the preset database, which facilitates queries by background administrators; the operation log can further be opened to the cloud platform, so that users can quickly query the operation log of resource scheduling on the cloud platform.
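A minimal sketch of recording scheduling events as operation-log rows follows. Here `sqlite3` stands in for the MySQL "preset database" described above, and the table and column names are illustrative, not from the patent.

```python
import sqlite3

# In-memory stand-in for the preset (e.g. MySQL) database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE operation_log (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    scheduled_at TEXT NOT NULL,
    target_parallelism INTEGER NOT NULL,
    detail TEXT)""")

def record_scheduling(target_parallelism, detail):
    """Record one resource-scheduling event as an operation-log row."""
    conn.execute(
        "INSERT INTO operation_log (scheduled_at, target_parallelism, detail) "
        "VALUES (datetime('now'), ?, ?)",
        (target_parallelism, detail))
    conn.commit()

record_scheduling(6, "scaled task containers from 3 to 6")
rows = conn.execute(
    "SELECT target_parallelism, detail FROM operation_log").fetchall()
```

Administrators (or the cloud platform) would then query `operation_log` directly instead of replaying cluster history.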
Based on the K8s elastic resource-scaling framework, the present application can achieve cloud-native one-click deployment, automatically calculate the initial resource information of each task, manage the applied resources through the resource manager, and conveniently monitor changes in the applied resources in real time through the resource monitor, which facilitates rapid scheduling of resources and quickly achieves elastic scaling of resources.
In order to better implement the control method according to the embodiment of the present application, the embodiment of the present application also provides a control device 10. Referring to fig. 6, the control device 10 may include:
the acquiring module 11 is configured to acquire different index information of the currently parallel tasks;
the first determining module 12 is configured to determine the target parallel number according to the index information and a preset scaling strategy; and
the adjusting module 13 is configured to adjust the number of the currently parallel tasks according to the target parallel number.
The first determining module 12 is specifically configured to:
determining the target parallel number according to the difference between the average CPU utilization rate and the preset CPU utilization rate; and/or
determining the target parallel number according to the average memory utilization rate and the preset memory utilization rate; and/or
under the condition that the current moment is a preset moment, determining the target parallel number as the preset parallel number corresponding to the preset moment.
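The patent does not give an exact formula for deriving the target parallel number from the CPU-utilization difference. As one plausible realization (an assumption here, following the proportional rule that Kubernetes' Horizontal Pod Autoscaler uses), the current parallelism can be scaled by the ratio of observed to preset utilization:

```python
import math

def target_parallelism_from_cpu(current_parallelism, avg_cpu, preset_cpu):
    """Hypothetical realization of the scaling strategy: scale the current
    parallelism proportionally so the average CPU utilization approaches
    the preset value; never drop below one parallel task."""
    return max(1, math.ceil(current_parallelism * avg_cpu / preset_cpu))

# e.g. 4 parallel tasks at 90% average CPU against a 60% preset -> scale up to 6
```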
The control device 10 further includes:
a second determining module 14, configured to determine, according to the index information, the actual resource amount used by the task; and
an execution module 15, configured to execute the task according to the actual resource amount under the condition that the actual resource amount is between the maximum resource amount and the minimum resource amount.
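A minimal sketch of the bound check follows. Clamping out-of-range values to the nearest bound is an assumption for illustration; the patent only describes the in-range case.

```python
def usable_resource_amount(actual, minimum, maximum):
    """Return the actual resource amount when it lies between the configured
    minimum and maximum; otherwise clamp to the nearest bound (the clamping
    choice is an assumption, not stated in the patent)."""
    return min(max(actual, minimum), maximum)
```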
The control device 10 further includes:
the establishing module 16 is used for establishing a task container for each task and establishing a job container for managing all the tasks, based on a preset container orchestration algorithm;
the adjusting module 13 is specifically configured to:
adjusting the number of task containers according to the target parallel number;
applying to the job container for resources based on the adjusted number of task containers;
monitoring the parallel number of tasks corresponding to the resources; and
under the condition that the parallel number of tasks matches the adjusted number of task containers, redeploying all the tasks to adjust the number of the currently parallel tasks.
The control device 10 further includes:
the display module 17 is used for displaying the resource usage information according to the parallel number of the tasks.
The adjusting module 13 is further configured to redeploy each task at the server that stores the information of each task.
The adjusting module 13 is further configured to, after the information of the currently parallel tasks is stored, schedule resources according to the target parallel number so as to adjust the number of the currently parallel tasks.
The control device 10 further includes:
the storage module 18 is used for recording the information of each resource scheduling to generate an operation log, and storing the operation log in a preset database.
The modules in the control device 10 described above may be implemented in whole or in part by software, hardware, or a combination thereof. Each module may be embedded in hardware in, or independent of, the processor 20 of the computer device, or stored in software form in the memory of the computer device, so that the processor 20 can call it and execute the operations corresponding to each module.
Referring to fig. 7, a computer device 100 according to an embodiment of the present application includes a processor 20. The processor 20 is configured to execute the control method of any of the above embodiments, and for brevity, will not be described herein again.
The computer device 100 may be a mobile terminal, a server (e.g., a cloud server), a tablet computer, a desktop computer, etc.
Referring to fig. 8, the present embodiment further provides a computer-readable storage medium 300, on which a computer program 310 is stored, and steps of the control method according to any of the above embodiments are implemented when the computer program 310 is executed by the processor 20, which is not described herein again for brevity.
It will be appreciated that the computer program 310 comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable storage medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), a software distribution medium, and the like.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example" or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the various embodiments or examples and features of the various embodiments or examples described in this specification can be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code that include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order, as would be understood by those skilled in the art of the embodiments of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (11)

1. A control method, comprising:
acquiring different index information of the current parallel tasks;
determining the target parallel quantity according to the index information and a preset scaling strategy; and
scheduling resources according to the target parallel quantity so as to adjust the quantity of the currently parallel tasks.
2. The control method according to claim 1, wherein the index information includes an average CPU utilization rate, an average memory utilization rate, and/or a current time, and the determining the target parallel quantity according to the index information and a preset scaling strategy includes:
determining the target parallel quantity according to the difference between the average CPU utilization rate and a preset CPU utilization rate; and/or
determining the target parallel quantity according to the average memory utilization rate and a preset memory utilization rate; and/or
under the condition that the current moment is a preset moment, determining the target parallel quantity as a preset parallel quantity corresponding to the preset moment.
3. The control method according to claim 1, wherein there are a maximum amount of resources and a minimum amount of resources for each of the tasks, the control method further comprising:
determining the actual resource amount used by the task according to the index information;
executing the task according to the actual resource amount under the condition that the actual resource amount is between the maximum resource amount and the minimum resource amount.
4. The control method according to claim 1, characterized by further comprising:
establishing, based on a preset container orchestration algorithm, a task container for each of the tasks and a job container for managing all of the tasks;
the scheduling resources according to the target parallel quantity to adjust the quantity of the tasks which are parallel at present comprises:
adjusting the number of the task containers according to the target parallel number;
applying for the resource from the job container based on the adjusted number of task containers;
monitoring the parallel quantity of tasks corresponding to the resources;
under the condition that the parallel number of the tasks matches the adjusted number of the task containers, redeploying all of the tasks so as to adjust the quantity of the currently parallel tasks.
5. The control method according to claim 4, characterized by further comprising:
displaying resource usage information according to the parallel number of the tasks.
6. The control method of claim 4, wherein said redeploying all of said tasks comprises:
redeploying each of the tasks at a server that maintains information for each of the tasks.
7. The control method according to claim 1, further comprising, after determining the target parallel amount according to the index information and a preset scaling strategy:
after the information of the currently parallel tasks is stored, performing the step of scheduling resources according to the target parallel quantity so as to adjust the quantity of the currently parallel tasks.
8. The control method according to claim 1, characterized by further comprising:
recording the information of each scheduling of the resources to generate an operation log, and storing the operation log in a preset database.
9. A control device, characterized by comprising:
the acquisition module is used for acquiring different index information of the current parallel tasks;
the first determining module is used for determining the target parallel quantity according to the index information and a preset scaling strategy; and
the adjusting module is used for adjusting the quantity of the currently parallel tasks according to the target parallel quantity.
10. A computer device comprising a processor configured to perform the control method of any one of claims 1-8.
11. A non-transitory computer-readable storage medium containing a computer program which, when executed by one or more processors, implements execution of the control method of any one of claims 1 to 8.
CN202210945249.4A 2022-08-08 2022-08-08 Control method and device, computer equipment and storage medium Pending CN115391030A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210945249.4A CN115391030A (en) 2022-08-08 2022-08-08 Control method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115391030A true CN115391030A (en) 2022-11-25

Family

ID=84117923


Country Status (1)

Country Link
CN (1) CN115391030A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination