CN112231100A - Queue resource adjusting method and device, electronic equipment and computer readable medium - Google Patents

Queue resource adjusting method and device, electronic equipment and computer readable medium

Info

Publication number: CN112231100A
Application number: CN202011105767.2A
Authority: CN (China)
Prior art keywords: target, task, queue, tasks, allocated
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 李婉洁, 刘远, 郭颂
Current Assignee: Beijing Minglue Zhaohui Technology Co Ltd
Original Assignee: Beijing Minglue Zhaohui Technology Co Ltd
Application filed by Beijing Minglue Zhaohui Technology Co Ltd
Priority / Filing date: 2020-10-15
Publication date: 2021-01-15

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: Allocation of resources to service a request, the resource being the memory


Abstract

The application provides a queue resource adjusting method and device, an electronic device, and a computer-readable medium, belonging to the technical field of clusters. The method comprises the following steps: inputting the current memory utilization rate and the current number of tasks to be allocated in a cluster into a target prediction time-series model to obtain the predicted memory utilization rate and the predicted number of tasks to be allocated output by the model; determining all task identifiers in the cluster when the predicted memory utilization rate is not less than a target memory utilization rate and the predicted number of tasks to be allocated is not less than a target number of tasks to be allocated; when an emergency task database contains at least one of those task identifiers, taking each task identifier in the cluster that exists in the emergency task database as a target task identifier; and determining the target queue corresponding to the target task identifier and adjusting the resources of that queue. The method and the device improve the flexibility of resource adjustment and avoid resource congestion.

Description

Queue resource adjusting method and device, electronic equipment and computer readable medium
Technical Field
The present application relates to the field of cluster technologies, and in particular, to a method and an apparatus for adjusting queue resources, an electronic device, and a computer-readable medium.
Background
A Hadoop cluster can allocate resources to a plurality of service queues. Each service queue has a priority, and the resources allocated to it differ accordingly: in general, more resources are allocated to high-priority service queues and fewer resources to low-priority service queues.
Each queue has a corresponding configuration file. At present, queue resources in the cluster are adjusted by switching configuration files on a fixed schedule; if queue resources need to be adjusted urgently, the configuration files must additionally be modified by hand, so timeliness is low and the flexibility of queue resource adjustment is poor.
Disclosure of Invention
An object of the embodiments of the present application is to provide a method, an apparatus, an electronic device, and a computer-readable medium for adjusting queue resources, so as to solve the problem of poor flexibility in resource adjustment. The specific technical scheme is as follows:
in a first aspect, a method for adjusting queue resources is provided, where the method includes:
inputting the current memory utilization rate and the current number of tasks to be allocated in the cluster into a target prediction time sequence model to obtain the predicted memory utilization rate and the predicted number of tasks to be allocated output by the target prediction time sequence model;
determining all task identifiers in the cluster under the condition that the predicted memory utilization rate is not less than the target memory utilization rate and the predicted number of tasks to be allocated is not less than the target number of tasks to be allocated, wherein the cluster comprises a plurality of tasks, and each task has a corresponding task identifier;
under the condition that an emergency task database contains at least one task identifier, taking the task identifier existing in the emergency task database in the cluster as a target task identifier;
and determining a target queue corresponding to the target task identifier, and performing resource adjustment on the target queue, wherein the cluster comprises a plurality of queues.
Optionally, the cluster includes a plurality of parent queues, each parent queue includes a plurality of child queues, each child queue includes a plurality of tasks, and determining a target queue corresponding to the target task identifier and performing resource adjustment on the target queue includes:
determining a target parent queue corresponding to the target task identifier;
and under the condition that all services of the target parent queue are urgent, adjusting resources in the target parent queue.
Optionally, the cluster includes a plurality of parent queues, each parent queue includes a plurality of child queues, each child queue includes a plurality of tasks, and determining a target queue corresponding to the target task identifier and performing resource adjustment on the target queue includes:
determining a target parent queue corresponding to the target task identifier;
determining a target sub-queue corresponding to the target task identifier under the condition that the service of the target parent queue is partially urgent;
and adjusting the resources in the target sub-queue.
Optionally, determining that the services of the target parent queue are all urgent includes:
determining a target service identifier of the target parent queue, and acquiring an emergency service identifier in an emergency service database;
and determining that the services of the target parent queue are all urgent under the condition that the emergency service database contains the emergency service identifier matched with the target service identifier.
Optionally, after determining that the emergency task database contains at least one task identifier, the method further includes:
and sending warning information to the target terminal so that the target terminal can know that the cluster needs to perform resource adjustment.
Optionally, after the current memory usage rate and the current number of tasks to be allocated are input to a target prediction timing model, and a predicted memory usage rate and a predicted number of tasks to be allocated output by the target prediction timing model are obtained, the method further includes:
and under the condition that the predicted memory utilization rate is less than the target memory utilization rate or the predicted task number to be allocated is less than the target task number to be allocated, continuously acquiring the memory utilization rate and the task number to be allocated of the cluster after a preset time length.
Optionally, before the current memory usage rate and the current number of tasks to be allocated are input into a target prediction timing model to obtain a predicted memory usage rate and a predicted number of tasks to be allocated output by the target prediction timing model, the method further includes:
obtaining a sample memory utilization rate, a sample task quantity to be distributed, a first memory utilization rate corresponding to the sample memory utilization rate and a first task quantity to be distributed corresponding to the sample task quantity to be distributed;
inputting the sample memory utilization rate and the number of the tasks to be allocated to the sample into an initial prediction time sequence model to obtain a second memory utilization rate and a second number of the tasks to be allocated output by the initial prediction time sequence model;
and under the condition that the first memory utilization rate is different from the second memory utilization rate or the first task quantity to be allocated is different from the second task quantity to be allocated, adjusting model parameters of the initial prediction time sequence model to obtain the target prediction time sequence model, wherein the second memory utilization rate output by the target prediction time sequence model is consistent with the first memory utilization rate, and the second task quantity to be allocated output by the target prediction time sequence model is consistent with the first task quantity to be allocated.
In a second aspect, an apparatus for adjusting queue resources is provided, the apparatus comprising:
the input and output module is used for inputting the current memory utilization rate and the current number of the tasks to be allocated in the cluster into a target prediction time sequence model to obtain the predicted memory utilization rate and the predicted number of the tasks to be allocated output by the target prediction time sequence model;
the determining module is used for determining all task identifiers in the cluster under the condition that the predicted memory utilization rate is not less than the target memory utilization rate and the predicted number of the tasks to be allocated is not less than the target number of the tasks to be allocated, wherein the cluster comprises a plurality of tasks, and each task has a corresponding task identifier;
the task identification module is used for taking the task identification in the emergency task database in the cluster as a target task identification under the condition that the emergency task database contains at least one task identification;
and the adjusting module is used for determining a target queue corresponding to the target task identifier and adjusting resources of the target queue, wherein the cluster comprises a plurality of queues.
In a third aspect, an electronic device is provided, which includes a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing any of the method steps described herein when executing the program stored in the memory.
In a fourth aspect, a computer-readable storage medium is provided, having stored thereon a computer program which, when being executed by a processor, carries out any of the method steps.
The embodiment of the application has the following beneficial effects:
the embodiment of the application provides a queue resource adjusting method, which comprises the following steps: inputting the current memory utilization rate and the number of tasks to be allocated in the cluster into a target prediction time sequence model to obtain the predicted memory utilization rate and the predicted number of tasks to be allocated output by the target prediction time sequence model, determining all task identifiers in the cluster by a server under the condition that the predicted memory utilization rate is not less than the target memory utilization rate and the predicted number of tasks to be allocated is not less than the target number of tasks to be allocated, taking the task identifier existing in an emergency task database in the cluster as the target task identifier under the condition that the emergency task database contains at least one task identifier, determining a target queue corresponding to the target task identifier, and adjusting resources of the target queue. According to the method and the device, the resource adjustment is performed on the queue in advance through the predicted memory utilization rate and the number of the tasks to be allocated, early warning processing is achieved in advance, the flexibility of the resource adjustment is improved, and resource congestion is avoided.
Of course, not all of the above advantages need be achieved in the practice of any one product or method of the present application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below; those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a method for adjusting queue resources according to an embodiment of the present disclosure;
fig. 2 is a flowchart of a method for adjusting resources in a cluster according to an embodiment of the present application;
fig. 3 is a flowchart of a process for adjusting queue resources according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a queue resource adjusting apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment of the application provides a queue resource adjusting method, which can be applied to a server and used for adjusting queue resources in a cluster.
A detailed description will be given below of a queue resource adjustment method provided in an embodiment of the present application with reference to a specific implementation manner, as shown in fig. 1, the specific steps are as follows:
step 101: and inputting the current memory utilization rate and the current number of tasks to be allocated in the cluster into the target prediction time sequence model to obtain the predicted memory utilization rate and the predicted number of tasks to be allocated output by the target prediction time sequence model.
In the embodiment of the application, the Hadoop cluster is a distributed system infrastructure. Hadoop implements a distributed file system that provides high-throughput access to application data and is suitable for applications with very large data sets. The cluster contains a plurality of queues, and the resources in the Hadoop cluster are divided among these queues for use. Each queue corresponds to a service group. Queue resources are generally not distributed evenly: different resources are allocated to each service queue according to the priority of its services, so that low-priority services receive fewer queue resources and high-priority services receive more. The queues mainly provide message receiving and sending, enabling message synchronization among microservices.
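As an informal illustration only (not part of the claimed method, and with purely hypothetical queue names and priority weights), the priority-based division of cluster resources among service queues could be sketched as follows:

```python
# Hypothetical sketch: split total cluster memory among service queues in
# proportion to their priority weights. Queue names and weights are assumed.

def split_capacity(total_memory_gb, queue_priorities):
    """Divide the total cluster memory among queues proportionally to priority."""
    total_weight = sum(queue_priorities.values())
    return {
        queue: round(total_memory_gb * weight / total_weight, 1)
        for queue, weight in queue_priorities.items()
    }

# Higher-priority service queues receive a larger share of the resources.
priorities = {"billing": 5, "reporting": 3, "ad_hoc": 1}
print(split_capacity(1024, priorities))
# {'billing': 568.9, 'reporting': 341.3, 'ad_hoc': 113.8}
```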
The cluster has a plurality of tasks to be allocated, and it distributes these tasks to the queues according to service priority. Each queue contains a plurality of tasks; different tasks may have different urgency, and each task occupies a different amount of cluster memory. When the number of emergency tasks in the cluster becomes too large, the resources in the cluster need to be adjusted so that the cluster resources are used effectively.
And the server regularly acquires the current memory utilization rate and the current number of tasks to be distributed of the cluster. And inputting the obtained current memory utilization rate and the current number of the tasks to be allocated into a target prediction time sequence model to obtain the predicted memory utilization rate and the predicted number of the tasks to be allocated output by the target prediction time sequence model.
Because the server acquires the current memory utilization rate and the current number of tasks to be allocated at regular intervals, the predicted memory utilization rate output by the target prediction time-series model is the memory utilization rate after a target duration, and the predicted number of tasks to be allocated output by the model is the number of tasks to be allocated after that target duration. The target duration is the length of the timed sampling interval, and the target prediction time-series model may be a TreNet deep learning network.
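A minimal, non-authoritative sketch of the prediction step is given below. The real model could be a TreNet-style network; here a trivial exponential-smoothing placeholder with the same interface (current metrics in, metrics after the target duration out) stands in for it, and the smoothing factor is an assumption.

```python
# Stand-in for the target prediction time-series model: exponential smoothing
# over the periodically sampled (memory utilization, pending task count).

class PredictionTimeSeriesModel:
    def __init__(self, alpha=0.5):
        self.alpha = alpha   # smoothing factor (assumed)
        self.prev = None     # last observation (memory_usage, pending_tasks)

    def predict(self, memory_usage, pending_tasks):
        """Return (predicted memory utilization, predicted number of tasks
        to be allocated) for the next target-duration interval."""
        if self.prev is None:
            self.prev = (memory_usage, pending_tasks)
            return memory_usage, pending_tasks
        pred_mem = self.alpha * memory_usage + (1 - self.alpha) * self.prev[0]
        pred_tasks = self.alpha * pending_tasks + (1 - self.alpha) * self.prev[1]
        self.prev = (memory_usage, pending_tasks)
        return pred_mem, round(pred_tasks)

model = PredictionTimeSeriesModel()
print(model.predict(0.72, 120))   # first call simply echoes the observation
print(model.predict(0.80, 150))   # approximately (0.76, 135): next-interval forecast
```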
As an optional implementation manner, before the current memory usage rate and the current number of tasks to be allocated are input into the target prediction timing model to obtain the predicted memory usage rate and the predicted number of tasks to be allocated output by the target prediction timing model, the method further includes: obtaining the sample memory utilization rate, the number of tasks to be distributed of the sample, a first memory utilization rate corresponding to the sample memory utilization rate and a first number of tasks to be distributed corresponding to the number of the tasks to be distributed of the sample; inputting the memory utilization rate of the sample and the number of tasks to be allocated to the sample into an initial prediction time sequence model to obtain the second memory utilization rate and the number of tasks to be allocated output by the initial prediction time sequence model; and under the condition that the first memory utilization rate is different from the second memory utilization rate or the first task quantity to be distributed is different from the second task quantity to be distributed, adjusting model parameters of the initial prediction time sequence model to obtain a target prediction time sequence model, wherein the second memory utilization rate output by the target prediction time sequence model is consistent with the first memory utilization rate, and the second task quantity to be distributed output by the target prediction time sequence model is consistent with the first task quantity to be distributed.
In the embodiment of the application, the server obtains the sample memory usage rate, the number of tasks to be allocated for the sample, the first memory usage rate corresponding to the sample memory usage rate, and the first number of tasks to be allocated corresponding to the number of tasks to be allocated for the sample, and inputs the sample memory usage rate and the number of tasks to be allocated for the sample into the initial prediction time sequence model, so as to obtain the second memory usage rate and the second number of tasks to be allocated output by the initial prediction time sequence model. If the server judges that the first memory utilization rate is different from the second memory utilization rate or the first task quantity to be distributed is different from the second task quantity to be distributed, the server adjusts model parameters of the initial prediction time sequence model until the second memory utilization rate output by the target prediction time sequence model is consistent with the first memory utilization rate and the second task quantity to be distributed output by the target prediction time sequence model is consistent with the first task quantity to be distributed, and obtains a target prediction time sequence model.
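Purely as an illustration of the training procedure just described (and not the patent's actual model), the sketch below fits a toy scale-factor model by gradient descent: the model parameters are adjusted until the second memory utilization rate and second number of tasks to be allocated it outputs agree with the first (labelled) values. The sample data, learning rate and tolerance are all assumptions.

```python
# Toy training loop: next_metric ≈ factor * current_metric, with the two
# factors adjusted until the outputs are consistent with the labels.

def train_prediction_model(samples, labels, lr=0.05, tol=1e-3, max_iter=10000):
    f_mem, f_task = 1.0, 1.0                       # initial model parameters
    for _ in range(max_iter):
        worst_error = 0.0
        for (mem, tasks), (mem_next, tasks_next) in zip(samples, labels):
            err_mem = f_mem * mem - mem_next       # second vs. first memory usage
            err_task = f_task * tasks - tasks_next
            worst_error = max(worst_error, abs(err_mem), abs(err_task))
            f_mem -= lr * err_mem * mem            # adjust model parameters
            f_task -= lr * err_task * tasks / (tasks * tasks + 1.0)
        if worst_error < tol:                      # outputs match the labels
            break
    return f_mem, f_task

# Assumed samples: memory usage grows ~10% and pending tasks ~20% per interval.
samples = [(0.50, 100), (0.60, 110), (0.70, 130)]
labels = [(0.55, 120.0), (0.66, 132.0), (0.77, 156.0)]
print(train_prediction_model(samples, labels))     # approximately (1.1, 1.2)
```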
Step 102: and under the conditions that the predicted memory utilization rate is not less than the target memory utilization rate and the predicted number of the tasks to be distributed is not less than the target number of the tasks to be distributed, determining all task identifiers in the cluster.
The cluster comprises a plurality of tasks, and each task has a corresponding task identifier.
In the embodiment of the application, after obtaining the predicted memory usage rate and the predicted number of tasks to be allocated, the server judges whether the predicted memory usage rate is not less than the target memory usage rate and whether the predicted number of tasks to be allocated is not less than the target number of tasks to be allocated.
If the server judges that the predicted memory utilization rate is smaller than the target memory utilization rate, or that the predicted number of tasks to be allocated is smaller than the target number of tasks to be allocated, this indicates that the memory utilization rate and the number of tasks to be allocated of the cluster are not excessively high, and the cluster resources do not need to be readjusted.
If the server judges that the predicted memory utilization rate is not less than the target memory utilization rate and the predicted number of tasks to be allocated is not less than the target number of tasks to be allocated, this indicates that the memory utilization rate and the number of tasks to be allocated of the cluster will be too high; it then needs to be judged whether the tasks in the cluster are urgent, so the server obtains the task identifiers of all the tasks. A task identifier may be, for example, a number corresponding to the task.
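The threshold comparison and the collection of task identifiers could be sketched as below; the threshold values and the shape of the task records are assumptions made for illustration only.

```python
# Assumed thresholds for step 102 (the patent does not fix concrete values).
TARGET_MEMORY_USAGE = 0.85   # target memory utilization rate
TARGET_PENDING_TASKS = 200   # target number of tasks to be allocated

def needs_urgency_check(predicted_memory, predicted_pending):
    """True when both predictions reach their targets, i.e. cluster load is
    expected to be too high and the tasks must be checked for urgency."""
    return (predicted_memory >= TARGET_MEMORY_USAGE
            and predicted_pending >= TARGET_PENDING_TASKS)

def all_task_ids(cluster_tasks):
    """Collect the task identifier (e.g. a task number) of every task."""
    return {task["id"] for task in cluster_tasks}

print(needs_urgency_check(0.90, 240))                         # True
print(all_task_ids([{"id": 101}, {"id": 102}, {"id": 103}]))  # {101, 102, 103}
```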
Step 103: and under the condition that the emergency task database comprises at least one task identifier, taking the task identifier existing in the emergency task database in the cluster as a target task identifier.
An emergency task database is provided, which contains the task identifiers of emergency tasks. If the server determines that the emergency task database contains at least one task identifier belonging to the cluster, the task corresponding to that identifier is an emergency task, and the server takes that task identifier as a target task identifier. If the server determines that the emergency task database contains none of the task identifiers in the cluster, no task in the cluster is an emergency task and queue resources do not need to be adjusted.
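A minimal sketch of this selection, with the emergency task database modelled simply as a set of task identifiers (an assumption made for illustration):

```python
def target_task_ids(cluster_task_ids, emergency_task_db):
    """Task identifiers that exist both in the cluster and in the emergency
    task database; an empty result means no queue adjustment is required."""
    return set(cluster_task_ids) & set(emergency_task_db)

print(target_task_ids({101, 102, 103}, {102, 105}))   # {102}
```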
Step 104: and determining a target queue corresponding to the target task identifier, and performing resource adjustment on the target queue.
The cluster comprises a plurality of queues, and each queue corresponds to a plurality of tasks.
In the embodiment of the application, the cluster includes a plurality of queues, each queue corresponds to a plurality of tasks, each task has a corresponding task identifier, and after the server determines the target task identifier, the server determines a target queue corresponding to the target task identifier and performs resource adjustment on the target queue, which may specifically be to increase the weight of the target queue or increase the resource storage amount of the target queue.
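The two adjustment options mentioned above (raising the weight of the target queue or increasing its resource storage amount) can be sketched as follows; the queue record fields, the gain factor and the memory increment are assumptions, not values taken from the patent.

```python
def adjust_queue(queue, weight_gain=1.5, extra_memory_gb=64):
    """Give the target queue a larger scheduling weight and a larger
    resource storage amount (modelled here as a memory quota)."""
    queue["weight"] *= weight_gain
    queue["memory_limit_gb"] += extra_memory_gb
    return queue

target_queue = {"name": "urgent_reports", "weight": 1.0, "memory_limit_gb": 128}
print(adjust_queue(target_queue))
# {'name': 'urgent_reports', 'weight': 1.5, 'memory_limit_gb': 192}
```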
In the application, the server obtains the predicted memory utilization rate and the predicted number of tasks to be allocated output by the target prediction time-series model, and can thus predict the memory utilization rate and the number of tasks to be allocated in the next stage; if an emergency task is identified, resource adjustment is performed on the queue corresponding to that task. Because the memory utilization rate and the number of tasks to be allocated in the next stage can be predicted, the queue resources are adjusted in advance, realizing early-warning processing, improving the flexibility of resource adjustment, and avoiding delays in resource allocation. In addition, adopting automatic resource adjustment improves operation efficiency compared with manual resource adjustment.
As an optional implementation manner, the cluster includes multiple parent queues, each parent queue includes multiple child queues, each child queue includes multiple tasks, determining a target queue corresponding to a target task identifier, and performing resource adjustment on the target queue includes: determining a target parent queue corresponding to the target task identifier; and under the condition that all services of the target parent queue are urgent, adjusting the resources in the target parent queue.
In the embodiment of the application, a cluster includes multiple parent queues, each parent queue corresponds to a service group, each parent queue includes multiple child queues, each child queue includes multiple tasks, after a server determines a target task identifier, a target parent queue corresponding to the target task identifier is determined, if all services in the service group corresponding to the target parent queue are emergency services, the server adjusts resources in the target parent queue, that is, adjusts resources of all emergency services in the service group.
As an optional implementation, determining that the services of the target parent queue are all urgent includes: determining a target service identifier of the target parent queue, and acquiring the emergency service identifiers in an emergency service database; and determining that the services of the target parent queue are all urgent when the emergency service database contains an emergency service identifier matching the target service identifier.
In the embodiment of the application, the emergency service database contains a plurality of emergency service identifiers, and the server obtains the target service identifier of the target parent queue. If the server judges that the target service identifier matches an emergency service identifier in the emergency service database, this indicates that all services in the target parent queue corresponding to the target service identifier are urgent. If the server judges that the target service identifier does not match any emergency service identifier in the emergency service database, this indicates that the services in the target parent queue corresponding to the target service identifier are only partially urgent.
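A compact sketch of this match, again modelling the emergency service database as a set of identifiers (an assumption made for illustration):

```python
def parent_queue_fully_urgent(target_service_id, emergency_service_db):
    """All services of the target parent queue are urgent when its service
    identifier matches an identifier in the emergency service database."""
    return target_service_id in emergency_service_db

print(parent_queue_fully_urgent("svc_billing", {"svc_billing", "svc_audit"}))  # True
print(parent_queue_fully_urgent("svc_reports", {"svc_billing", "svc_audit"}))  # False
```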
As an alternative implementation, as shown in fig. 2, the cluster includes a plurality of parent queues, each parent queue includes a plurality of child queues, each child queue includes a plurality of tasks, and adjusting the resource in the cluster includes:
step 201: and determining a target parent queue corresponding to the task identifier.
In the embodiment of the application, after the server determines the target task identifier, the server determines a target parent queue corresponding to the target task identifier.
Step 202: and under the condition that the service of the target parent queue is partially urgent, determining a target sub-queue corresponding to the target task identifier.
In this embodiment of the present application, not all services in the service group corresponding to the target parent queue are emergency services, which indicates that only the tasks of some sub-queues in the target parent queue are urgent; the server therefore only needs to determine the target sub-queue corresponding to the target task identifier, without adjusting all resources in the target parent queue.
Step 203: and adjusting the resources in the target sub-queue.
In the embodiment of the present application, the server adjusts resources in the target sub-queue, which may specifically be to increase the weight of the target sub-queue or increase the resource storage amount of the target sub-queue.
In the application, if the server determines that all services of the target parent queue are urgent, the server performs resource adjustment on the target parent queue, and if the server determines that part of the services of the target parent queue are urgent, the server performs resource adjustment on the target child queue, so that the resource adjustment is performed only on the queue corresponding to the urgent services.
As an optional implementation manner, after determining that the task identifier exists in the emergency task database, the method further includes: and sending warning information to the target terminal so that the target terminal can know that the cluster needs to perform resource adjustment.
In the embodiment of the application, after determining that the task identifier exists in the emergency task database, the server sends the warning information to the target terminal, so that the target terminal knows that the cluster needs to perform resource adjustment.
As an optional implementation manner, after the current memory usage rate and the current number of tasks to be allocated are input into the target prediction timing model, and the predicted memory usage rate and the predicted number of tasks to be allocated output by the target prediction timing model are obtained, the method further includes: and under the condition that the predicted memory utilization rate is less than the target memory utilization rate or the predicted number of the tasks to be distributed is less than the target number of the tasks to be distributed, continuously acquiring the memory utilization rate and the number of the tasks to be distributed of the cluster after a preset time length.
In the embodiment of the application, when the predicted memory usage rate is smaller than the target memory usage rate or the predicted number of tasks to be allocated is smaller than the target number of tasks to be allocated, the server does not adjust the queue resources, and continues to acquire the memory usage rate of the cluster and the number of tasks to be allocated after the preset time length.
In the application, the server adjusts the queue resources once an emergency task has been identified, rather than waiting until the memory utilization rate is already too high or the number of tasks to be allocated is already too large, which avoids the situation where an emergency task arrives but the resources can no longer be adjusted in time.
Optionally, an embodiment of the present application further provides a processing flow chart of queue resource adjustment, as shown in fig. 3, the specific steps are as follows.
Step 301: and acquiring the current memory utilization rate and the current number of tasks to be distributed in the cluster.
Step 302: and inputting the current memory utilization rate and the current number of tasks to be allocated in the cluster into the target prediction time sequence model to obtain the predicted memory utilization rate and the predicted number of tasks to be allocated output by the target prediction time sequence model.
Step 303: judging whether the predicted memory utilization rate is not less than the target memory utilization rate, if so, executing step 304; if not, returning to the step 301, and continuing to acquire the memory usage rate of the cluster and the number of the tasks to be allocated after the preset time length.
Step 304: judging whether the predicted number of the tasks to be distributed is not less than the target number of the tasks to be distributed, if so, executing a step 305; if not, returning to the step 301, and continuing to acquire the memory usage rate of the cluster and the number of the tasks to be allocated after the preset time length.
Step 305: determining all task identifiers in the cluster, judging whether the emergency task database contains at least one task identifier, and if so, executing step 306; if not, returning to the step 301;
step 306: and taking the task identifier existing in the emergency task database in the cluster as a target task identifier.
Step 307: determining a target parent queue corresponding to the target task identifier, judging whether all services of the target parent queue are urgent, if so, executing a step 308; if not, go to step 309.
Step 308: the resources in the target parent queue are adjusted.
Step 309: and adjusting the resources in the target sub-queue.
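Combining the steps above, the whole processing flow of fig. 3 could be orchestrated roughly as follows. This is a non-authoritative sketch: the cluster object and its methods, the two databases, and the polling interval are all hypothetical placeholders standing in for whatever the real deployment provides.

```python
import time

def queue_adjustment_loop(model, cluster, emergency_task_ids, emergency_service_ids,
                          mem_threshold=0.85, task_threshold=200,
                          interval_seconds=300, max_rounds=None):
    """model exposes predict(mem, pending); cluster is assumed to expose
    metrics(), task_ids(), parent_queue_of(task_id), child_queue_of(task_id)
    and adjust(queue): all hypothetical helpers used only for illustration."""
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        rounds += 1
        mem, pending = cluster.metrics()                          # step 301
        pred_mem, pred_pending = model.predict(mem, pending)      # step 302
        if pred_mem >= mem_threshold and pred_pending >= task_threshold:  # steps 303-304
            targets = cluster.task_ids() & emergency_task_ids     # steps 305-306
            for task_id in targets:
                parent = cluster.parent_queue_of(task_id)         # step 307
                if parent["service_id"] in emergency_service_ids: # all services urgent?
                    cluster.adjust(parent)                        # step 308
                else:
                    cluster.adjust(cluster.child_queue_of(task_id))  # step 309
        time.sleep(interval_seconds)   # otherwise, sample again after the preset duration
```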
Based on the same technical concept, an embodiment of the present application further provides a queue resource adjusting apparatus, as shown in fig. 4, the apparatus includes:
the input/output module 401 is configured to input the current memory usage rate and the current number of tasks to be allocated in the cluster into the target prediction timing model, so as to obtain a predicted memory usage rate and a predicted number of tasks to be allocated, which are output by the target prediction timing model;
a determining module 402, configured to determine all task identifiers in a cluster under the conditions that the predicted memory usage rate is not less than the target memory usage rate and the predicted number of tasks to be allocated is not less than the target number of tasks to be allocated, where the cluster includes multiple tasks and each task has a corresponding task identifier;
a task identification module 403, configured to, when the emergency task database includes at least one task identifier, use a task identifier existing in the emergency task database in the cluster as a target task identifier;
a first adjusting module 404, configured to determine a target queue corresponding to the target task identifier, and perform resource adjustment on the target queue, where the cluster includes multiple queues.
Optionally, the cluster includes a plurality of parent queues, each parent queue includes a plurality of child queues, each child queue includes a plurality of tasks, and the first adjusting module 404 includes:
the first determining unit is used for determining a target parent queue corresponding to the target task identifier;
and the first adjusting unit is used for adjusting the resources in the target parent queue under the condition that all the services of the target parent queue are urgent.
Optionally, the cluster includes a plurality of parent queues, each parent queue includes a plurality of child queues, each child queue includes a plurality of tasks, and the first adjusting module 404 includes:
the second determining unit is used for determining a target parent queue corresponding to the target task identifier;
a third determining unit, configured to determine a target sub-queue corresponding to the target task identifier when a service of the target parent queue is partially urgent;
and the second adjusting unit is used for adjusting the resources in the target sub-queue.
Optionally, the first adjusting unit includes:
the first determining subunit is used for determining a target service identifier of the target parent queue and acquiring an emergency service identifier in an emergency service database;
and the second determining subunit is used for determining that the service of the target parent queue is all urgent under the condition that the emergency service database contains the emergency service identifier matched with the target service identifier.
Optionally, the apparatus further comprises:
and the sending module is used for sending the warning information to the target terminal so that the target terminal can know that the cluster needs to perform resource adjustment.
Optionally, the apparatus further comprises:
the first obtaining module is used for continuously obtaining the memory utilization rate of the cluster and the number of the tasks to be allocated after a preset time length under the condition that the predicted memory utilization rate is smaller than the target memory utilization rate or the predicted number of the tasks to be allocated is smaller than the target number of the tasks to be allocated.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring the sample memory utilization rate, the number of the tasks to be allocated for the sample, the first memory utilization rate corresponding to the sample memory utilization rate and the first number of the tasks to be allocated corresponding to the number of the tasks to be allocated for the sample;
the input module is used for inputting the memory utilization rate of the samples and the number of the tasks to be allocated to the samples into the initial prediction time sequence model to obtain the second memory utilization rate and the number of the tasks to be allocated output by the initial prediction time sequence model;
and the second adjusting module is used for adjusting the model parameters of the initial prediction time sequence model to obtain a target prediction time sequence model under the condition that the first memory utilization rate is different from the second memory utilization rate or the first task quantity to be allocated is different from the second task quantity to be allocated, wherein the second memory utilization rate output by the target prediction time sequence model is consistent with the first memory utilization rate, and the second task quantity to be allocated output by the target prediction time sequence model is consistent with the first task quantity to be allocated.
Based on the same technical concept, the embodiment of the present invention further provides an electronic device, as shown in fig. 5, including a processor 501, a communication interface 502, a memory 503 and a communication bus 504, wherein the processor 501, the communication interface 502 and the memory 503 communicate with each other through the communication bus 504,
a memory 503 for storing a computer program;
the processor 501 is configured to implement the above steps when executing the program stored in the memory 503.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), for example at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In a further embodiment provided by the present invention, there is also provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of any of the methods described above.
In a further embodiment provided by the present invention, there is also provided a computer program product containing instructions which, when run on a computer, cause the computer to perform any of the methods of the above embodiments.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device such as a server or data center that incorporates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for adjusting queue resources, the method comprising:
inputting the current memory utilization rate and the current number of tasks to be allocated in the cluster into a target prediction time sequence model to obtain the predicted memory utilization rate and the predicted number of tasks to be allocated output by the target prediction time sequence model;
determining all task identifiers in the cluster under the condition that the predicted memory utilization rate is not less than the target memory utilization rate and the predicted number of tasks to be allocated is not less than the target number of tasks to be allocated, wherein the cluster comprises a plurality of tasks, and each task has a corresponding task identifier;
under the condition that an emergency task database contains at least one task identifier, taking the task identifier existing in the emergency task database in the cluster as a target task identifier;
and determining a target queue corresponding to the target task identifier, and performing resource adjustment on the target queue, wherein the cluster comprises a plurality of queues, and each queue corresponds to a plurality of tasks.
2. The method of claim 1, wherein the cluster includes a plurality of parent queues, each parent queue includes a plurality of child queues, and each child queue includes a plurality of tasks, and wherein determining a target queue corresponding to the target task identifier and performing resource adjustment on the target queue includes:
determining a target parent queue corresponding to the target task identifier;
and under the condition that all services of the target parent queue are urgent, adjusting resources in the target parent queue.
3. The method of claim 1, wherein the cluster includes a plurality of parent queues, each parent queue includes a plurality of child queues, and each child queue includes a plurality of tasks, and wherein determining a target queue corresponding to the target task identifier and performing resource adjustment on the target queue includes:
determining a target parent queue corresponding to the target task identifier;
determining a target sub-queue corresponding to the target task identifier under the condition that the service of the target parent queue is partially urgent;
and adjusting the resources in the target sub-queue.
4. The method of claim 2, wherein determining that the services of the target parent queue are all urgent comprises:
determining a target service identifier of the target parent queue, and acquiring an emergency service identifier in an emergency service database;
and determining that the services of the target parent queue are all urgent under the condition that the emergency service database contains the emergency service identifier matched with the target service identifier.
5. The method of claim 1, wherein after determining that an emergency task database contains at least one of the task identifiers, the method further comprises:
and sending warning information to the target terminal so that the target terminal can know that the cluster needs to perform resource adjustment.
6. The method according to claim 1, wherein after inputting the current memory usage rate and the current number of tasks to be allocated into a target prediction timing model, and obtaining a predicted memory usage rate and a predicted number of tasks to be allocated output by the target prediction timing model, the method further comprises:
and under the condition that the predicted memory utilization rate is less than the target memory utilization rate or the predicted task number to be allocated is less than the target task number to be allocated, continuously acquiring the memory utilization rate and the task number to be allocated of the cluster after a preset time length.
7. The method according to claim 1, wherein before inputting the current memory usage rate and the current number of tasks to be allocated into a target prediction timing model to obtain a predicted memory usage rate and a predicted number of tasks to be allocated output by the target prediction timing model, the method further comprises:
obtaining a sample memory utilization rate, a sample task quantity to be distributed, a first memory utilization rate corresponding to the sample memory utilization rate and a first task quantity to be distributed corresponding to the sample task quantity to be distributed;
inputting the sample memory utilization rate and the number of the tasks to be allocated to the sample into an initial prediction time sequence model to obtain a second memory utilization rate and a second number of the tasks to be allocated output by the initial prediction time sequence model;
and under the condition that the first memory utilization rate is different from the second memory utilization rate or the first task quantity to be allocated is different from the second task quantity to be allocated, adjusting model parameters of the initial prediction time sequence model to obtain the target prediction time sequence model, wherein the second memory utilization rate output by the target prediction time sequence model is consistent with the first memory utilization rate, and the second task quantity to be allocated output by the target prediction time sequence model is consistent with the first task quantity to be allocated.
8. An apparatus for queue resource adjustment, the apparatus comprising:
the input and output module is used for inputting the current memory utilization rate and the current number of the tasks to be allocated in the cluster into a target prediction time sequence model to obtain the predicted memory utilization rate and the predicted number of the tasks to be allocated output by the target prediction time sequence model;
the determining module is used for determining all task identifiers in the cluster under the condition that the predicted memory utilization rate is not less than the target memory utilization rate and the predicted number of the tasks to be allocated is not less than the target number of the tasks to be allocated, wherein the cluster comprises a plurality of tasks, and each task has a corresponding task identifier;
the task identification module is used for taking the task identification in the emergency task database in the cluster as a target task identification under the condition that the emergency task database contains at least one task identification;
and the adjusting module is used for determining a target queue corresponding to the target task identifier and adjusting resources of the target queue, wherein the cluster comprises a plurality of queues.
9. An electronic device, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any of claims 1 to 7 when executing a program stored in the memory.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
CN202011105767.2A 2020-10-15 2020-10-15 Queue resource adjusting method and device, electronic equipment and computer readable medium Pending CN112231100A (en)

Priority Applications (1)

Application Number: CN202011105767.2A
Priority Date / Filing Date: 2020-10-15
Title: Queue resource adjusting method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number: CN112231100A
Publication Date: 2021-01-15

Family

ID=74118984

Country Status (1)

CN: CN112231100A (en)


Legal Events

  • PB01: Publication
  • SE01: Entry into force of request for substantive examination